doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1606.03622 | 29 | On GEO and ATIS, the copying mechanism helps significantly: it improves test accuracy by 10.4 percentage points on GEO and 6.4 points on ATIS. However, on OVERNIGHT, adding the copying mechanism actually makes our model perform slightly worse. This result is somewhat expected, as the OVERNIGHT dataset contains a very small number of distinct entities. It is also notable that both systems surpass the previous best system on OVERNIGHT by a wide margin.
We choose to use the copying mechanism in all subsequent experiments, as it has a large advantage in realistic settings where there are many distinct entities in the world. The concurrent work of Gu et al. (2016) and Gulcehre et al. (2016), both of whom propose similar copying mechanisms, provides additional evidence for the utility of copying on a wide range of NLP tasks.
# 5.4 Main Results
² The method of Liang et al. (2011) is not comparable to ours, as they used a seed lexicon mapping words to predicates. We explicitly avoid using such prior knowledge in our system.
For our main results, we train our model with a variety of data recombination strategies on all three datasets. These results are summarized in Tables 2 and 3. We compare our system to the baseline of not using any data recombination, as well as to state-of-the-art systems on all three datasets. | 1606.03622#29 | Data Recombination for Neural Semantic Parsing | Modeling crisp logical regularities is crucial in semantic parsing, making it
difficult for neural models with no task-specific prior knowledge to achieve
good results. In this paper, we introduce data recombination, a novel framework
for injecting such prior knowledge into a model. From the training data, we
induce a high-precision synchronous context-free grammar, which captures
important conditional independence properties commonly found in semantic
parsing. We then train a sequence-to-sequence recurrent network (RNN) model
with a novel attention-based copying mechanism on datapoints sampled from this
grammar, thereby teaching the model about these structural properties. Data
recombination improves the accuracy of our RNN model on three semantic parsing
datasets, leading to new state-of-the-art performance on the standard GeoQuery
dataset for models with comparable supervision. | http://arxiv.org/pdf/1606.03622 | Robin Jia, Percy Liang | cs.CL | ACL 2016 | null | cs.CL | 20160611 | 20160611 | [] |
1606.03622 | 30 | We find that data recombination consistently improves accuracy across the three domains we evaluated on, and that the strongest results come from composing multiple strategies. Combining ABSWHOLEPHRASES, ABSENTITIES, and CONCAT-2 yields a 4.3 percentage point improvement over the baseline without data recombination on GEO, and an average of 1.7 percentage points on OVERNIGHT. In fact, on GEO, we achieve test accuracy of 89.3%, which surpasses the previous state-of-the-art, excluding Liang et al. (2011), which used a seed lexicon for predicates. On ATIS, we experiment with concatenating more than 2 examples, to make up for the fact that we cannot apply ABSWHOLEPHRASES, which generates longer examples. We obtain a test accuracy of 83.3 with ABSENTITIES composed with CONCAT-3, which beats the baseline by 7 percentage points and is competitive with the state-of-the-art. | 1606.03622#30 | Data Recombination for Neural Semantic Parsing | Modeling crisp logical regularities is crucial in semantic parsing, making it
difficult for neural models with no task-specific prior knowledge to achieve
good results. In this paper, we introduce data recombination, a novel framework
for injecting such prior knowledge into a model. From the training data, we
induce a high-precision synchronous context-free grammar, which captures
important conditional independence properties commonly found in semantic
parsing. We then train a sequence-to-sequence recurrent network (RNN) model
with a novel attention-based copying mechanism on datapoints sampled from this
grammar, thereby teaching the model about these structural properties. Data
recombination improves the accuracy of our RNN model on three semantic parsing
datasets, leading to new state-of-the-art performance on the standard GeoQuery
dataset for models with comparable supervision. | http://arxiv.org/pdf/1606.03622 | Robin Jia, Percy Liang | cs.CL | ACL 2016 | null | cs.CL | 20160611 | 20160611 | [] |
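As a concrete illustration of the recombination strategies referenced in these chunks, here is a minimal sketch of CONCAT-k style concatenation together with a crude entity-abstraction step over (utterance, logical form) pairs. It is a hypothetical simplification for illustration only, not the induced-SCFG sampling procedure the paper describes; the toy examples, the entity lexicon, and the separator token are all assumptions.

```python
import random

# Toy training examples: (utterance, logical form) pairs.
seed_data = [
    ("what states border texas", "(borders (state texas))"),
    ("what is the capital of ohio", "(capital (state ohio))"),
]

# Known entities and their types, used for the ABSENTITIES-like step.
entity_lexicon = {"texas": "state", "ohio": "state"}


def abstract_entities(x, y):
    """Replace each known entity with a typed placeholder in both x and y."""
    for entity, etype in entity_lexicon.items():
        placeholder = f"<{etype}>"
        x = x.replace(entity, placeholder)
        y = y.replace(entity, placeholder)
    return x, y


def concat_k(data, k=2, n_samples=4):
    """CONCAT-k style recombination: glue k independently drawn examples together."""
    out = []
    for _ in range(n_samples):
        pairs = [random.choice(data) for _ in range(k)]
        x = " </s> ".join(x for x, _ in pairs)  # assumed sentence-separator token
        y = " ".join(y for _, y in pairs)
        out.append((x, y))
    return out


augmented = concat_k(seed_data, k=2) + [abstract_entities(x, y) for x, y in seed_data]
for x, y in augmented:
    print(x, "=>", y)
```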
1606.03622 | 31 | Data recombination without copying. For completeness, we also investigated the effects of data recombination on the model without attention-based copying. We found that recombination helped significantly on GEO and ATIS, but hurt the model slightly on OVERNIGHT. On GEO, the best data recombination strategy yielded test accuracy of 82.9%, for a gain of 8.3 percentage points over the baseline with no copying and no recombination; on ATIS, data recombination gives test accuracies as high as 74.6%, a 4.7 point gain over the same baseline. However, no data recombination strategy improved average test accuracy on OVERNIGHT; the best one resulted in a 0.3 percentage point decrease in test accuracy. We hypothesize that data recombination helps less on OVERNIGHT in general because the space of possible logical forms is very limited, making it more like a large multiclass classification task. Therefore, it is less important for the model to learn good compositional representations that generalize to new logical forms at test time.
ours, as they used a seed lexicon mapping words to predicates. We explicitly avoid using such prior knowledge in our system. | 1606.03622#31 | Data Recombination for Neural Semantic Parsing | Modeling crisp logical regularities is crucial in semantic parsing, making it
difficult for neural models with no task-specific prior knowledge to achieve
good results. In this paper, we introduce data recombination, a novel framework
for injecting such prior knowledge into a model. From the training data, we
induce a high-precision synchronous context-free grammar, which captures
important conditional independence properties commonly found in semantic
parsing. We then train a sequence-to-sequence recurrent network (RNN) model
with a novel attention-based copying mechanism on datapoints sampled from this
grammar, thereby teaching the model about these structural properties. Data
recombination improves the accuracy of our RNN model on three semantic parsing
datasets, leading to new state-of-the-art performance on the standard GeoQuery
dataset for models with comparable supervision. | http://arxiv.org/pdf/1606.03622 | Robin Jia, Percy Liang | cs.CL | ACL 2016 | null | cs.CL | 20160611 | 20160611 | [] |
1606.03622 | 32 | ours, as they used a seed lexicon mapping words to predicates. We explicitly avoid using such prior knowledge in our system.
| BASKETBALL | BLOCKS | CALENDAR | HOUSING | PUBLICATIONS | RECIPES | RESTAURANTS | SOCIAL | Avg. |
|---|---|---|---|---|---|---|---|---|
| 46.3 | 41.9 | 74.4 | 54.0 | 59.0 | 70.8 | 75.9 | 48.2 | 58.8 |
| 85.2 | 58.1 | 78.0 | 71.4 | 76.4 | 79.6 | 76.2 | 81.4 | 75.8 |
| 86.7 | 60.2 | 78.0 | 65.6 | 73.9 | 77.3 | 79.5 | 81.3 | 75.3 |
| 86.7 | 55.9 | 79.2 | 69.8 | 76.4 | 77.8 | 80.7 | 80.9 | 75.9 |
| 84.7 | 60.7 | 75.6 | 69.8 | 74.5 | 80.1 | 79.5 | 80.8 | 75.7 |
| 85.2 | 54.1 | 78.6 | 67.2 | 73.9 | 79.6 | 81.9 | 82.1 | 75.3 |
| 87.5 | 60.2 | 81.0 | 72.5 | 78.3 | 81.0 | 79.5 | 79.6 | 77.5 |
Table 3: Test accuracy using different data recombination strategies on the OVERNIGHT tasks. | 1606.03622#32 | Data Recombination for Neural Semantic Parsing | Modeling crisp logical regularities is crucial in semantic parsing, making it
difficult for neural models with no task-specific prior knowledge to achieve
good results. In this paper, we introduce data recombination, a novel framework
for injecting such prior knowledge into a model. From the training data, we
induce a high-precision synchronous context-free grammar, which captures
important conditional independence properties commonly found in semantic
parsing. We then train a sequence-to-sequence recurrent network (RNN) model
with a novel attention-based copying mechanism on datapoints sampled from this
grammar, thereby teaching the model about these structural properties. Data
recombination improves the accuracy of our RNN model on three semantic parsing
datasets, leading to new state-of-the-art performance on the standard GeoQuery
dataset for models with comparable supervision. | http://arxiv.org/pdf/1606.03622 | Robin Jia, Percy Liang | cs.CL | ACL 2016 | null | cs.CL | 20160611 | 20160611 | [] |
1606.03622 | 33 | Table 3: Test accuracy using different data recombination strategies on the OVERNIGHT tasks.
Depth-2 (same length): x: "rel:12 of rel:17 of ent:14"  y: ( _rel:12 ( _rel:17 _ent:14 ) )
Depth-4 (longer): x: "rel:23 of rel:36 of rel:38 of rel:10 of ent:05"  y: ( _rel:23 ( _rel:36 ( _rel:38 ( _rel:10 _ent:05 ) ) ) )
Figure 5: A sample of our artificial data.
[Figure 6 plot: test accuracy vs. number of additional examples (0-500) for four conditions: same length/independent, longer/independent, same length/recombinant, longer/recombinant.]
as well. In comparison, applying ABSENTITIES alone, which generates examples of the same length as those in the original dataset, was generally less effective. | 1606.03622#33 | Data Recombination for Neural Semantic Parsing | Modeling crisp logical regularities is crucial in semantic parsing, making it
difficult for neural models with no task-specific prior knowledge to achieve
good results. In this paper, we introduce data recombination, a novel framework
for injecting such prior knowledge into a model. From the training data, we
induce a high-precision synchronous context-free grammar, which captures
important conditional independence properties commonly found in semantic
parsing. We then train a sequence-to-sequence recurrent network (RNN) model
with a novel attention-based copying mechanism on datapoints sampled from this
grammar, thereby teaching the model about these structural properties. Data
recombination improves the accuracy of our RNN model on three semantic parsing
datasets, leading to new state-of-the-art performance on the standard GeoQuery
dataset for models with comparable supervision. | http://arxiv.org/pdf/1606.03622 | Robin Jia, Percy Liang | cs.CL | ACL 2016 | null | cs.CL | 20160611 | 20160611 | [] |
1606.03622 | 34 | as well. In comparison, applying ABSENTITIES alone, which generates examples of the same length as those in the original dataset, was generally less effective.
We conducted additional experiments on artificial data to investigate the importance of adding longer, harder examples. We experimented with adding new examples via data recombination, as well as adding new independent examples (e.g. to simulate the acquisition of more training data). We constructed a simple world containing a set of entities and a set of binary relations. For any n, we can generate a set of depth-n examples, which involve the composition of n relations applied to a single entity. Example data points are shown in Figure 5. We train our model on various datasets, then test it on a set of 500 randomly chosen depth-2 examples. The model always has access to a small seed training set of 100 depth-2 examples. We then add one of four types of examples to the training set:
⢠Same length, independent: New randomly chosen depth-2 examples.3
⢠Longer, independent: Randomly chosen depth-4 examples.
Figure 6: The results of our artificial data experiments. We see that the model learns more from longer examples than from same-length examples. | 1606.03622#34 | Data Recombination for Neural Semantic Parsing | Modeling crisp logical regularities is crucial in semantic parsing, making it
difficult for neural models with no task-specific prior knowledge to achieve
good results. In this paper, we introduce data recombination, a novel framework
for injecting such prior knowledge into a model. From the training data, we
induce a high-precision synchronous context-free grammar, which captures
important conditional independence properties commonly found in semantic
parsing. We then train a sequence-to-sequence recurrent network (RNN) model
with a novel attention-based copying mechanism on datapoints sampled from this
grammar, thereby teaching the model about these structural properties. Data
recombination improves the accuracy of our RNN model on three semantic parsing
datasets, leading to new state-of-the-art performance on the standard GeoQuery
dataset for models with comparable supervision. | http://arxiv.org/pdf/1606.03622 | Robin Jia, Percy Liang | cs.CL | ACL 2016 | null | cs.CL | 20160611 | 20160611 | [] |
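The depth-n construction described in this chunk is easy to make concrete. The generator below reproduces the format of the Figure 5 samples; it is a hypothetical re-implementation, and the identifier ranges and zero-padding are assumptions based on the printed examples.

```python
import random


def make_example(depth, n_relations=40, n_entities=20):
    """Build one depth-n artificial example: depth relations applied to one entity."""
    rels = [f"rel:{random.randrange(n_relations):02d}" for _ in range(depth)]
    ent = f"ent:{random.randrange(n_entities):02d}"
    # Utterance: "rel:a of rel:b of ... of ent:e"
    x = " of ".join(rels + [ent])
    # Logical form: nested application, innermost argument is the entity.
    y = f"_{ent}"
    for r in reversed(rels):
        y = f"( _{r} {y} )"
    return x, y


random.seed(0)
print(make_example(depth=2))  # same shape as the depth-2 sample in Figure 5
print(make_example(depth=4))  # same shape as the depth-4 sample
```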
1606.03622 | 35 | Figure 6: The results of our artificial data experiments. We see that the model learns more from longer examples than from same-length examples.
⢠Same length, recombinant: Depth-2 exam- ples sampled from the grammar induced by applying ABSENTITIES to the seed dataset.
# 5.5 Effect of Longer Examples
⢠Longer, recombinant: Depth-4 examples sampled from the grammar induced by apply- ing ABSWHOLEPHRASES followed by AB- SENTITIES to the seed dataset.
Interestingly, strategies like ABSWHOLEPHRASES and CONCAT-2 help the model even though the resulting recombinant examples are generally not in the support of the test distribution. In particular, these recombinant examples are on average longer than those in the actual dataset, which makes them harder for the attention-based model. For every domain, our best accuracy numbers involved some form of concatenation, and often involved ABSWHOLEPHRASES
To maintain consistency between the independent and recombinant experiments, we fix the recombinant examples across all epochs, instead of resampling at every epoch. In Figure 6, we plot accuracy on the test set versus the number of additional examples added of each of these four types. As | 1606.03622#35 | Data Recombination for Neural Semantic Parsing | Modeling crisp logical regularities is crucial in semantic parsing, making it
difficult for neural models with no task-specific prior knowledge to achieve
good results. In this paper, we introduce data recombination, a novel framework
for injecting such prior knowledge into a model. From the training data, we
induce a high-precision synchronous context-free grammar, which captures
important conditional independence properties commonly found in semantic
parsing. We then train a sequence-to-sequence recurrent network (RNN) model
with a novel attention-based copying mechanism on datapoints sampled from this
grammar, thereby teaching the model about these structural properties. Data
recombination improves the accuracy of our RNN model on three semantic parsing
datasets, leading to new state-of-the-art performance on the standard GeoQuery
dataset for models with comparable supervision. | http://arxiv.org/pdf/1606.03622 | Robin Jia, Percy Liang | cs.CL | ACL 2016 | null | cs.CL | 20160611 | 20160611 | [] |
1606.03622 | 36 | ³ Technically, these are not completely independent, as we sample these new examples without replacement. The same applies to the longer "independent" examples.
expected, independent examples are more helpful than the recombinant ones, but both help the model improve considerably. In addition, we see that even though the test dataset only has short examples, adding longer examples helps the model more than adding shorter ones, in both the independent and recombinant cases. These results underscore the importance of training on longer, harder examples.
# 6 Discussion
In this paper, we have presented a novel framework we term data recombination, in which we generate new training examples from a high-precision generative model induced from the original training dataset. We have demonstrated its effectiveness in improving the accuracy of a sequence-to-sequence RNN model on three semantic parsing datasets, using a synchronous context-free grammar as our generative model. | 1606.03622#36 | Data Recombination for Neural Semantic Parsing | Modeling crisp logical regularities is crucial in semantic parsing, making it
difficult for neural models with no task-specific prior knowledge to achieve
good results. In this paper, we introduce data recombination, a novel framework
for injecting such prior knowledge into a model. From the training data, we
induce a high-precision synchronous context-free grammar, which captures
important conditional independence properties commonly found in semantic
parsing. We then train a sequence-to-sequence recurrent network (RNN) model
with a novel attention-based copying mechanism on datapoints sampled from this
grammar, thereby teaching the model about these structural properties. Data
recombination improves the accuracy of our RNN model on three semantic parsing
datasets, leading to new state-of-the-art performance on the standard GeoQuery
dataset for models with comparable supervision. | http://arxiv.org/pdf/1606.03622 | Robin Jia, Percy Liang | cs.CL | ACL 2016 | null | cs.CL | 20160611 | 20160611 | [] |
1606.03622 | 37 | There has been growing interest in applying neural networks to semantic parsing and related tasks. Dong and Lapata (2016) concurrently developed an attention-based RNN model for semantic parsing, although they did not use data recombination. Grefenstette et al. (2014) proposed a non-recurrent neural model for semantic parsing, though they did not run experiments. Mei et al. (2016) use an RNN model to perform a related task of instruction following.
Our proposed attention-based copying mechanism bears a strong resemblance to two models that were developed independently by other groups. Gu et al. (2016) apply a very similar copying mechanism to text summarization and single-turn dialogue generation. Gulcehre et al. (2016) propose a model that decides at each step whether to write from a "shortlist" vocabulary or copy from the input, and report improvements on machine translation and text summarization. Another piece of related work is Luong et al. (2015b), who train a neural machine translation system to copy rare words, relying on an external system to generate alignments. | 1606.03622#37 | Data Recombination for Neural Semantic Parsing | Modeling crisp logical regularities is crucial in semantic parsing, making it
difficult for neural models with no task-specific prior knowledge to achieve
good results. In this paper, we introduce data recombination, a novel framework
for injecting such prior knowledge into a model. From the training data, we
induce a high-precision synchronous context-free grammar, which captures
important conditional independence properties commonly found in semantic
parsing. We then train a sequence-to-sequence recurrent network (RNN) model
with a novel attention-based copying mechanism on datapoints sampled from this
grammar, thereby teaching the model about these structural properties. Data
recombination improves the accuracy of our RNN model on three semantic parsing
datasets, leading to new state-of-the-art performance on the standard GeoQuery
dataset for models with comparable supervision. | http://arxiv.org/pdf/1606.03622 | Robin Jia, Percy Liang | cs.CL | ACL 2016 | null | cs.CL | 20160611 | 20160611 | [] |
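All of the copying mechanisms compared in this chunk combine a write-from-vocabulary distribution with a copy-from-input distribution. The numpy sketch below shows one common way to mix the two at a single decoding step, normalizing generation and copy scores jointly; it is a generic illustration rather than the exact parameterization of any of the cited models, and every array name is an assumption.

```python
import numpy as np


def copy_augmented_distribution(vocab_logits, attention_scores, input_token_ids, vocab_size):
    """Mix a generate-from-vocabulary softmax with attention-based copy scores.

    vocab_logits:     (V,) scores over the output vocabulary
    attention_scores: (T,) unnormalized scores over the T input positions
    input_token_ids:  (T,) vocabulary id of the token at each input position
    """
    # One softmax over the concatenation of both score vectors, so that
    # generating and copying compete for probability mass directly.
    joint = np.concatenate([vocab_logits, attention_scores])
    joint = np.exp(joint - joint.max())
    joint /= joint.sum()

    p_generate, p_copy = joint[:vocab_size], joint[vocab_size:]
    # Probability of copying input position i is credited to that token's id.
    p_out = p_generate.copy()
    np.add.at(p_out, input_token_ids, p_copy)
    return p_out  # (V,) distribution over output tokens


V, T = 10, 4
dist = copy_augmented_distribution(
    vocab_logits=np.random.randn(V),
    attention_scores=np.random.randn(T),
    input_token_ids=np.array([3, 7, 7, 1]),
    vocab_size=V,
)
print(dist.sum())  # 1.0
```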
1606.03622 | 38 | Prior work has explored using paraphrasing for data augmentation on NLP tasks. Zhang et al. (2015) augment their data by swapping out words for synonyms from WordNet. Wang and Yang (2015) use a similar strategy, but identify similar words and phrases based on cosine distance between vector space embeddings. Unlike our data
recombination strategies, these techniques only change inputs x, while keeping the labels y fixed. Additionally, these paraphrasing-based transformations can be described in terms of grammar induction, so they can be incorporated into our framework.
In data recombination, data generated by a high-precision generative model is used to train a second, domain-general model. Generative oversampling (Liu et al., 2007) learns a generative model in a multiclass classification setting, then uses it to generate additional examples from rare classes in order to combat label imbalance. Uptraining (Petrov et al., 2010) uses data labeled by an accurate but slow model to train a computationally cheaper second model. Vinyals et al. (2015b) generate a large dataset of constituency parse trees by taking sentences that multiple existing systems parse in the same way, and train a neural model on this dataset. | 1606.03622#38 | Data Recombination for Neural Semantic Parsing | Modeling crisp logical regularities is crucial in semantic parsing, making it
difficult for neural models with no task-specific prior knowledge to achieve
good results. In this paper, we introduce data recombination, a novel framework
for injecting such prior knowledge into a model. From the training data, we
induce a high-precision synchronous context-free grammar, which captures
important conditional independence properties commonly found in semantic
parsing. We then train a sequence-to-sequence recurrent network (RNN) model
with a novel attention-based copying mechanism on datapoints sampled from this
grammar, thereby teaching the model about these structural properties. Data
recombination improves the accuracy of our RNN model on three semantic parsing
datasets, leading to new state-of-the-art performance on the standard GeoQuery
dataset for models with comparable supervision. | http://arxiv.org/pdf/1606.03622 | Robin Jia, Percy Liang | cs.CL | ACL 2016 | null | cs.CL | 20160611 | 20160611 | [] |
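For contrast with data recombination, the paraphrasing-style augmentation described in this chunk rewrites only the input side. A minimal sketch, with a hand-written synonym table standing in for WordNet or embedding-based neighbours (both the table and the swap probability are assumptions):

```python
import random

synonyms = {"movie": ["film"], "big": ["large", "huge"]}  # stand-in for WordNet / embedding neighbours


def paraphrase_augment(x, y, p=0.5):
    """Swap words in x for synonyms; the label y is left unchanged."""
    tokens = [random.choice(synonyms[t]) if t in synonyms and random.random() < p else t
              for t in x.split()]
    return " ".join(tokens), y


print(paraphrase_augment("show me a big movie theater", "find(theater, size=big)"))
```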
1606.03622 | 39 | Some of our induced grammars generate examples that are not in the test distribution, but nonetheless aid in generalization. Related work has also explored the idea of training on altered or out-of-domain data, often interpreting it as a form of regularization. Dropout training has been shown to be a form of adaptive regularization (Hinton et al., 2012; Wager et al., 2013). Guu et al. (2015) showed that encouraging a knowledge base completion model to handle longer path queries acts as a form of structural regularization.
Language is a blend of crisp regularities and soft relationships. Our work takes RNNs, which excel at modeling soft phenomena, and uses a highly structured tool, synchronous context-free grammars, to infuse them with an understanding of crisp structure. We believe this paradigm for simultaneously modeling the soft and hard aspects of language should have broader applicability beyond semantic parsing.
Acknowledgments This work was supported by the NSF Graduate Research Fellowship under Grant No. DGE-114747, and the DARPA Communicating with Computers (CwC) program under ARO prime contract no. W911NF-15-1-0462. | 1606.03622#39 | Data Recombination for Neural Semantic Parsing | Modeling crisp logical regularities is crucial in semantic parsing, making it
difficult for neural models with no task-specific prior knowledge to achieve
good results. In this paper, we introduce data recombination, a novel framework
for injecting such prior knowledge into a model. From the training data, we
induce a high-precision synchronous context-free grammar, which captures
important conditional independence properties commonly found in semantic
parsing. We then train a sequence-to-sequence recurrent network (RNN) model
with a novel attention-based copying mechanism on datapoints sampled from this
grammar, thereby teaching the model about these structural properties. Data
recombination improves the accuracy of our RNN model on three semantic parsing
datasets, leading to new state-of-the-art performance on the standard GeoQuery
dataset for models with comparable supervision. | http://arxiv.org/pdf/1606.03622 | Robin Jia, Percy Liang | cs.CL | ACL 2016 | null | cs.CL | 20160611 | 20160611 | [] |
1606.03622 | 40 | Reproducibility. All code, data, and experiments are available on the CodaLab platform at https://worksheets.codalab.org/worksheets/0x50757a37779b485f89012e4ba03b6f4f/.
# References
[Artzi and Zettlemoyer2013a] Y. Artzi and L. Zettlemoyer. 2013a. UW SPF: The University of Washington semantic parsing framework. arXiv preprint arXiv:1311.3011.
[Artzi and Zettlemoyer2013b] Y. Artzi and L. Zettlemoyer. 2013b. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics (TACL), 1:49-62.
[Bahdanau et al.2014] D. Bahdanau, K. Cho, and Y. Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
[Berant et al.2013] J. Berant, A. Chou, R. Frostig, and P. Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Empirical Methods in Natural Language Processing (EMNLP). | 1606.03622#40 | Data Recombination for Neural Semantic Parsing | Modeling crisp logical regularities is crucial in semantic parsing, making it
difficult for neural models with no task-specific prior knowledge to achieve
good results. In this paper, we introduce data recombination, a novel framework
for injecting such prior knowledge into a model. From the training data, we
induce a high-precision synchronous context-free grammar, which captures
important conditional independence properties commonly found in semantic
parsing. We then train a sequence-to-sequence recurrent network (RNN) model
with a novel attention-based copying mechanism on datapoints sampled from this
grammar, thereby teaching the model about these structural properties. Data
recombination improves the accuracy of our RNN model on three semantic parsing
datasets, leading to new state-of-the-art performance on the standard GeoQuery
dataset for models with comparable supervision. | http://arxiv.org/pdf/1606.03622 | Robin Jia, Percy Liang | cs.CL | ACL 2016 | null | cs.CL | 20160611 | 20160611 | [] |
1606.03622 | 41 | [Bergstra et al.2010] J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio. 2010. Theano: a CPU and GPU math expression compiler. In Python for Scientific Computing Conference.
[Clarke et al.2010] J. Clarke, D. Goldwasser, M. Chang, and D. Roth. 2010. Driving semantic parsing from the world's response. In Computational Natural Language Learning (CoNLL), pages 18-27.
[Dong and Lapata2016] L. Dong and M. Lapata. 2016. Language to logical form with neural attention. In Association for Computational Linguistics (ACL).
[Dyer et al.2015] C. Dyer, M. Ballesteros, W. Ling, A. Matthews, and N. A. Smith. 2015. Transition-based dependency parsing with stack long short-term memory. In Association for Computational Linguistics (ACL).
[Grefenstette et al.2014] E. Grefenstette, P. Blunsom, N. de Freitas, and K. M. Hermann. 2014. A deep architecture for semantic parsing. In ACL Workshop on Semantic Parsing, pages 22-27. | 1606.03622#41 | Data Recombination for Neural Semantic Parsing | Modeling crisp logical regularities is crucial in semantic parsing, making it
difficult for neural models with no task-specific prior knowledge to achieve
good results. In this paper, we introduce data recombination, a novel framework
for injecting such prior knowledge into a model. From the training data, we
induce a high-precision synchronous context-free grammar, which captures
important conditional independence properties commonly found in semantic
parsing. We then train a sequence-to-sequence recurrent network (RNN) model
with a novel attention-based copying mechanism on datapoints sampled from this
grammar, thereby teaching the model about these structural properties. Data
recombination improves the accuracy of our RNN model on three semantic parsing
datasets, leading to new state-of-the-art performance on the standard GeoQuery
dataset for models with comparable supervision. | http://arxiv.org/pdf/1606.03622 | Robin Jia, Percy Liang | cs.CL | ACL 2016 | null | cs.CL | 20160611 | 20160611 | [] |
1606.03622 | 42 | [Gu et al.2016] J. Gu, Z. Lu, H. Li, and V. O. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Association for Computational Linguistics (ACL).
[Gulcehre et al.2016] C. Gulcehre, S. Ahn, R. Nallapati, B. Zhou, and Y. Bengio. 2016. Pointing the unknown words. In Association for Computational Linguistics (ACL).
[Guu et al.2015] K. Guu, J. Miller, and P. Liang. 2015. Traversing knowledge graphs in vector space. In Empirical Methods in Natural Language Processing (EMNLP).
[Hinton et al.2012] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580.
[Hochreiter and Schmidhuber1997] S. Hochreiter and J. Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780. | 1606.03622#42 | Data Recombination for Neural Semantic Parsing | Modeling crisp logical regularities is crucial in semantic parsing, making it
difficult for neural models with no task-specific prior knowledge to achieve
good results. In this paper, we introduce data recombination, a novel framework
for injecting such prior knowledge into a model. From the training data, we
induce a high-precision synchronous context-free grammar, which captures
important conditional independence properties commonly found in semantic
parsing. We then train a sequence-to-sequence recurrent network (RNN) model
with a novel attention-based copying mechanism on datapoints sampled from this
grammar, thereby teaching the model about these structural properties. Data
recombination improves the accuracy of our RNN model on three semantic parsing
datasets, leading to new state-of-the-art performance on the standard GeoQuery
dataset for models with comparable supervision. | http://arxiv.org/pdf/1606.03622 | Robin Jia, Percy Liang | cs.CL | ACL 2016 | null | cs.CL | 20160611 | 20160611 | [] |
1606.03622 | 43 | [Jaitly and Hinton2013] N. Jaitly and G. E. Hinton. 2013. Vocal tract length perturbation (VTLP) improves speech recognition. In International Conference on Machine Learning (ICML).
[Krizhevsky et al.2012] A. Krizhevsky, I. Sutskever, and G. E. Hinton. 2012. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 1097-1105.
[Kushman and Barzilay2013] N. Kushman and R. Barzilay. 2013. Using semantic unification to generate regular expressions from natural language. In Human Language Technology and North American Association for Computational Linguistics (HLT/NAACL), pages 826-836.
[Kwiatkowski et al.2010] T. Kwiatkowski, L. Zettlemoyer, S. Goldwater, and M. Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higher-order unification. In Empirical Methods in Natural Language Processing (EMNLP), pages 1223-1233. | 1606.03622#43 | Data Recombination for Neural Semantic Parsing | Modeling crisp logical regularities is crucial in semantic parsing, making it
difficult for neural models with no task-specific prior knowledge to achieve
good results. In this paper, we introduce data recombination, a novel framework
for injecting such prior knowledge into a model. From the training data, we
induce a high-precision synchronous context-free grammar, which captures
important conditional independence properties commonly found in semantic
parsing. We then train a sequence-to-sequence recurrent network (RNN) model
with a novel attention-based copying mechanism on datapoints sampled from this
grammar, thereby teaching the model about these structural properties. Data
recombination improves the accuracy of our RNN model on three semantic parsing
datasets, leading to new state-of-the-art performance on the standard GeoQuery
dataset for models with comparable supervision. | http://arxiv.org/pdf/1606.03622 | Robin Jia, Percy Liang | cs.CL | ACL 2016 | null | cs.CL | 20160611 | 20160611 | [] |
1606.03622 | 44 | [Kwiatkowski et al.2011] T. Kwiatkowski, L. Zettlemoyer, S. Goldwater, and M. Steedman. 2011. Lexical generalization in CCG grammar induction for semantic parsing. In Empirical Methods in Natural Language Processing (EMNLP), pages 1512-1523.
[Liang et al.2011] P. Liang, M. I. Jordan, and D. Klein. 2011. Learning dependency-based compositional semantics. In Association for Computational Linguistics (ACL), pages 590-599.
[Liu et al.2007] A. Liu, J. Ghosh, and C. Martin. 2007. Generative oversampling for mining imbalanced datasets. In International Conference on Data Mining (DMIN).
[Luong et al.2015a] M. Luong, H. Pham, and C. D. Manning. 2015a. Effective approaches to attention-based neural machine translation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1412-1421. | 1606.03622#44 | Data Recombination for Neural Semantic Parsing | Modeling crisp logical regularities is crucial in semantic parsing, making it
difficult for neural models with no task-specific prior knowledge to achieve
good results. In this paper, we introduce data recombination, a novel framework
for injecting such prior knowledge into a model. From the training data, we
induce a high-precision synchronous context-free grammar, which captures
important conditional independence properties commonly found in semantic
parsing. We then train a sequence-to-sequence recurrent network (RNN) model
with a novel attention-based copying mechanism on datapoints sampled from this
grammar, thereby teaching the model about these structural properties. Data
recombination improves the accuracy of our RNN model on three semantic parsing
datasets, leading to new state-of-the-art performance on the standard GeoQuery
dataset for models with comparable supervision. | http://arxiv.org/pdf/1606.03622 | Robin Jia, Percy Liang | cs.CL | ACL 2016 | null | cs.CL | 20160611 | 20160611 | [] |
1606.03622 | 45 | [Luong et al.2015b] M. Luong, I. Sutskever, Q. V. Le, O. Vinyals, and W. Zaremba. 2015b. Addressing the rare word problem in neural machine translation. In Association for Computational Linguistics (ACL), pages 11-19.
[Mei et al.2016] H. Mei, M. Bansal, and M. R. Walter. 2016. Listen, attend, and walk: Neural mapping of
navigational instructions to action sequences. In Association for the Advancement of Artificial Intelligence (AAAI).
[Petrov et al.2010] S. Petrov, P. Chang, M. Ringgaard, and H. Alshawi. 2010. Uptraining for accurate deterministic question parsing. In Empirical Methods in Natural Language Processing (EMNLP).
[Poon2013] H. Poon. 2013. Grounded unsupervised semantic parsing. In Association for Computational Linguistics (ACL).
[Sutskever et al.2014] I. Sutskever, O. Vinyals, and Q. V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 3104-3112. | 1606.03622#45 | Data Recombination for Neural Semantic Parsing | Modeling crisp logical regularities is crucial in semantic parsing, making it
difficult for neural models with no task-specific prior knowledge to achieve
good results. In this paper, we introduce data recombination, a novel framework
for injecting such prior knowledge into a model. From the training data, we
induce a high-precision synchronous context-free grammar, which captures
important conditional independence properties commonly found in semantic
parsing. We then train a sequence-to-sequence recurrent network (RNN) model
with a novel attention-based copying mechanism on datapoints sampled from this
grammar, thereby teaching the model about these structural properties. Data
recombination improves the accuracy of our RNN model on three semantic parsing
datasets, leading to new state-of-the-art performance on the standard GeoQuery
dataset for models with comparable supervision. | http://arxiv.org/pdf/1606.03622 | Robin Jia, Percy Liang | cs.CL | ACL 2016 | null | cs.CL | 20160611 | 20160611 | [] |
1606.03622 | 46 | [Vinyals et al.2015a] O. Vinyals, M. Fortunato, and N. Jaitly. 2015a. Pointer networks. In Advances in Neural Information Processing Systems (NIPS), pages 2674-2682.
[Vinyals et al.2015b] O. Vinyals, L. Kaiser, T. Koo, S. Petrov, I. Sutskever, and G. Hinton. 2015b. Grammar as a foreign language. In Advances in Neural Information Processing Systems (NIPS), pages 2755-2763.
[Wager et al.2013] S. Wager, S. I. Wang, and P. Liang. 2013. Dropout training as adaptive regularization. In Advances in Neural Information Processing Sys- tems (NIPS).
[Wang and Yang2015] W. Y. Wang and D. Yang. 2015. That's so annoying!!!: A lexical and frame-semantic embedding based data augmentation approach to automatic categorization of annoying behaviors using #petpeeve tweets. In Empirical Methods in Natural Language Processing (EMNLP).
[Wang et al.2015] Y. Wang, J. Berant, and P. Liang. 2015. Building a semantic parser overnight. In Association for Computational Linguistics (ACL). | 1606.03622#46 | Data Recombination for Neural Semantic Parsing | Modeling crisp logical regularities is crucial in semantic parsing, making it
difficult for neural models with no task-specific prior knowledge to achieve
good results. In this paper, we introduce data recombination, a novel framework
for injecting such prior knowledge into a model. From the training data, we
induce a high-precision synchronous context-free grammar, which captures
important conditional independence properties commonly found in semantic
parsing. We then train a sequence-to-sequence recurrent network (RNN) model
with a novel attention-based copying mechanism on datapoints sampled from this
grammar, thereby teaching the model about these structural properties. Data
recombination improves the accuracy of our RNN model on three semantic parsing
datasets, leading to new state-of-the-art performance on the standard GeoQuery
dataset for models with comparable supervision. | http://arxiv.org/pdf/1606.03622 | Robin Jia, Percy Liang | cs.CL | ACL 2016 | null | cs.CL | 20160611 | 20160611 | [] |
1606.03622 | 47 | [Wong and Mooney2006] Y. W. Wong and R. J. Mooney. 2006. Learning for semantic parsing with statistical machine translation. In North American Association for Computational Linguistics (NAACL), pages 439-446.
[Wong and Mooney2007] Y. W. Wong and R. J. Mooney. 2007. Learning synchronous grammars for semantic parsing with lambda calculus. In Association for Computational Linguistics (ACL), pages 960-967.
[Zelle and Mooney1996] M. Zelle and R. J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Association for the Advancement of Artificial Intelligence (AAAI), pages 1050-1055.
[Zettlemoyer and Collins2005] L. S. Zettlemoyer and M. Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Uncertainty in Artificial Intelligence (UAI), pages 658-666.
[Zettlemoyer and Collins2007] L. S. Zettlemoyer and M. Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL), pages 678-687. | 1606.03622#47 | Data Recombination for Neural Semantic Parsing | Modeling crisp logical regularities is crucial in semantic parsing, making it
difficult for neural models with no task-specific prior knowledge to achieve
good results. In this paper, we introduce data recombination, a novel framework
for injecting such prior knowledge into a model. From the training data, we
induce a high-precision synchronous context-free grammar, which captures
important conditional independence properties commonly found in semantic
parsing. We then train a sequence-to-sequence recurrent network (RNN) model
with a novel attention-based copying mechanism on datapoints sampled from this
grammar, thereby teaching the model about these structural properties. Data
recombination improves the accuracy of our RNN model on three semantic parsing
datasets, leading to new state-of-the-art performance on the standard GeoQuery
dataset for models with comparable supervision. | http://arxiv.org/pdf/1606.03622 | Robin Jia, Percy Liang | cs.CL | ACL 2016 | null | cs.CL | 20160611 | 20160611 | [] |
1606.03152 | 1 | In this paper, we propose to use deep policy networks which are trained with an advantage actor-critic method for statistically optimised dialogue systems. First, we show that, on summary state and action spaces, deep Reinforcement Learning (RL) outperforms Gaussian Processes methods. Summary state and action spaces lead to good performance but require pre-engineering effort, RL knowledge, and domain expertise. In order to remove the need to define such summary spaces, we show that deep RL can also be trained efficiently on the original state and action spaces. Dialogue systems based on partially observable Markov decision processes are known to require many dialogues to train, which makes them unappealing for practical deployment. We show that a deep RL method based on an actor-critic architecture can exploit a small amount of data very efficiently. Indeed, with only a few hundred dialogues collected with a handcrafted policy, the actor-critic deep learner is considerably bootstrapped from a combination of supervised and batch RL. In addition, convergence to an optimal policy is significantly sped up compared to other deep | 1606.03152#1 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep policy networks which are trained with
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Processes methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner is considerably
bootstrapped from a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset. | http://arxiv.org/pdf/1606.03152 | Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman | cs.CL, cs.AI | SIGDial 2016 (Submitted: May 2016; Accepted: Jun 30, 2016) | Proceedings of the SIGDIAL 2016 Conference, pages 101--110, Los
Angeles, USA, 13-15 September 2016. Association for Computational Linguistics | cs.CL | 20160610 | 20160912 | [
{
"id": "1511.08099"
},
{
"id": "1601.04574"
},
{
"id": "1509.06461"
}
] |
1606.03152 | 3 | # Introduction
The statistical optimization of dialogue management in dialogue systems through Reinforcement Learning (RL) has been an active thread of research for more than two decades (Levin et al., 1997; Lemon and Pietquin, 2007; Laroche et al., 2010; Gašić et al., 2012; Daubigney et al., 2012). Dialogue management has been successfully modelled as a Partially Observable Markov Decision Process (POMDP) (Williams and Young, 2007; Gašić et al., 2012), which leads to systems that can learn from data and which are robust to noise. In this context, a dialogue between a user and a dialogue system is framed as a sequential process where, at each turn, the system has to act based on what it has understood so far of the user's utterances. | 1606.03152#3 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep policy networks which are trained with
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Processes methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner is considerably
bootstrapped from a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset. | http://arxiv.org/pdf/1606.03152 | Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman | cs.CL, cs.AI | SIGDial 2016 (Submitted: May 2016; Accepted: Jun 30, 2016) | Proceedings of the SIGDIAL 2016 Conference, pages 101--110, Los
Angeles, USA, 13-15 September 2016. Association for Computational Linguistics | cs.CL | 20160610 | 20160912 | [
{
"id": "1511.08099"
},
{
"id": "1601.04574"
},
{
"id": "1509.06461"
}
] |
1606.03152 | 4 | Unfortunately, POMDP-based dialogue managers have been unfit for online deployment because they typically require several thousands of dialogues for training (Gašić et al., 2010, 2012). Nevertheless, recent work has shown that it is possible to train a POMDP-based dialogue system on just a few hundred dialogues corresponding to online interactions with users (Gašić et al., 2013). However, in order to do so, pre-engineering efforts, prior RL knowledge, and domain expertise must be applied. Indeed, summary state and action spaces must be used and the set of actions must be restricted depending on the current state so that notoriously bad actions are prohibited. | 1606.03152#4 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep policy networks which are trained with
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Processes methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner is considerably
bootstrapped from a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset. | http://arxiv.org/pdf/1606.03152 | Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman | cs.CL, cs.AI | SIGDial 2016 (Submitted: May 2016; Accepted: Jun 30, 2016) | Proceedings of the SIGDIAL 2016 Conference, pages 101--110, Los
Angeles, USA, 13-15 September 2016. Association for Computational Linguistics | cs.CL | 20160610 | 20160912 | [
{
"id": "1511.08099"
},
{
"id": "1601.04574"
},
{
"id": "1509.06461"
}
] |
1606.03152 | 5 | In order to alleviate the need for a summary state space, deep RL (Mnih et al., 2013) has recently been applied to dialogue management (Cuayáhuitl et al., 2015) in the context of negotiations. It was shown that deep RL performed significantly better than other heuristic or supervised approaches. The authors performed learning over a large action space of 70 actions and they also had to use restricted action sets in order to learn efficiently over this space. Besides, deep RL was not compared to other RL methods, which we do in this paper. In (Cuayáhuitl, 2016), a simplistic implementation of deep Q Networks is presented,
again with no comparison to other RL methods. | 1606.03152#5 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep policy networks which are trained with
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Processes methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner is considerably
bootstrapped from a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset. | http://arxiv.org/pdf/1606.03152 | Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman | cs.CL, cs.AI | SIGDial 2016 (Submitted: May 2016; Accepted: Jun 30, 2016) | Proceedings of the SIGDIAL 2016 Conference, pages 101--110, Los
Angeles, USA, 13-15 September 2016. Association for Computational Linguistics | cs.CL | 20160610 | 20160912 | [
{
"id": "1511.08099"
},
{
"id": "1601.04574"
},
{
"id": "1509.06461"
}
] |
1606.03152 | 6 | again with no comparison to other RL methods.
In this paper, we propose to efficiently alleviate the need for summary spaces and restricted actions using deep RL. We analyse four deep RL models: Deep Q Networks (DQN) (Mnih et al., 2013), Double DQN (DDQN) (van Hasselt et al., 2015), Deep Advantage Actor-Critic (DA2C) (Sutton et al., 2000) and a version of DA2C initialized with supervised learning (TDA2C)¹ (similar idea to Silver et al. (2016)). All models are trained on a restaurant-seeking domain. We use the Dialogue State Tracking Challenge 2 (DSTC2) dataset to train an agenda-based user simulator (Schatzmann and Young, 2009) for online learning and to perform batch RL and supervised learning. | 1606.03152#6 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep policy networks which are trained with
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Processes methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner is considerably
bootstrapped from a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset. | http://arxiv.org/pdf/1606.03152 | Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman | cs.CL, cs.AI | SIGDial 2016 (Submitted: May 2016; Accepted: Jun 30, 2016) | Proceedings of the SIGDIAL 2016 Conference, pages 101--110, Los
Angeles, USA, 13-15 September 2016. Association for Computational Linguistics | cs.CL | 20160610 | 20160912 | [
{
"id": "1511.08099"
},
{
"id": "1601.04574"
},
{
"id": "1509.06461"
}
] |
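Of the four learners listed in this chunk, the actor-critic variants update a policy network with an advantage-weighted policy gradient. The numpy sketch below shows one such update for a tiny softmax policy with a linear state-value critic; it is a generic single-step illustration under assumed shapes and learning rates, not the paper's DA2C/TDA2C implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, N_ACTIONS = 8, 5
W_pi = rng.normal(scale=0.1, size=(N_ACTIONS, STATE_DIM))  # policy (actor) weights
w_v = np.zeros(STATE_DIM)                                   # value (critic) weights
ALPHA_PI, ALPHA_V, GAMMA = 0.01, 0.05, 0.99


def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()


def a2c_step(s, a, r, s_next, done):
    """One advantage actor-critic update from a single (s, a, r, s') transition."""
    global W_pi, w_v
    v_s = w_v @ s
    v_next = 0.0 if done else w_v @ s_next
    advantage = r + GAMMA * v_next - v_s  # TD error used as the advantage estimate

    # Critic: move V(s) toward the one-step bootstrapped target.
    w_v = w_v + ALPHA_V * advantage * s

    # Actor: for a softmax policy, grad log pi(a|s) = (one_hot(a) - pi) outer s.
    pi = softmax(W_pi @ s)
    one_hot = np.zeros(N_ACTIONS)
    one_hot[a] = 1.0
    W_pi = W_pi + ALPHA_PI * advantage * np.outer(one_hot - pi, s)


s = rng.normal(size=STATE_DIM)
a2c_step(s, a=2, r=1.0, s_next=rng.normal(size=STATE_DIM), done=False)
```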
1606.03152 | 7 | We first show that, on summary state and action spaces, deep RL converges faster than Gaussian Processes SARSA (GPSARSA) (Gašić et al., 2010). Then we show that deep RL enables us to work on the original state and action spaces. Although GPSARSA has also been tried on original state space (Gašić et al., 2012), it is extremely slow in terms of wall-clock time due to its growing kernel evaluations. Indeed, contrary to methods such as GPSARSA, deep RL performs efficient generalization over the state space and memory requirements do not increase with the number of experiments. On the simple domain specified by DSTC2, we do not need to restrict the actions in order to learn efficiently. In order to remove the need for restricted actions in more complex domains, we advocate for the use of TDA2C and supervised learning as a pre-training step. We show that supervised learning on a small set of dialogues (only 706 dialogues) significantly bootstraps TDA2C and enables us | 1606.03152#7 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep policy networks which are trained with
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Processes methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner is considerably
bootstrapped from a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset. | http://arxiv.org/pdf/1606.03152 | Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman | cs.CL, cs.AI | SIGDial 2016 (Submitted: May 2016; Accepted: Jun 30, 2016) | Proceedings of the SIGDIAL 2016 Conference, pages 101--110, Los
Angeles, USA, 13-15 September 2016. Association for Computational Linguistics | cs.CL | 20160610 | 20160912 | [
{
"id": "1511.08099"
},
{
"id": "1601.04574"
},
{
"id": "1509.06461"
}
] |
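The bootstrapping described in this chunk amounts to a first, supervised stage: fit the policy network to state-action pairs logged from the handcrafted policy before any RL updates. A minimal sketch of that stage for the same kind of toy softmax policy as in the previous snippet (the corpus format, shapes, and learning rate are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
STATE_DIM, N_ACTIONS = 8, 5
W_pi = rng.normal(scale=0.1, size=(N_ACTIONS, STATE_DIM))


def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()


def supervised_pretrain(corpus, epochs=10, lr=0.1):
    """Cross-entropy fit of the policy to (state, action) pairs from logged dialogues."""
    global W_pi
    for _ in range(epochs):
        for s, a in corpus:
            pi = softmax(W_pi @ s)
            one_hot = np.zeros(N_ACTIONS)
            one_hot[a] = 1.0
            # Gradient of the cross-entropy loss for a softmax policy.
            W_pi -= lr * np.outer(pi - one_hot, s)


# A few hundred dialogues would yield a corpus of turns like this in practice.
corpus = [(rng.normal(size=STATE_DIM), int(rng.integers(N_ACTIONS))) for _ in range(100)]
supervised_pretrain(corpus)
```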
1606.03152 | 9 | In Section 2 we brieï¬y review POMDP, RL and GPSARSA. The value-based deep RL models in- vestigated in this paper (DQN and DDQN) are de- scribed in Section 3. Policy networks and DA2C are discussed in Section 4. We then introduce the two-stage training of DA2C in Section 5. Experi- mental results are presented in Section 6. Finally, Section 7 concludes the paper and makes sugges- tions for future research.
1Teacher DA2C
# 2 Preliminaries | 1606.03152#9 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep policy networks which are trained with
The reinforcement learning problem consists of an environment (the user) and an agent (the system) (Sutton and Barto, 1998). The environment is described as a set of continuous or discrete states S and at each state s ∈ S, the system can perform an action from an action space A(s). The actions can be continuous, but in our case they are assumed to be discrete and finite. At time t, as a consequence of an action At = a ∈ A(s), the state transitions from St = s to St+1 = s′ ∈ S. In addition, a reward signal Rt+1 = R(St, At, St+1) ∈ ℝ provides feedback on the quality of the transition.2 The agent's task is to maximize at each state the expected discounted sum of rewards received after visiting this state. For this purpose, value functions are computed. The action-state value function Q is defined as:
Qπ(St, At) = Eπ[Rt+1 + γRt+2 + γ²Rt+3 + ... | St = s, At = a],   (1)
where γ is a discount factor in [0, 1]. In this equation, the policy π specifies the system's behaviour, i.e., it describes the agent's action selection process at each state. A policy can be a deterministic mapping π(s) = a, which specifies the action a to be selected when state s is met. On the other hand, a stochastic policy provides a probability distribution over the action space at each state: π(a|s) = P[At = a|St = s].
The agent's goal is to find a policy that maximizes the Q-function at each state.
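As a concrete illustration of the discounted return inside the expectation of Equation (1), the short helper below (a hypothetical function, not part of the paper) computes the discounted sum of rewards received after a given dialogue turn.

```python
def discounted_return(rewards, gamma=0.99):
    """Discounted sum R_{t+1} + gamma*R_{t+2} + gamma^2*R_{t+3} + ... for one episode.

    `rewards` lists the rewards received after the state of interest was visited.
    """
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g


# Example: a 3-turn dialogue with a per-turn penalty of -0.03 and a final success reward of +1.
print(discounted_return([-0.03, -0.03, 1.0], gamma=0.99))
```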
It is important to note that here the system does not have direct access to the state s. Instead, it sees this state through a perception process which typically includes an Automatic Speech Recognition (ASR) step, a Natural Language Understanding (NLU) step, and a State Tracking (ST) step. This perception process injects noise into the state of the system, and it has been shown that modelling dialogue management as a POMDP helps to overcome this noise (Williams and Young, 2007; Young et al., 2013).
Within the POMDP framework, the state at time t, St, is not directly observable. Instead, the system has access to a noisy observation Ot.3 A POMDP is a tuple (S, A, P, R, O, Z, γ, b0) where S is the state space, A is the action space, P is the function encoding the transition probability: Pa(s, s′) = P(St+1 = s′ | St = s, At = a), R is the reward function, O is the observation space, Z encodes the observation probabilities: Za(s, o) = P(Ot = o | St = s, At = a), γ is a discount factor, and b0 is an initial belief state. The belief state is a distribution over states. Starting from b0, the state tracker maintains and updates the belief state according to the observations perceived during the dialogue. The dialogue manager then operates on this belief state. Consequently, the value functions as well as the policy of the agent are computed on the belief states Bt.
2 In this paper, upper-case letters are used for random variables, lower-case letters for non-random values (known or unknown), and calligraphic letters for sets.
3 Here, the representation of the user's goal and the user's utterances.
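For concreteness, a belief state of the kind the state tracker maintains can be thought of as one categorical distribution per constraint slot plus a scalar probability per request slot. The minimal container below is an illustrative assumption (slot names and the interface are hypothetical), not the tracker used in the paper.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class BeliefState:
    # For each constraint slot, the tracker's distribution over values,
    # e.g. {"food": [("italian", 0.85), ("indian", 0.10)]}.
    constraints: Dict[str, List[Tuple[str, float]]] = field(default_factory=dict)
    # For each request slot, the probability that the user asked for it, e.g. {"area": 0.95}.
    requests: Dict[str, float] = field(default_factory=dict)

    def top_two(self, slot: str) -> Tuple[float, float]:
        """Probabilities of the two most likely values of a constraint slot."""
        probs = sorted((p for _, p in self.constraints.get(slot, [])), reverse=True)
        probs += [0.0, 0.0]
        return probs[0], probs[1]
```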
In this paper, we use GPSARSA as a baseline as it has been proved to be a successful algorithm for training POMDP-based dialogue managers (Engel et al., 2005; Gašić et al., 2010). Formally, the Q-function is modelled as a Gaussian process, entirely defined by a mean and a kernel: Q(B, A) ∼ GP(m, k((B, A), (B, A))). The mean is usually initialized at 0 and it is then jointly updated with the covariance based on the system's observations (i.e., the visited belief states and actions, and the rewards). In order to avoid intractability in the number of experiments, we use kernel span sparsification (Engel et al., 2005). This technique consists of approximating the kernel on a dictionary of linearly independent belief states. This dictionary is incrementally built during learning. Kernel span sparsification requires setting a threshold on the precision to which the kernel is computed. As discussed in Section 6, this threshold needs to be fine-tuned for a good tradeoff between precision and performance.
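The sparsification step can be illustrated with the standard approximate-linear-dependence test: a new point is added to the dictionary only if it cannot be approximated, up to a threshold ν, by a linear combination of the points already stored. The numpy sketch below (RBF kernel, random feature vectors) is only meant to show this test under those assumptions, not to reproduce GPSARSA.

```python
import numpy as np


def rbf_kernel(x, y, length_scale=1.0):
    return float(np.exp(-np.sum((x - y) ** 2) / (2.0 * length_scale ** 2)))


def maybe_add_to_dictionary(dictionary, x, nu=0.1):
    """Approximate-linear-dependence test used for kernel span sparsification.

    `dictionary` holds the feature vectors (belief-action points) already kept;
    x is added only if its approximation residual exceeds the threshold nu.
    """
    if not dictionary:
        dictionary.append(x)
        return True
    K = np.array([[rbf_kernel(xi, xj) for xj in dictionary] for xi in dictionary])
    k_x = np.array([rbf_kernel(xi, x) for xi in dictionary])
    a = np.linalg.solve(K + 1e-8 * np.eye(len(dictionary)), k_x)
    residual = rbf_kernel(x, x) - k_x @ a
    if residual > nu:
        dictionary.append(x)
        return True
    return False


# Toy usage: stream a few random belief-action feature vectors.
rng = np.random.default_rng(0)
dictionary = []
for _ in range(20):
    maybe_add_to_dictionary(dictionary, rng.normal(size=4), nu=0.1)
print(len(dictionary), "points kept in the dictionary")
```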
# 3 Value-Based Deep Reinforcement Learning
Broadly speaking, there are two main streams of methodologies in the RL literature: value approximation and policy gradients. As suggested by their names, the former tries to approximate the value function whereas the latter tries to directly approximate the policy. Approximations are necessary for large or continuous belief and action spaces.
Indeed, if the belief space is large or continuous it would not be possible to store a value for each state in a table, so generalization over the state space is necessary. In this context, some of the benefits of deep RL techniques are the following:
• Generalisation over the belief space is efficient and the need for summary spaces is eliminated, normally with considerably less wall-clock training time compared to GPSARSA, for example.
• Memory requirements are limited and can be determined in advance, unlike with methods such as GPSARSA.
• Deep architectures with several hidden layers can be efficiently used for complex tasks and environments.
# 3.1 Deep Q Networks
A Deep Q-Network (DQN) is a multi-layer neural network which maps a belief state Bt to the values of the possible actions At ∈ A(Bt = b) at that state, Qπ(Bt, At; wt), where wt is the weight vector of the neural network. Neural networks for the approximation of value functions have long been investigated (Bertsekas and Tsitsiklis, 1996). However, these methods were previously quite unstable (Mnih et al., 2013). In DQN, Mnih et al. (2013, 2015) proposed two techniques to overcome this instability, namely experience replay and the use of a target network. In experience replay, all the transitions are put in a finite pool D (Lin, 1993). Once the pool has reached its predefined maximum size, adding a new transition results in deleting the oldest transition in the pool. During training, a mini-batch of transitions is uniformly sampled from the pool, i.e., (Bt, At, Rt+1, Bt+1) ∼ U(D).
This method removes the instability arising from strong correlation between the subsequent transitions of an episode (a dialogue). Additionally, a target network with weight vector w− is used. This target network is similar to the Q-network except that its weights are only copied every τ steps from the Q-network, and remain fixed during all the other steps. The loss function for the Q-network at iteration t takes the following form:
Lt(wt) = E(Bt, At, Rt+1, Bt+1)∼U(D)[(Rt+1 + γ maxa′ Qπ(Bt+1, a′; w−t) − Qπ(Bt, At; wt))²].   (4)
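A minimal PyTorch sketch of one training step on the loss in Equation (4) is given below. The network sizes, optimiser, learning rate and replay-pool interface are illustrative assumptions rather than the configuration used in the paper, and terminal-turn handling is omitted for brevity.

```python
import random
from collections import deque

import torch
import torch.nn as nn

belief_dim, n_actions, gamma = 60, 11, 0.99


def make_q_net():
    return nn.Sequential(nn.Linear(belief_dim, 128), nn.ReLU(), nn.Linear(128, n_actions))


q_net = make_q_net()
target_net = make_q_net()
target_net.load_state_dict(q_net.state_dict())    # weights copied from the Q-network every tau steps
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
pool = deque(maxlen=10_000)                        # finite experience replay pool D


def dqn_update(batch_size=32):
    """One gradient step on the loss of Equation (4)."""
    if len(pool) < batch_size:
        return
    batch = random.sample(list(pool), batch_size)  # uniform sampling of (B_t, A_t, R_{t+1}, B_{t+1})
    b = torch.stack([t[0] for t in batch])
    a = torch.tensor([t[1] for t in batch])
    r = torch.tensor([t[2] for t in batch], dtype=torch.float32)
    b_next = torch.stack([t[3] for t in batch])
    q = q_net(b).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():                          # the target network stays fixed between copies
        target = r + gamma * target_net(b_next).max(dim=1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()


# Toy usage: push a few random transitions and take one update step.
for _ in range(64):
    pool.append((torch.rand(belief_dim), random.randrange(n_actions), -0.03, torch.rand(belief_dim)))
dqn_update()
```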
# 3.2 Double DQN: Overcoming Overestimation and Instability of DQN
The max operator in Equation 4 uses the same value network (i.e., the target network) to select actions and evaluate them. This increases the probability of overestimating the value of the state-action pairs (van Hasselt, 2010; van Hasselt et al., 2015). To see this more clearly, the target part of the loss in Equation 4 can be rewritten as follows:
Rt+1 + γ Qπ(Bt+1, argmaxa Qπ(Bt+1, a; w−t); w−t).
In this equation, the target network is used twice. Decoupling is possible by using the Q-network for action selection as follows (van Hasselt et al., 2015):
Rt+1 + γ Qπ(Bt+1, argmaxa Qπ(Bt+1, a; wt); w−t).
Then, similarly to DQN, the Q-network is trained using experience replay and the target network is updated every τ steps. This new version of DQN, called Double DQN (DDQN), uses the two value networks in a decoupled manner, and alleviates the overestimation issue of DQN. This generally results in a more stable learning process (van Hasselt et al., 2015).
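The only change with respect to the DQN sketch above is how the target is formed: the online Q-network selects the greedy action and the target network evaluates it. A hedged illustration, reusing the hypothetical q_net and target_net from the previous sketch:

```python
import torch


def dqn_target(r, b_next, target_net, gamma=0.99):
    # DQN: the target network both selects and evaluates the next action.
    with torch.no_grad():
        return r + gamma * target_net(b_next).max(dim=1).values


def ddqn_target(r, b_next, q_net, target_net, gamma=0.99):
    # DDQN: the online Q-network selects the action, the target network evaluates it.
    with torch.no_grad():
        a_star = q_net(b_next).argmax(dim=1, keepdim=True)
        return r + gamma * target_net(b_next).gather(1, a_star).squeeze(1)
```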
In the following section, we present deep RL models which perform policy search and output a stochastic policy rather than value approximation with a deterministic policy.
# 4 Policy Networks and Deep Advantage Actor-Critic (DA2C)
A policy network is a parametrized probabilistic mapping between belief and action spaces:
πθ(a|b) = π(a|b; θ) = P(At = a | Bt = b, θt = θ),
where θ is the parameter vector (the weight vector of a neural network).4
4 For parametrization, we use w for value networks and θ for policy networks.
In order to train policy networks, policy gradient algorithms have been developed (Williams, 1992; Sutton et al., 2000). Policy gradient algorithms are model-free methods which directly approximate the policy by parametrizing it. The parameters are learnt using a gradient-based optimization method.
We first need to define an objective function J that will lead the search for the parameters θ. This objective function defines policy quality. One way of defining it is to take the average over the rewards received by the agent. Another way is to compute the discounted sum of rewards for each trajectory, given that there is a designated start state. The policy gradient is then computed according to the Policy Gradient Theorem (Sutton et al., 2000).
Theorem 1 (Policy Gradient) For any differentiable policy πθ(b, a) and for the average reward or the start-state objective function, the policy gradient can be computed as
∇θJ(θ) = Eπθ[∇θ log πθ(a|b) Qπθ(b, a)].   (5)
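In practice, the expectation in Equation (5) is estimated from sampled belief-action pairs; with an automatic-differentiation library one typically minimises the negative of the Q-weighted log-probabilities. The PyTorch fragment below is a generic sketch of this estimator; the policy architecture and the source of the Q estimates are assumptions.

```python
import torch
import torch.nn as nn

policy_net = nn.Sequential(nn.Linear(60, 128), nn.ReLU(), nn.Linear(128, 11))
optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-3)


def policy_gradient_step(beliefs, actions, q_values):
    """One ascent step on Equation (5) from a batch of sampled (b, a, Q(b, a)) triples."""
    log_probs = torch.log_softmax(policy_net(beliefs), dim=1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = -(chosen * q_values.detach()).mean()    # minimising this follows the gradient in (5)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()


# Toy batch: 32 sampled turns with externally estimated Q values.
policy_gradient_step(torch.rand(32, 60), torch.randint(0, 11, (32,)), torch.rand(32))
```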
Policy gradient methods have been used successfully in different domains. Two recent examples are AlphaGo by DeepMind (Silver et al., 2016) and MazeBase by Facebook AI (Sukhbaatar et al., 2016).
One way to exploit Theorem 1 is to parametrize Qπθ(b, a) separately (with a parameter vector w) and learn the parameter vector during training in a similar way as in DQN. The trained Q-network can then be used for policy evaluation in Equation 5. Such algorithms are known in general as actor-critic algorithms, where the Q approximator is the critic and πθ is the actor (Sutton, 1984; Barto et al., 1990; Bhatnagar et al., 2009). This can be achieved with two separate deep neural networks: a Q-network and a policy network.
However, a direct use of Equation 5 with Q as critic is known to cause high variance (Williams, 1992). An important property of Equation 5 can be used in order to overcome this issue: subtracting any differentiable function Ba expressed over the belief space from Qπθ will not change the gradient. A good selection of Ba, which is called the baseline, can reduce the variance dramatically (Sutton and Barto, 1998). As a result, Equation 5 may be rewritten as follows:
∇θJ(θ) = Eπθ[∇θ log πθ(a|b) Ad(b, a)],   (6)
where Ad(b, a) = Qπθ(b, a) − Ba(b) is called the advantage function. A good baseline is the value function Vπθ, for which the advantage function becomes Ad(b, a) = Qπθ(b, a) − Vπθ(b). However, in this setting, we need to train two separate networks to parametrize Qπθ and Vπθ. A better approach is to use the TD error δ = Rt+1 + γVπθ(Bt+1) − Vπθ(Bt) as advantage function. It can be proved that the expected value of the TD error is Qπθ(b, a) − Vπθ(b). If the TD error is used, only one network is needed, to parametrize Vπθ(Bt) = Vπθ(Bt; wt). We call this network the value network. We can use a DQN-like method to train the value network using both experience replay and a target network. For a transition Bt = b, At = a, Rt+1 = r and Bt+1 = b′, the advantage function is calculated as in:
δt = r + γVπθ(b′; w−t) − Vπθ(b; wt).   (7)
Because the gradient in Equation 6 is weighted by the advantage function, it may become quite large. In fact, the advantage function may act as a large learning rate. This can cause the learning process to become unstable. To avoid this issue, we add L2 regularization to the policy objective function. We call this method Deep Advantage Actor-Critic (DA2C).
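Putting the pieces together, one DA2C update could look like the sketch below: the value network is regressed towards the bootstrapped target of Equation (7), the TD error serves as the advantage in Equation (6), and an L2 penalty on the policy weights plays the role of the regularisation just described. The choice of Adadelta follows the remark at the end of Section 5; layer sizes, the regularisation coefficient and the batch interface are illustrative assumptions.

```python
import torch
import torch.nn as nn

belief_dim, n_actions, gamma = 60, 11, 0.99
policy_net = nn.Sequential(nn.Linear(belief_dim, 128), nn.ReLU(), nn.Linear(128, n_actions))
value_net = nn.Sequential(nn.Linear(belief_dim, 128), nn.ReLU(), nn.Linear(128, 1))
value_target = nn.Sequential(nn.Linear(belief_dim, 128), nn.ReLU(), nn.Linear(128, 1))
value_target.load_state_dict(value_net.state_dict())   # copied from the value network every tau steps
policy_opt = torch.optim.Adadelta(policy_net.parameters())
value_opt = torch.optim.Adadelta(value_net.parameters())


def da2c_update(b, a, r, b_next, l2=1e-3):
    """One actor-critic update on a batch of transitions (B_t, A_t, R_{t+1}, B_{t+1})."""
    # Critic: regress V(b; w) towards r + gamma * V(b'; w^-), as in Equation (7).
    with torch.no_grad():
        td_target = r + gamma * value_target(b_next).squeeze(1)
    value_loss = nn.functional.mse_loss(value_net(b).squeeze(1), td_target)
    value_opt.zero_grad()
    value_loss.backward()
    value_opt.step()

    # Actor: TD error as the advantage of Equation (6), plus L2 regularisation.
    advantage = (td_target - value_net(b).squeeze(1)).detach()
    log_probs = torch.log_softmax(policy_net(b), dim=1)
    chosen = log_probs.gather(1, a.unsqueeze(1)).squeeze(1)
    policy_loss = -(chosen * advantage).mean()
    policy_loss = policy_loss + l2 * sum((p ** 2).sum() for p in policy_net.parameters())
    policy_opt.zero_grad()
    policy_loss.backward()
    policy_opt.step()


# Toy batch of 32 transitions with the per-turn penalty as reward.
da2c_update(torch.rand(32, belief_dim), torch.randint(0, n_actions, (32,)),
            torch.full((32,), -0.03), torch.rand(32, belief_dim))
```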
In the next section, we show how this architecture can be used to efficiently exploit a small set of handcrafted data.
# 5 Two-stage Training of the Policy Network
By definition, the policy network provides a probability distribution over the action space. As a result, and in contrast to value-based methods such as DQN, a policy network can also be trained with direct supervised learning (Silver et al., 2016). Supervised training of RL agents has been well studied in the context of Imitation Learning (IL). In IL, an agent learns to reproduce the behaviour of an expert. Supervised learning of the policy was one of the first techniques used to solve this problem (Pomerleau, 1989; Amit and Mataric, 2002). This direct type of imitation learning requires that the learning agent and the expert share the same characteristics. If this condition is not met, IL can be done at the level of the value functions rather than the policy directly (Piot et al., 2015). In this paper, the data that we use (DSTC2) was collected with a dialogue system similar to the one we train, so in our case the demonstrator and the learner share the same characteristics.
Similarly to Silver et al. (2016), here, we initialize both the policy network and the value network on the data. The policy network is trained by minimising the categorical cross-entropy between the predicted action distribution and the demonstrated actions. The value network is trained directly through RL rather than IL to give more flexibility in the kind of data we can use. Indeed, our goal is to collect a small number of dialogues and learn from them. IL usually assumes that the data corresponds to expert policies. However, dialogues collected with a handcrafted policy or in a Wizard-of-Oz (WoZ) setting often contain both optimal and sub-optimal dialogues, and RL can be used to learn from all of these dialogues. Supervised training can also be done on these dialogues, as we show in Section 6.
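The supervised stage therefore reduces to multi-class classification over the system acts observed in the corpus. A minimal PyTorch sketch is given below; it assumes the dialogues have already been converted into (belief vector, demonstrated action index) pairs, and the placeholder tensors stand in for that pre-processed corpus.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical pre-processed corpus: one (belief, action) pair per system turn.
beliefs = torch.rand(5000, 60)                 # placeholder belief vectors
actions = torch.randint(0, 11, (5000,))        # placeholder demonstrated action indices
loader = DataLoader(TensorDataset(beliefs, actions), batch_size=64, shuffle=True)

policy_net = nn.Sequential(nn.Linear(60, 128), nn.ReLU(), nn.Linear(128, 11))
optimizer = torch.optim.Adadelta(policy_net.parameters())
criterion = nn.CrossEntropyLoss()              # categorical cross-entropy on demonstrated acts

for epoch in range(5):
    for b, a in loader:
        loss = criterion(policy_net(b), a)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```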
Supervised actor-critic architectures following this idea have been proposed in the past (Benbrahim and Franklin, 1997; Si et al., 2004); the actor works together with a human supervisor to gain competence on its task even if the critic's estimations are poor. For instance, a human can help a robot move by providing the robot with valid actions. We advocate for the same kind of methods for dialogue systems. It is easy to collect a small number of high-quality dialogues and then use supervised learning on this data to teach the system valid actions. This also eliminates the need to define restricted action sets.
In all the methods above, Adadelta is used as the gradient-descent optimiser, which in our experiments works noticeably better than other methods such as Adagrad, Adam, and RMSProp.
# 6 Experiments
# 6.1 Comparison of DQN and GPSARSA
# 6.1.1 Experimental Protocol
In this section, as a first argument in favour of deep RL, we perform a comparison between GPSARSA and DQN on simulated dialogues. We trained an agenda-based user simulator which, at each dialogue turn, provides one or several dialogue act(s) in response to the latest machine act (Schatzmann et al., 2007; Schatzmann and Young, 2009). The dataset used for training this user simulator is the Dialogue State Tracking Challenge 2 (DSTC2) dataset (Henderson et al., 2014). State tracking is also trained on this dataset.
[Figure 1 appears here: two panels of learning curves plotting average dialogue length and average reward against the number of training dialogues (×1000).]
Figure 1: Comparison of different algorithms on simulated dialogues, without any pre-training. (a) Comparison of GPSARSA on summary spaces and DQN on summary (DQN) and original spaces (DQN-no-summary). (b) Comparison of DA2C, DQN and DDQN on original spaces.
DSTC2 includes dialogues with users who are searching for restaurants in Cambridge, UK.
In each dialogue, the user has a goal containing constraint slots and request slots. The constraint and request slots available in DSTC2 are listed in Appendix A. The constraints are the slots that the user has to provide to the system (for instance the user is looking for a specific type of food in a given area) and the requests are the slots that the user must receive from the system (for instance the user wants to know the address and phone number of the restaurant found by the system).
Similarly, the belief state is composed of two parts: constraints and requests. The constraint part includes the probabilities of the top two values for each constraint slot as returned by the state tracker (the value might be empty with a probability of zero if the slot has not been mentioned). The request part, on the other hand, includes the probability of each request slot. For instance the constraint part might be [food: (Italian, 0.85) (Indian, 0.1) (Not mentioned, 0.05)] and the request part might be [area: 0.95], meaning that the user is probably looking for an Italian restaurant and that he wants to know the area of the restaurant found by the system. To compare DQN to GPSARSA, we work on a summary state space (Gašić et al., 2012, 2013). Each constraint is mapped to a one-hot vector, with 1 corresponding to the tuple in the grid vector gc = [(1, 0), (.8, .2), (.6, .2), (.6, .4), (.4, .4)] that minimizes the Euclidean distance to the top two probabilities.
Similarly, each request slot is mapped to a one-hot vector according to the grid gr = [1, .8, .6, .4, 0]. The final belief vector, known as the summary state, is defined as the concatenation of the constraint and request one-hot vectors. Each summary state is a binary vector of length 60 (12 one-hot vectors of length 5) and the total number of states is 5¹².
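The grid mapping described above can be written down directly. The numpy helper below snaps each constraint slot's top-two probabilities to the nearest tuple of gc and each request probability to the nearest value of gr, then concatenates the resulting one-hot vectors; the toy slot counts in the usage line are an assumption (the paper uses 12 slots in total).

```python
import numpy as np

GC = [(1.0, 0.0), (0.8, 0.2), (0.6, 0.2), (0.6, 0.4), (0.4, 0.4)]   # constraint grid gc
GR = [1.0, 0.8, 0.6, 0.4, 0.0]                                      # request grid gr


def one_hot(index, size=5):
    v = np.zeros(size)
    v[index] = 1.0
    return v


def summary_state(constraint_top2, request_probs):
    """Map tracker outputs to the binary summary state (one one-hot block per slot)."""
    blocks = []
    for p1, p2 in constraint_top2:                                   # e.g. (0.85, 0.10) for 'food'
        i = int(np.argmin([np.hypot(p1 - g1, p2 - g2) for g1, g2 in GC]))
        blocks.append(one_hot(i))
    for p in request_probs:                                          # e.g. 0.95 for 'area'
        blocks.append(one_hot(int(np.argmin([abs(p - g) for g in GR]))))
    return np.concatenate(blocks)


# Toy example with 2 constraint slots and 1 request slot.
print(summary_state([(0.85, 0.10), (0.55, 0.35)], [0.95]))
```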
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Processes methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner is considerably
bootstrapped from a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset. | http://arxiv.org/pdf/1606.03152 | Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman | cs.CL, cs.AI | SIGDial 2016 (Submitted: May 2016; Accepted: Jun 30, 2016) | Proceedings of the SIGDIAL 2016 Conference, pages 101--110, Los
Angeles, USA, 13-15 September 2016. Association for Computational Linguistics | cs.CL | 20160610 | 20160912 | [
{
"id": "1511.08099"
},
{
"id": "1601.04574"
},
{
"id": "1509.06461"
}
] |
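
The grid mapping described in chunks 30 and 31 can be sketched in a few lines of NumPy. This is an illustrative reconstruction rather than code from the paper; the function names and the exact split of the 12 slots are assumptions.

```python
import numpy as np

# Grids from the chunks above: probability tuples for constraint slots,
# single probabilities for request slots.
GRID_C = np.array([[1.0, 0.0], [0.8, 0.2], [0.6, 0.2], [0.6, 0.4], [0.4, 0.4]])
GRID_R = np.array([1.0, 0.8, 0.6, 0.4, 0.0])

def constraint_one_hot(top2):
    """One-hot over the 5 grid tuples, picking the tuple closest (in
    Euclidean distance) to the slot's top-two probabilities."""
    idx = int(np.argmin(np.linalg.norm(GRID_C - np.asarray(top2, dtype=float), axis=1)))
    vec = np.zeros(len(GRID_C))
    vec[idx] = 1.0
    return vec

def request_one_hot(prob):
    """One-hot over the 5 grid points, picking the point closest to the
    request slot's probability."""
    idx = int(np.argmin(np.abs(GRID_R - prob)))
    vec = np.zeros(len(GRID_R))
    vec[idx] = 1.0
    return vec

def summary_state(constraint_top2_probs, request_probs):
    """Concatenate the one-hot encoding of every slot; with 12 slots in
    total this yields the 60-dimensional binary summary state."""
    parts = [constraint_one_hot(t) for t in constraint_top2_probs]
    parts += [request_one_hot(p) for p in request_probs]
    return np.concatenate(parts)

# Example: the food slot with (Italian, 0.85), (Indian, 0.10) maps to grid tuple (0.8, 0.2).
print(constraint_one_hot([0.85, 0.10]))  # -> [0. 1. 0. 0. 0.]
```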
We also work on a summary action space and we use the act types listed in Table 1 in Appendix A. We add the necessary slot information as a post-processing step. For example, the request act means that the system wants to request a slot from the user, e.g. request(food). In this case, the selection of the slot is based on min-max probability, i.e., the most ambiguous slot (which is the slot we want to request) is assumed to be the one for which the value with maximum probability has the minimum probability compared to the most certain values of the other slots. Note that this heuristic approach to compute the summary state and action spaces is a requirement to make GPSARSA tractable; it is a serious limitation in general and should be avoided.
As reward, we use a normalized scheme with a reward of +1 if the dialogue finishes successfully
before 30 turns,5 a reward of -1 if the dialogue is not successful after 30 turns, and a reward of -0.03 for each turn. A reward of -1 is also distributed to the system if the user hangs up. In our settings, the user simulator hangs up every time the system proposes a restaurant which does not match at least one of his constraints. | 1606.03152#32 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep policy networks which are trained with
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Processes methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner is considerably
bootstrapped from a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset. | http://arxiv.org/pdf/1606.03152 | Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman | cs.CL, cs.AI | SIGDial 2016 (Submitted: May 2016; Accepted: Jun 30, 2016) | Proceedings of the SIGDIAL 2016 Conference, pages 101--110, Los
Angeles, USA, 13-15 September 2016. Association for Computational Linguistics | cs.CL | 20160610 | 20160912 | [
{
"id": "1511.08099"
},
{
"id": "1601.04574"
},
{
"id": "1509.06461"
}
] |
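
The reward scheme above reduces to a small function. A minimal sketch, assuming the per-turn penalty and the terminal reward are simply summed on the final turn (the chunk does not state this explicitly):

```python
def turn_reward(turn, dialogue_over, success, user_hung_up, max_turns=30):
    """Reward scheme described above: -0.03 per turn, +1 if the dialogue
    finishes successfully before the 30-turn limit, -1 if it is not
    successful after 30 turns or if the user hangs up."""
    reward = -0.03
    if user_hung_up:
        return reward - 1.0
    if dialogue_over:
        reward += 1.0 if (success and turn < max_turns) else -1.0
    return reward
```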
For the deep Q-network, a Multi-Layer Perceptron (MLP) is used with two fully connected hidden layers, each having a tanh activation. The output layer has no activation and it provides the value for each of the summary machine acts. The summary machine acts are mapped to original acts using the heuristics explained previously. Both algorithms are trained with 15000 dialogues. GPSARSA is trained with ε-softmax exploration, which, with probability 1 - ε, selects an action based on the logistic distribution P[a|b] = exp(Q(b,a)) / Σ_{a'} exp(Q(b,a')) and, with probability ε, selects an action in a uniformly random way. From our experiments, this exploration scheme works best in terms of both convergence rate and variance. For DQN, we use a simple ε-greedy exploration which, with probability ε (same ε as above), uniformly selects an action and, with probability 1 - ε, selects an action maximizing the Q-function. For both algorithms, ε is annealed to less than 0.1 over the course of training. In a second experiment, we remove both summary state and action spaces for DQN, | 1606.03152#33 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep policy networks which are trained with
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Processes methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner is considerably
bootstrapped from a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset. | http://arxiv.org/pdf/1606.03152 | Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman | cs.CL, cs.AI | SIGDial 2016 (Submitted: May 2016; Accepted: Jun 30, 2016) | Proceedings of the SIGDIAL 2016 Conference, pages 101--110, Los
Angeles, USA, 13-15 September 2016. Association for Computational Linguistics | cs.CL | 20160610 | 20160912 | [
{
"id": "1511.08099"
},
{
"id": "1601.04574"
},
{
"id": "1509.06461"
}
] |
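
Both exploration schemes described in chunk 33 operate on the vector of Q-values for the current belief state. A minimal NumPy sketch with illustrative names:

```python
import numpy as np

def softmax_exploration(q_values, epsilon, rng=None):
    """ε-softmax used for GPSARSA above: with probability 1 - ε, sample an
    action from the distribution P[a|b] proportional to exp(Q(b, a));
    with probability ε, act uniformly at random."""
    rng = rng or np.random.default_rng()
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    logits = np.asarray(q_values, dtype=float)
    probs = np.exp(logits - logits.max())      # subtract max for numerical stability
    probs /= probs.sum()
    return int(rng.choice(len(q_values), p=probs))

def epsilon_greedy(q_values, epsilon, rng=None):
    """ε-greedy used for DQN above: a uniformly random action with
    probability ε, otherwise the action maximizing Q."""
    rng = rng or np.random.default_rng()
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))
```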
less than 0.1 over the course of training. In a second experiment, we remove both summary state and action spaces for DQN, i.e., we do not perform the Euclidean-distance mapping as before but instead work directly on the probabilities themselves. Additionally, the state is augmented with the probability (returned by the state tracker) of each user act (see Table 2 in Appendix A), the dialogue turn, and the number of results returned by the database (0 if there was no query). Consequently, the state consists of 31 continuous values and two discrete values. The original action space is composed of 11 actions: offer6, select-food, request-area, select-pricerange, request-pricerange, request-food, expl-conf-area, expl-conf-food, expl-conf-pricerange, repeat. There | 1606.03152#34 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep policy networks which are trained with
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Processes methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner is considerably
bootstrapped from a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset. | http://arxiv.org/pdf/1606.03152 | Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman | cs.CL, cs.AI | SIGDial 2016 (Submitted: May 2016; Accepted: Jun 30, 2016) | Proceedings of the SIGDIAL 2016 Conference, pages 101--110, Los
Angeles, USA, 13-15 September 2016. Association for Computational Linguistics | cs.CL | 20160610 | 20160912 | [
{
"id": "1511.08099"
},
{
"id": "1601.04574"
},
{
"id": "1509.06461"
}
] |
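
For the second experiment, the network input is the raw tracker output rather than the grid-mapped summary state. A rough sketch of assembling that input; the exact feature ordering and dimensions are assumptions made for illustration:

```python
import numpy as np

def original_state(constraint_top2_probs, request_probs, user_act_probs,
                   turn, n_db_results):
    """Raw (non-summary) state described above: the tracker probabilities are
    kept as-is and augmented with the per-user-act probabilities, the dialogue
    turn, and the number of database results (0 if no query was made)."""
    continuous = np.concatenate([
        np.ravel(np.asarray(constraint_top2_probs, dtype=float)),  # top-2 probs per constraint slot
        np.asarray(request_probs, dtype=float),                    # one probability per request slot
        np.asarray(user_act_probs, dtype=float),                   # one probability per user act type
    ])
    discrete = np.array([float(turn), float(n_db_results)])
    return np.concatenate([continuous, discrete])
```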
1606.03152 | 35 | 5A dialogue is successful if the user retrieves all the request slots for a restaurant matching all the constraints of his goal.
6This act consists of proposing a restaurant to the user. In order to be consistent with the DSTC2 dataset, an offer always contains the values for all the constraints understood by the system, e.g. offer(name = Super Ramen, food = Japanese, price range = cheap).
is no post-processing via min-max selection anymore since the slot is part of the action, e.g., select-area.
The policies are evaluated after each 1000 training dialogues on 500 test dialogues without exploration. | 1606.03152#35 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep policy networks which are trained with
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Processes methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner is considerably
bootstrapped from a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset. | http://arxiv.org/pdf/1606.03152 | Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman | cs.CL, cs.AI | SIGDial 2016 (Submitted: May 2016; Accepted: Jun 30, 2016) | Proceedings of the SIGDIAL 2016 Conference, pages 101--110, Los
Angeles, USA, 13-15 September 2016. Association for Computational Linguistics | cs.CL | 20160610 | 20160912 | [
{
"id": "1511.08099"
},
{
"id": "1601.04574"
},
{
"id": "1509.06461"
}
] |
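
Footnote 5 gives a precise success criterion, which can be expressed directly as code. An illustrative sketch with hypothetical argument names:

```python
def dialogue_success(goal_constraints, goal_requests, offered_restaurant, informed_slots):
    """Success as defined in footnote 5 above: the offered restaurant matches
    every constraint of the user goal, and every requested slot was retrieved.

    goal_constraints:   dict slot -> value from the user goal
    goal_requests:      set of slots the user wants to know
    offered_restaurant: dict slot -> value of the proposed restaurant, or None
    informed_slots:     set of slots the system actually provided
    """
    if offered_restaurant is None:
        return False
    constraints_ok = all(offered_restaurant.get(slot) == value
                         for slot, value in goal_constraints.items())
    requests_ok = set(goal_requests) <= set(informed_slots)
    return constraints_ok and requests_ok
```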
1606.03152 | 36 | The policies are evaluated after each 1000 training dialogues on 500 test dialogues without exploration.
6.1.2 Results Figure 1 illustrates the performance of DQN compared to GPSARSA. In our experiments with GPSARSA we found that it was difficult to find a good tradeoff between precision and efficiency. Indeed, for low precision, the algorithm learned rapidly but did not reach optimal behaviour, whereas higher precision made learning extremely slow but resulted in better end-performance. On summary spaces, DQN outperforms GPSARSA in terms of convergence. Indeed, GPSARSA requires twice as many dialogues to converge. It is also worth mentioning here that the wall-clock training time of GPSARSA is considerably longer than the one of DQN due to kernel evaluation. The second experiment validates the fact that Deep RL can be efficiently trained directly on the belief state returned by the state tracker. Indeed, DQN on the original spaces performs as well as GPSARSA on the summary spaces.
In the next section, we train and compare the deep RL networks previously described on the original state and action spaces.
# 6.2 Comparison of the Deep RL Methods | 1606.03152#36 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep policy networks which are trained with
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Processes methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner is considerably
bootstrapped from a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset. | http://arxiv.org/pdf/1606.03152 | Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman | cs.CL, cs.AI | SIGDial 2016 (Submitted: May 2016; Accepted: Jun 30, 2016) | Proceedings of the SIGDIAL 2016 Conference, pages 101--110, Los
Angeles, USA, 13-15 September 2016. Association for Computational Linguistics | cs.CL | 20160610 | 20160912 | [
{
"id": "1511.08099"
},
{
"id": "1601.04574"
},
{
"id": "1509.06461"
}
] |
1606.03152 | 37 | In the next section, we train and compare the deep RL networks previously described on the original state and action spaces.
# 6.2 Comparison of the Deep RL Methods
6.2.1 Experimental Protocol Similarly to the previous example, we work on a restaurant domain and use the DSTC2 specifications. We use ε-greedy exploration for all four algorithms with ε starting at 0.5 and being linearly annealed at a rate of λ = 0.99995. To speed up the learning process, the actions select-pricerange, select-area, and select-food are excluded from exploration. Note that this set does not depend on the state and is meant for exploration only. All the actions can be performed by the system at any moment.
We derived two datasets from DSTC2. The first dataset contains the 2118 dialogues of DSTC2. We had these dialogues rated by a human expert, based on the quality of dialogue management and on a scale of 0 to 3. The second dataset only contains the dialogues with a rating of 3 (706 dialogues). The underlying assumption is that these dialogues correspond to optimal policies.
[Figure plot omitted: average rewards and average dialogue length vs. x1000 training dialogues for DDQN + Batch, DQN + Batch, and DA2C + Batch.] | 1606.03152#37 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep policy networks which are trained with
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Processes methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner is considerably
bootstrapped from a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset. | http://arxiv.org/pdf/1606.03152 | Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman | cs.CL, cs.AI | SIGDial 2016 (Submitted: May 2016; Accepted: Jun 30, 2016) | Proceedings of the SIGDIAL 2016 Conference, pages 101--110, Los
Angeles, USA, 13-15 September 2016. Association for Computational Linguistics | cs.CL | 20160610 | 20160912 | [
{
"id": "1511.08099"
},
{
"id": "1601.04574"
},
{
"id": "1509.06461"
}
] |
1606.03152 | 38 | [Figure plot omitted: average rewards and average dialogue length vs. x1000 training dialogues for DDQN + Batch, DQN + Batch, and DA2C + Batch.]
[Figure plot omitted: average rewards and average dialogue length vs. x1000 training dialogues for DA2C, BatchDA2C, SupFullBatchDA2C, and SupExptBatchDA2C.]
(a) Comparison of DA2C, DQN and DDQN after batch initialization.
(b) Comparison of DA2C and DA2C after batch initialization (batchDA2C), and TDA2C after supervised training on expert (SupExptBatchDA2C) and non-expert data (SupFullBatchDA2C).
Figure 2: Comparison of different algorithms on simulated dialogues, with pre-training. | 1606.03152#38 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep policy networks which are trained with
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Processes methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner is considerably
bootstrapped from a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset. | http://arxiv.org/pdf/1606.03152 | Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman | cs.CL, cs.AI | SIGDial 2016 (Submitted: May 2016; Accepted: Jun 30, 2016) | Proceedings of the SIGDIAL 2016 Conference, pages 101--110, Los
Angeles, USA, 13-15 September 2016. Association for Computational Linguistics | cs.CL | 20160610 | 20160912 | [
{
"id": "1511.08099"
},
{
"id": "1601.04574"
},
{
"id": "1509.06461"
}
] |
1606.03152 | 39 | Figure 2: Comparison of different algorithms on simulated dialogues, with pre-training.
We compare the convergence rates of the deep RL models in different settings. First, we compare DQN, DDQN and DA2C without any pre-training (Figure 1b). Then, we compare DQN, DDQN and TDA2C with an RL initialization on the DSTC2 dataset (Figure 2a). Finally, we focus on the advantage actor-critic models and compare DA2C, TDA2C, TDA2C with batch initialization on DSTC2, and TDA2C with batch initialization on the expert dialogues (Figure 2b).
of the dialogue acts chosen by the system were still appropriate, which explains that the system learns acceptable behavior from the entire dataset. This shows that supervised training, even when performed not only on optimal dialogues, makes learning much faster and relieves the need for restricted action sets. Valid actions are learnt from the dialogues and then RL exploits the good and bad dialogues to pursue training towards a high performing policy.
# 6.2.2 Results
# 7 Concluding Remarks | 1606.03152#39 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep policy networks which are trained with
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Processes methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner is considerably
bootstrapped from a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset. | http://arxiv.org/pdf/1606.03152 | Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman | cs.CL, cs.AI | SIGDial 2016 (Submitted: May 2016; Accepted: Jun 30, 2016) | Proceedings of the SIGDIAL 2016 Conference, pages 101--110, Los
Angeles, USA, 13-15 September 2016. Association for Computational Linguistics | cs.CL | 20160610 | 20160912 | [
{
"id": "1511.08099"
},
{
"id": "1601.04574"
},
{
"id": "1509.06461"
}
] |
1606.03152 | 40 | # 6.2.2 Results
# 7 Concluding Remarks
As expected, DDQN converges faster than DQN on all experiments. Figure 1b shows that, without any pre-training, DA2C is the one which converges the fastest (6000 dialogues vs. 10000 dialogues for the other models). Figure 2a gives consistent results and shows that, with initial training on the 2118 dialogues of DSTC2, TDA2C converges significantly faster than the other models. Figure 2b focuses on DA2C and TDA2C. Compared to batch training, supervised training on DSTC2 speeds up convergence by 2000 dialogues (3000 dialogues vs. 5000 dialogues). Interestingly, there does not seem to be much difference between supervised training on the expert data and on DSTC2. The expert data only consists of 706 dialogues out of 2118 dialogues. Our observation is that, in the non-expert data, many | 1606.03152#40 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep policy networks which are trained with
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Processes methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner is considerably
bootstrapped from a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset. | http://arxiv.org/pdf/1606.03152 | Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman | cs.CL, cs.AI | SIGDial 2016 (Submitted: May 2016; Accepted: Jun 30, 2016) | Proceedings of the SIGDIAL 2016 Conference, pages 101--110, Los
Angeles, USA, 13-15 September 2016. Association for Computational Linguistics | cs.CL | 20160610 | 20160912 | [
{
"id": "1511.08099"
},
{
"id": "1601.04574"
},
{
"id": "1509.06461"
}
] |
1606.03152 | 41 | In this paper, we used policy networks for dialogue systems and trained them in a two-stage fashion: supervised training and batch reinforcement learning followed by online reinforcement learning. An important feature of policy networks is that they directly provide a probability distribution over the action space, which enables supervised training. We compared the results with other deep reinforcement learning algorithms, namely Deep Q Networks and Double Deep Q Networks. The combination of supervised and reinforcement learning is the main benefit of our method, which paves the way for developing trainable end-to-end dialogue systems. Supervised training on a small dataset considerably bootstraps the learning process and can be used to significantly improve the
convergence rate of reinforcement learning in statistically optimised dialogue systems.
# References
R. Amit and M. Mataric. 2002. Learning movement sequences from demonstration. In Proc. Int. Conf. on Development and Learning. pages 203–208.
A. G. Barto, R. S. Sutton, and C. W. Anderson. 1990. In Artificial Neural Networks, chapter Neuronlike Adaptive Elements That Can Solve Difficult Learning Control Problems, pages 81–93. | 1606.03152#41 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep policy networks which are trained with
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Processes methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner is considerably
bootstrapped from a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset. | http://arxiv.org/pdf/1606.03152 | Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman | cs.CL, cs.AI | SIGDial 2016 (Submitted: May 2016; Accepted: Jun 30, 2016) | Proceedings of the SIGDIAL 2016 Conference, pages 101--110, Los
Angeles, USA, 13-15 September 2016. Association for Computational Linguistics | cs.CL | 20160610 | 20160912 | [
{
"id": "1511.08099"
},
{
"id": "1601.04574"
},
{
"id": "1509.06461"
}
] |
1606.03152 | 42 | H. Benbrahim and J. A. Franklin. 1997. Biped dynamic walking using reinforcement learning. Robotics and Autonomous Systems 22:283–302.
D. P. Bertsekas and J. Tsitsiklis. 1996. Neuro-Dynamic Programming. Athena Scientific.
S. Bhatnagar, R. Sutton, M. Ghavamzadeh, and M. Lee. 2009. Natural Actor-Critic Algorithms. Automatica 45(11).
H. Cuayáhuitl. 2016. SimpleDS: A simple deep reinforcement learning dialogue system. arXiv:1601.04574v1 [cs.AI].
H. Cuayáhuitl, S. Keizer, and O. Lemon. 2015. Strategic dialogue management via deep reinforcement learning. arXiv:1511.08099 [cs.AI].
L. Daubigney, M. Geist, S. Chandramohan, and O. Pietquin. 2012. A Comprehensive Reinforcement Learning Framework for Dialogue Management Optimisation. IEEE Journal of Selected Topics in Signal Processing 6(8):891–902.
Y. Engel, S. Mannor, and R. Meir. 2005. Reinforcement learning with Gaussian processes. In Proc. of ICML. | 1606.03152#42 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep policy networks which are trained with
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Processes methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner is considerably
bootstrapped from a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset. | http://arxiv.org/pdf/1606.03152 | Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman | cs.CL, cs.AI | SIGDial 2016 (Submitted: May 2016; Accepted: Jun 30, 2016) | Proceedings of the SIGDIAL 2016 Conference, pages 101--110, Los
Angeles, USA, 13-15 September 2016. Association for Computational Linguistics | cs.CL | 20160610 | 20160912 | [
{
"id": "1511.08099"
},
{
"id": "1601.04574"
},
{
"id": "1509.06461"
}
] |
1606.03152 | 43 | Y. Engel, S. Mannor, and R. Meir. 2005. Reinforcement learning with Gaussian processes. In Proc. of ICML.
M. Gašić, C. Breslin, M. Henderson, D. Kim, M. Szummer, B. Thomson, P. Tsiakoulis, and S.J. Young. 2013. On-line policy optimisation of Bayesian spoken dialogue systems via human interaction. In Proc. of ICASSP. pages 8367–8371.
M. Gašić, M. Henderson, B. Thomson, P. Tsiakoulis, and S. Young. 2012. Policy optimisation of POMDP-based dialogue systems without state space compression. In Proc. of SLT.
M. Gašić, F. Jurčíček, S. Keizer, F. Mairesse, B. Thomson, K. Yu, and S. Young. 2010. Gaussian processes for fast policy optimisation of
POMDP-based dialogue managers. In Proc. of SIGDIAL.
M. Henderson, B. Thomson, and J. Williams. 2014. The Second Dialog State Tracking Challenge. In Proc. of SIGDIAL. | 1606.03152#43 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep policy networks which are trained with
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Processes methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner is considerably
bootstrapped from a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset. | http://arxiv.org/pdf/1606.03152 | Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman | cs.CL, cs.AI | SIGDial 2016 (Submitted: May 2016; Accepted: Jun 30, 2016) | Proceedings of the SIGDIAL 2016 Conference, pages 101--110, Los
Angeles, USA, 13-15 September 2016. Association for Computational Linguistics | cs.CL | 20160610 | 20160912 | [
{
"id": "1511.08099"
},
{
"id": "1601.04574"
},
{
"id": "1509.06461"
}
] |
1606.03152 | 44 | M. Henderson, B. Thomson, and J. Williams. 2014. The Second Dialog State Tracking Challenge. In Proc. of SIGDIAL.
R. Laroche, G. Putois, and P. Bretier. 2010. Optimising a handcrafted dialogue system design. In Proc. of Interspeech.
O. Lemon and O. Pietquin. 2007. Machine learning for spoken dialogue systems. In Proc. of Interspeech. pages 2685–2688.
E. Levin, R. Pieraccini, and W. Eckert. 1997. Learning dialogue strategies within the markov decision process framework. In Proc. of ASRU.
L-J Lin. 1993. Reinforcement learning for robots using neural networks. Ph.D. thesis, Carnegie Mellon University.
V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. 2013. Playing Atari with deep reinforcement learning. In NIPS Deep Learning Workshop. | 1606.03152#44 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep policy networks which are trained with
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Processes methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner is considerably
bootstrapped from a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset. | http://arxiv.org/pdf/1606.03152 | Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman | cs.CL, cs.AI | SIGDial 2016 (Submitted: May 2016; Accepted: Jun 30, 2016) | Proceedings of the SIGDIAL 2016 Conference, pages 101--110, Los
Angeles, USA, 13-15 September 2016. Association for Computational Linguistics | cs.CL | 20160610 | 20160912 | [
{
"id": "1511.08099"
},
{
"id": "1601.04574"
},
{
"id": "1509.06461"
}
] |
1606.03152 | 45 | V. Mnih, K. Kavukcuoglu, D. Silver, A.A. Rusu, J. Veness, M.G. Bellemare, A. Graves, M. Riedmiller, A.K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. 2015. Human-level control through deep reinforcement learning. Nature 518(7540):529–533.
B. Piot, M. Geist, and O. Pietquin. 2015. Imitation Learning Applied to Embodied Conversational Agents. In Proc. of MLIS.
D. A. Pomerleau. 1989. ALVINN: An autonomous land vehicle in a neural network. In Proc. of NIPS. pages 305–313.
J. Schatzmann, B. Thomson, K. Weilhammer, H. Ye, and S. Young. 2007. Agenda-based user simulation for bootstrapping a POMDP dialogue system. In Proc. of NAACL HLT. pages 149–152. | 1606.03152#45 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep policy networks which are trained with
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Processes methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner is considerably
bootstrapped from a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset. | http://arxiv.org/pdf/1606.03152 | Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman | cs.CL, cs.AI | SIGDial 2016 (Submitted: May 2016; Accepted: Jun 30, 2016) | Proceedings of the SIGDIAL 2016 Conference, pages 101--110, Los
Angeles, USA, 13-15 September 2016. Association for Computational Linguistics | cs.CL | 20160610 | 20160912 | [
{
"id": "1511.08099"
},
{
"id": "1601.04574"
},
{
"id": "1509.06461"
}
] |
1606.03152 | 46 | J. Schatzmann and S. Young. 2009. The hidden agenda user simulation model. Proc. of TASLP 17(4):733–747.
J. Si, A. G. Barto, W. B. Powell, and D. Wunsch. 2004. Supervised Actor-Critic Reinforcement Learning, pages 359–380.
D. Silver, A. Huang, C.J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser,
I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. 2016. Mastering the game of Go with deep neural networks and tree search. Nature 529(7587):484–489.
S. Sukhbaatar, A. Szlam, G. Synnaeve, S. Chintala, and R. Fergus. 2016. MazeBase: A sandbox for learning from games. arxiv.org/pdf/1511.07401 [cs.LG]. | 1606.03152#46 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep policy networks which are trained with
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Processes methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner is considerably
bootstrapped from a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset. | http://arxiv.org/pdf/1606.03152 | Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman | cs.CL, cs.AI | SIGDial 2016 (Submitted: May 2016; Accepted: Jun 30, 2016) | Proceedings of the SIGDIAL 2016 Conference, pages 101--110, Los
Angeles, USA, 13-15 September 2016. Association for Computational Linguistics | cs.CL | 20160610 | 20160912 | [
{
"id": "1511.08099"
},
{
"id": "1601.04574"
},
{
"id": "1509.06461"
}
] |
1606.03152 | 47 | R. S. Sutton. 1984. Temporal credit assignment in reinforcement learning. Ph.D. thesis, University of Massachusetts at Amherst, Amherst, MA, USA.
R. S. Sutton, D. McAllester, S. Singh, and Y. Mansour. 2000. Policy gradient methods for reinforcement learning with function approximation. In Proc. of NIPS. volume 12, pages 1057–1063.
R.S. Sutton and A.G. Barto. 1998. Reinforcement Learning. MIT Press.
H. van Hasselt. 2010. Double Q-learning. In Proc. of NIPS. pages 2613–2621.
H. van Hasselt, A. Guez, and D. Silver. 2015. Deep reinforcement learning with double Q-learning. arXiv:1509.06461v3 [cs.LG].
J.D. Williams and S. Young. 2007. Partially observable Markov decision processes for spoken dialog systems. Proc. of CSL 21:231–422.
R.J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning 8:229–256. | 1606.03152#47 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep policy networks which are trained with
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Processes methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner is considerably
bootstrapped from a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset. | http://arxiv.org/pdf/1606.03152 | Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman | cs.CL, cs.AI | SIGDial 2016 (Submitted: May 2016; Accepted: Jun 30, 2016) | Proceedings of the SIGDIAL 2016 Conference, pages 101--110, Los
Angeles, USA, 13-15 September 2016. Association for Computational Linguistics | cs.CL | 20160610 | 20160912 | [
{
"id": "1511.08099"
},
{
"id": "1601.04574"
},
{
"id": "1509.06461"
}
] |
1606.03152 | 48 | R.J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning 8:229–256.
S. Young, M. Gasic, B. Thomson, and J. Williams. 2013. POMDP-based statistical spoken dialog systems: A review. Proc. IEEE 101(5):1160–1179.
# A Specifications of restaurant search in DSTC2
Constraint slots: area, type of food, price range.
Request slots: area, type of food, address, name, price range, postcode, signature dish, phone number.
Table 1: Summary actions.
Action / Description:
Cannot help: No restaurant in the database matches the user's constraints.
Confirm Domain: Confirm that the user is looking for a restaurant.
Explicit Confirm: Ask the user to confirm a piece of information.
Offer: Propose a restaurant to the user.
Repeat: Ask the user to repeat.
Request: Request a slot from the user.
Select: Ask the user to select a value between two propositions (e.g. select between Italian and Indian). | 1606.03152#48 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep policy networks which are trained with
Table 2: User actions. | 1606.03152#48 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep policy networks which are trained with
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Processes methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner is considerably
bootstrapped from a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset. | http://arxiv.org/pdf/1606.03152 | Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman | cs.CL, cs.AI | SIGDial 2016 (Submitted: May 2016; Accepted: Jun 30, 2016) | Proceedings of the SIGDIAL 2016 Conference, pages 101--110, Los
Angeles, USA, 13-15 September 2016. Association for Computational Linguistics | cs.CL | 20160610 | 20160912 | [
{
"id": "1511.08099"
},
{
"id": "1601.04574"
},
{
"id": "1509.06461"
}
] |
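
The summary action inventory of Table 1 maps directly onto a small lookup structure, which is how such an action set would typically be represented in an implementation. This encoding is illustrative and not taken from the paper:

```python
# Summary system actions and their descriptions, as listed in Table 1 above.
SUMMARY_ACTIONS = {
    "cannot_help":      "No restaurant in the database matches the user's constraints.",
    "confirm_domain":   "Confirm that the user is looking for a restaurant.",
    "explicit_confirm": "Ask the user to confirm a piece of information.",
    "offer":            "Propose a restaurant to the user.",
    "repeat":           "Ask the user to repeat.",
    "request":          "Request a slot from the user.",
    "select":           "Ask the user to select a value between two propositions.",
}

# The slot argument for request/select/explicit_confirm is filled in by the
# min-max post-processing step described in chunk 32.
```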
1606.03152 | 49 | Table 2: User actions.
Action / Description:
Deny: Deny a piece of information.
Null: Say nothing.
Request More: Request more options.
Confirm: Ask the system to confirm a piece of information.
Acknowledge: Acknowledge.
Affirm: Say yes.
Request: Request a slot value.
Inform: Inform the system of a slot value.
Thank you: Thank the system.
Repeat: Ask the system to repeat.
Request Alternatives: Request alternative restaurant options.
Negate: Say no.
Bye: Say goodbye to the system.
Hello: Say hello to the system.
Restart: Ask the system to restart | 1606.03152#49 | Policy Networks with Two-Stage Training for Dialogue Systems | In this paper, we propose to use deep policy networks which are trained with
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Processes methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner is considerably
bootstrapped from a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset. | http://arxiv.org/pdf/1606.03152 | Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman | cs.CL, cs.AI | SIGDial 2016 (Submitted: May 2016; Accepted: Jun 30, 2016) | Proceedings of the SIGDIAL 2016 Conference, pages 101--110, Los
Angeles, USA, 13-15 September 2016. Association for Computational Linguistics | cs.CL | 20160610 | 20160912 | [
{
"id": "1511.08099"
},
{
"id": "1601.04574"
},
{
"id": "1509.06461"
}
] |
1606.02960 | 0 | arXiv:1606.02960v2 [cs.CL] 10 Nov 2016
# Sequence-to-Sequence Learning as Beam-Search Optimization
Sam Wiseman and Alexander M. Rush School of Engineering and Applied Sciences Harvard University Cambridge, MA, USA {swiseman,srush}@seas.harvard.edu
# Abstract
Sequence-to-Sequence (seq2seq) modeling has rapidly become an important general-purpose NLP tool that has proven effective for many text-generation and sequence-labeling tasks. Seq2seq builds on deep neural language modeling and inherits its remarkable accuracy in estimating local, next-word distributions. In this work, we introduce a model and beam-search training scheme, based on the work of Daumé III and Marcu (2005), that extends seq2seq to learn global sequence scores. This structured approach avoids classical biases associated with local training and unifies the training loss with the test-time usage, while preserving the proven model architecture of seq2seq and its efficient training approach. We show that our system outperforms a highly-optimized attention-based seq2seq system and other baselines on three different sequence to sequence tasks: word ordering, parsing, and machine translation. | 1606.02960#0 | Sequence-to-Sequence Learning as Beam-Search Optimization | Sequence-to-Sequence (seq2seq) modeling has rapidly become an important
general-purpose NLP tool that has proven effective for many text-generation and
sequence-labeling tasks. Seq2seq builds on deep neural language modeling and
inherits its remarkable accuracy in estimating local, next-word distributions.
In this work, we introduce a model and beam-search training scheme, based on
the work of Daume III and Marcu (2005), that extends seq2seq to learn global
sequence scores. This structured approach avoids classical biases associated
with local training and unifies the training loss with the test-time usage,
while preserving the proven model architecture of seq2seq and its efficient
training approach. We show that our system outperforms a highly-optimized
attention-based seq2seq system and other baselines on three different sequence
to sequence tasks: word ordering, parsing, and machine translation. | http://arxiv.org/pdf/1606.02960 | Sam Wiseman, Alexander M. Rush | cs.CL, cs.LG, cs.NE, stat.ML | EMNLP 2016 camera-ready | null | cs.CL | 20160609 | 20161110 | [
{
"id": "1604.08633"
}
] |
1606.02960 | 1 | text generation applications, such as image or video captioning (Venugopalan et al., 2015; Xu et al., 2015).
The dominant approach to training a seq2seq system is as a conditional language model, with training maximizing the likelihood of each successive target word conditioned on the input sequence and the gold history of target words. Thus, training uses a strictly word-level loss, usually cross-entropy over the target vocabulary. This approach has proven to be very effective and efficient for training neural language models, and seq2seq models similarly obtain impressive perplexities for word-generation tasks.
Notably, however, seq2seq models are not used as conditional language models at test-time; they must instead generate fully-formed word sequences. In practice, generation is accomplished by searching over output sequences greedily or with beam search. In this context, Ranzato et al. (2016) note that the combination of the training and generation scheme just described leads to at least two major issues:
# 1 Introduction | 1606.02960#1 | Sequence-to-Sequence Learning as Beam-Search Optimization | Sequence-to-Sequence (seq2seq) modeling has rapidly become an important
general-purpose NLP tool that has proven effective for many text-generation and
sequence-labeling tasks. Seq2seq builds on deep neural language modeling and
inherits its remarkable accuracy in estimating local, next-word distributions.
In this work, we introduce a model and beam-search training scheme, based on
the work of Daume III and Marcu (2005), that extends seq2seq to learn global
sequence scores. This structured approach avoids classical biases associated
with local training and unifies the training loss with the test-time usage,
while preserving the proven model architecture of seq2seq and its efficient
training approach. We show that our system outperforms a highly-optimized
attention-based seq2seq system and other baselines on three different sequence
to sequence tasks: word ordering, parsing, and machine translation. | http://arxiv.org/pdf/1606.02960 | Sam Wiseman, Alexander M. Rush | cs.CL, cs.LG, cs.NE, stat.ML | EMNLP 2016 camera-ready | null | cs.CL | 20160609 | 20161110 | [
{
"id": "1604.08633"
}
] |
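
The word-level training objective described above is a sum of per-step cross-entropies computed under teacher forcing. A framework-free sketch with illustrative names:

```python
import numpy as np

def word_level_loss(step_distributions, gold_targets):
    """Per-sequence cross-entropy under teacher forcing, as described above:
    at each step the model conditions on the gold history and is penalised
    by the negative log-probability of the gold next word.

    step_distributions: length-T list of arrays, each a distribution over the
                        vocabulary p(y_t | x, y_1..y_{t-1}) given the gold prefix.
    gold_targets:       length-T list of gold word indices.
    """
    return -sum(float(np.log(dist[y] + 1e-12))
                for dist, y in zip(step_distributions, gold_targets))
```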
1606.02960 | 2 | 1. Exposure Bias: the model is never exposed to its own errors during training, and so the inferred histories at test-time do not resemble the gold training histories.
# 1 Introduction
Sequence-to-Sequence learning with deep neural networks (herein, seq2seq) (Sutskever et al., 2011; Sutskever et al., 2014) has rapidly become a very useful and surprisingly general-purpose tool for natural language processing. In addition to demonstrating impressive results for machine translation (Bahdanau et al., 2015), roughly the same model and training have also proven to be useful for sentence compression (Filippova et al., 2015), parsing (Vinyals et al., 2015), and dialogue systems (Serban et al., 2016), and they additionally underlie other
We might additionally add the concern of label bias (Lafferty et al., 2001) to the list, since word probabilities at each time-step are locally normalized, guaranteeing that successors of incorrect histories receive the same mass as do the successors of the true history. | 1606.02960#2 | Sequence-to-Sequence Learning as Beam-Search Optimization | Sequence-to-Sequence (seq2seq) modeling has rapidly become an important
In this work we develop a non-probabilistic variant of the seq2seq model that can assign a score to any possible target sequence, and we propose a training procedure, inspired by the learning as search optimization (LaSO) framework of Daumé III and Marcu (2005), that defines a loss function in terms of errors made during beam search. Furthermore, we provide an efficient algorithm to back-propagate through the beam-search procedure during seq2seq training.
This approach offers a possible solution to each of the three aforementioned issues, while largely maintaining the model architecture and training efficiency of standard seq2seq learning. Moreover, by scoring sequences rather than words, our approach also allows for enforcing hard constraints on sequence generation at training time. To test out the effectiveness of the proposed approach, we develop a general-purpose seq2seq system with beam search optimization. We run experiments on three very different problems: word ordering, syntactic parsing, and machine translation, and compare to a highly-tuned seq2seq system with attention (Luong et al., 2015). The version with beam search optimization shows significant improvements on all three tasks, and particular improvements on tasks that require difficult search.
# 2 Related Work
The issues of exposure bias and label bias have received much attention from authors in the structured prediction community, and we briefly review some of this work here. One prominent approach to combating exposure bias is that of SEARN (Daumé III et al., 2009), a meta-training algorithm that learns a search policy in the form of a cost-sensitive classifier trained on examples generated from an interpolation of an oracle policy and the model's current (learned) policy. Thus, SEARN explicitly targets the mismatch between oracular training and non-oracular (often greedy) test-time inference by training on the output of the model's own policy. DAgger (Ross et al., 2011) is a similar approach, which differs in terms of how training examples are generated and aggregated, and there have additionally been important refinements to this style of training over the past several years (Chang et al., 2015). When it comes to training RNNs, SEARN/DAgger has been applied under the name "scheduled sampling" (Bengio et al., 2015), which involves training an RNN to generate the t + 1'st token in a target sequence after consuming a prefix drawn from a mixture of the gold sequence and the model's own predictions.
It is uncommon to use beam search when training with SEARN/DAgger. The early-update (Collins and Roark, 2004) and LaSO (Daumé III and Marcu, 2005) training strategies, however, explicitly account for beam search, and describe strategies for updating parameters when the gold structure becomes unreachable during search. Early update and LaSO differ primarily in that the former discards a training example after the first search error, whereas LaSO resumes searching after an error from a state that includes the gold partial structure. In the context of feed-forward neural network training, early update training has been recently explored in a feed-forward setting by Zhou et al. (2015) and Andor et al. (2016). Our work differs in that we adopt a LaSO-like paradigm (with some minor modifications), and apply it to the training of seq2seq RNNs (rather than feed-forward networks). We also note that Watanabe and Sumita (2015) apply maximum-violation training (Huang et al., 2012), which is similar to early-update, to a parsing model with recurrent components, and that Yazdani and Henderson (2015) use beam-search in training a discriminative, locally normalized dependency parser with recurrent components.
Recently authors have also proposed alleviating exposure bias using techniques from reinforcement learning. Ranzato et al. (2016) follow this approach to train RNN decoders in a seq2seq model, and they obtain consistent improvements in performance, even over models trained with scheduled sampling. As Daumé III and Marcu (2005) note, LaSO is similar to reinforcement learning, except it does not require "exploration" in the same way. Such exploration may be unnecessary in supervised text-generation, since we typically know the gold partial sequences at each time-step. Shen et al. (2016) use minimum risk training (approximated by sampling) to address the issues of exposure bias and loss-evaluation mismatch for seq2seq MT, and show impressive performance gains.
Whereas exposure bias results from training in a certain way, label bias results from properties of the model itself. In particular, label bias is likely to affect structured models that make sub-structure predictions using locally-normalized scores. Because the neural and non-neural literature on this point has recently been reviewed by Andor et al. (2016), we simply note here that RNN models are typically locally normalized, and we are unaware of any specifically seq2seq work with RNNs that does not use locally-normalized scores. The model we introduce here, however, is not locally normalized, and so should not suffer from label bias. We also note that there are some (non-seq2seq) exceptions to the trend of locally normalized RNNs, such as the work of Sak et al. (2014) and Voigtlaender et al. (2015), who train LSTMs in the context of HMMs for speech recognition using sequence-level objectives; their work does not consider search, however.
# 3 Background and Notation
In the simplest seq2seq scenario, we are given a collection of source-target sequence pairs and tasked with learning to generate target sequences from source sequences. For instance, we might view machine translation in this way, where in particular we attempt to generate English sentences from (corresponding) French sentences. Seq2seq models are part of the broader class of "encoder-decoder" models (Cho et al., 2014), which first use an encoding model to transform a source object into an encoded representation x. Many different sequential (and non-sequential) encoders have proven to be effective for different source domains. In this work we are agnostic to the form of the encoding model, and simply assume an abstract source representation x. Once the input sequence is encoded, seq2seq models generate a target sequence using a decoder. The decoder is tasked with generating a target sequence of words from a target vocabulary V. In particular, words are generated sequentially by conditioning on the input representation x and on the previously generated words or history. We use the notation w_{1:T} to refer to an arbitrary word sequence of length T, and the notation y_{1:T} to refer to the gold (i.e., correct) target word sequence for an input x.
Most seq2seq systems utilize a recurrent neural network (RNN) for the decoder model. Formally, a recurrent neural network is a parameterized non-linear function RNN that recursively maps a sequence of vectors to a sequence of hidden states. Let m_1, ..., m_T be a sequence of T vectors, and let h_0 be some initial state vector. Applying an RNN to any such sequence yields hidden states h_t at each time-step t, as follows:
h_t ← RNN(m_t, h_{t-1}; θ),
where θ is the set of model parameters, which are shared over time. In this work, the vectors m_t will always correspond to the embeddings of a target word sequence w_{1:T}, and so we will also write h_t ← RNN(w_t, h_{t-1}; θ), with w_t standing in for its embedding.
RNN decoders are typically trained to act as conditional language models. That is, one attempts to model the probability of the t'th target word conditioned on x and the target history by stipulating that p(w_t | w_{1:t-1}, x) = g(w_t, h_{t-1}, x), for some parameterized function g typically computed with an affine layer followed by a softmax. In computing these probabilities, the state h_{t-1} represents the target history, and h_0 is typically set to be some function of x. The complete model (including encoder) is trained, analogously to a neural language model, to minimize the cross-entropy loss at each time-step while conditioning on the gold history in the training data. That is, the model is trained to minimize $-\ln \prod_{t=1}^{T} p(y_t \mid y_{1:t-1}, x)$.
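To make this baseline training procedure concrete, the following is a minimal sketch (PyTorch-style Python, not the authors' Torch implementation) of word-level cross-entropy training with teacher forcing; `decoder_step` and `output_layer` are illustrative stand-ins for the decoder RNN and the affine-plus-softmax layer g.

```python
import torch
import torch.nn.functional as F

def word_level_nll(decoder_step, output_layer, x_enc, y_gold, h0):
    """Sum of per-step cross-entropy losses, conditioning on the gold history."""
    h, loss = h0, 0.0
    for t in range(len(y_gold)):
        logits = output_layer(h)                    # unnormalized scores over the vocabulary
        loss = loss + F.cross_entropy(logits.unsqueeze(0), y_gold[t].view(1))
        h = decoder_step(y_gold[t], h, x_enc)       # teacher forcing: feed the gold word y_t
    return loss
```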
At test-time, discrete sequence generation can be performed by approximately maximizing the probability of the target sequence under the conditional distribution, $\hat{y}_{1:T} = \mathrm{argbeam}_{w_{1:T}} \prod_{t=1}^{T} p(w_t \mid w_{1:t-1}, x)$, where we use the notation argbeam to emphasize that the decoding process requires heuristic search, since the RNN model is non-Markovian. In practice, a simple beam search procedure that explores K prospective histories at each time-step has proven to be an effective decoding approach. However, as noted above, decoding in this manner after conditional language-model style training potentially suffers from the issues of exposure bias and label bias, which motivates the work of this paper.
# 4 Beam Search Optimization
We begin by making one small change to the seq2seq modeling framework. Instead of predicting the probability of the next word, we instead learn to produce (non-probabilistic) scores for ranking sequences. Define the score of a sequence consisting of history w_{1:t-1} followed by a single word w_t as f(w_t, h_{t-1}, x), where f is a parameterized function examining the current hidden state of the relevant RNN at time t-1 as well as the input representation x. In experiments, our f will have an identical form to g but without the final softmax transformation (which transforms unnormalized scores into probabilities), thereby allowing the model to avoid issues associated with the label bias problem.
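As a minimal illustration of this change (a sketch, not the authors' code), the scorer below has the same affine output layer a softmax-based g would use, but simply omits the normalization; `hidden_dim` and `vocab_size` are assumed hyperparameters.

```python
import torch.nn as nn

class SequenceScorer(nn.Module):
    """Returns a vector of unnormalized scores f(w, h_prev, x) over every candidate next word w."""
    def __init__(self, hidden_dim, vocab_size):
        super().__init__()
        self.affine = nn.Linear(hidden_dim, vocab_size)

    def forward(self, h_prev):
        # g would apply a (log-)softmax here; f deliberately does not,
        # so scores are not locally normalized across the vocabulary.
        return self.affine(h_prev)
```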
More importantly, we also modify how this model is trained. Ideally we would train by comparing the gold sequence to the highest-scoring complete sequence. However, because finding the argmax sequence according to this model is intractable, we propose to adopt a LaSO-like (Daumé III and Marcu, 2005) scheme to train, which we will refer to as beam search optimization (BSO). In particular, we define a loss that penalizes the gold sequence falling off the beam during training.1 The proposed training approach is a simple way to expose the model to incorrect histories and to match the training procedure to test generation. Furthermore we show that it can be implemented efficiently without changing the asymptotic run-time of training, beyond a factor of the beam size K.
# 4.1 Search-Based Loss
We now formalize this notion of a search-based loss for RNN training. Assume we have a set S_t of K candidate sequences of length t. We can calculate a score for each sequence in S_t using a scoring function f parameterized with an RNN, as above, and we define the sequence $\hat{y}^{(K)}_{1:t} \in S_t$ to be the K'th ranked sequence in S_t according to f.
That is, assuming distinct scores,
$$\left|\{\hat{y}^{(k)}_{1:t} \in S_t \mid f(\hat{y}^{(k)}_t, \hat{h}^{(k)}_{t-1}) > f(\hat{y}^{(K)}_t, \hat{h}^{(K)}_{t-1})\}\right| = K - 1,$$
where $\hat{h}^{(k)}_{t-1}$ is the RNN state corresponding to its t-1'st step, and where we have omitted the x argument to f for brevity.
1 Using a non-probabilistic model further allows us to incur no loss (and thus require no update to parameters) when the gold sequence is on the beam; this contrasts with models based on a CRF loss, such as those of Andor et al. (2016) and Zhou et al. (2015), though in training those models are simply not updated when the gold sequence remains on the beam.
We now define a loss function that gives loss each time the score of the gold prefix y_{1:t} does not exceed that of $\hat{y}^{(K)}_{1:t}$:
$$L(f) = \sum_{t=1}^{T} \Delta(\hat{y}^{(K)}_{1:t}) \left[ 1 - f(y_t, h_{t-1}) + f(\hat{y}^{(K)}_t, \hat{h}^{(K)}_{t-1}) \right]$$
Above, the $\Delta(\hat{y}^{(K)}_{1:t})$ term denotes a mistake-specific cost-function, which allows us to scale the loss depending on the severity of erroneously predicting $\hat{y}^{(K)}_{1:t}$; it is assumed to return 0 when the margin requirement is satisfied, and a positive number otherwise. It is this term that allows us to use sequence- rather than word-level costs in training (addressing the 2nd issue in the introduction). For instance, when training a seq2seq model for machine translation, it may be desirable to have $\Delta(\hat{y}^{(K)}_{1:t})$ be inversely related to the partial sentence-level BLEU score of $\hat{y}^{(K)}_{1:t}$ with y_{1:t}; we experiment along these lines in Section 5.3.
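The per-time-step term can be viewed as a scaled hinge; the sketch below (plain Python, illustrative only) assumes the two scores have already been computed during search, and folds the "zero when the margin is satisfied" behavior into the hinge itself. `delta` could be, e.g., 1 minus a partial sentence-level BLEU score, as suggested above.

```python
def step_loss(score_gold, score_pred_k, delta):
    """delta scales the hinge 1 - f(y_t, h_{t-1}) + f(yhat_t^(K), hhat_{t-1}^(K))."""
    hinge = 1.0 - score_gold + score_pred_k
    return delta * hinge if hinge > 0.0 else 0.0
```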
Finally, because we want the full gold sequence to be at the top of the beam at the end of search, when t = T we modify the loss to require the score of y_{1:T} to exceed the score of the highest ranked incorrect prediction by a margin.
We can optimize the loss L using a two-step process: (1) in a forward pass, we compute candidate sets S_t and record margin violations (sequences with non-zero loss); (2) in a backward pass, we back-propagate the errors through the seq2seq RNNs. Unlike standard seq2seq training, the first step requires running search (in our case beam search) to find margin violations. The second step can be done by adapting back-propagation through time (BPTT). We next discuss the details of this process.
# 4.2 Forward: Find Violations
In order to minimize this loss, we need to specify a procedure for constructing candidate sequences $\hat{y}^{(k)}_{1:t}$ at each time step t so that we find margin violations. We follow LaSO (rather than early-update2; see Section 2) and build candidates in a recursive manner. If there was no margin violation at t-1, then S_t is constructed using a standard beam search update. If there was a margin violation, S_t is constructed as the K best sequences assuming the gold history y_{1:t-1} through time-step t-1.
Formally, assume the function succ maps a sequence w_{1:t-1} ∈ V^{t-1} to the set of all valid sequences of length t that can be formed by appending to it a valid word w ∈ V. In the simplest, unconstrained case, we will have
succ(w_{1:t-1}) = {w_{1:t-1}, w | w ∈ V}.
As an important aside, note that for some problems it may be preferable to define a succ function which imposes hard constraints on successor sequences. For instance, if we would like to use seq2seq models for parsing (by emitting a constituency or dependency structure encoded into a sequence in some way), we will have hard constraints on the sequences the model can output, namely, that they represent valid parses. While hard constraints such as these would be difficult to add to standard seq2seq at training time, in our framework they can naturally be added to the succ function, allowing us to train with hard constraints; we experiment along these lines in Section 5.3, where we refer to a model trained with constrained beam search as ConBSO.
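For example, here is a minimal sketch (illustrative Python, not from the paper's codebase) of a constrained succ for the word-ordering experiments described later, where a successor may only append a source word that the prefix has not yet used up.

```python
from collections import Counter

def succ_word_ordering(prefix, source_words):
    """Return all one-word extensions of `prefix` that respect the permutation constraint."""
    remaining = Counter(source_words) - Counter(prefix)   # multiset of still-unused source words
    return [prefix + [w] for w in remaining]
```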
Having defined an appropriate succ function, we specify the candidate set as:
$$S_t = \begin{cases} \mathrm{topK}\big(\mathrm{succ}(y_{1:t-1})\big) & \text{violation at } t-1 \\ \mathrm{topK}\big(\bigcup_{k=1}^{K} \mathrm{succ}(\hat{y}^{(k)}_{1:t-1})\big) & \text{otherwise,} \end{cases}$$
where we have a margin violation at t-1 iff $f(y_{t-1}, h_{t-2}) < f(\hat{y}^{(K)}_{t-1}, \hat{h}^{(K)}_{t-2}) + 1$, and where topK considers the scores given by f. This search procedure is illustrated in the top portion of Figure 1. In the forward pass of our training algorithm, shown as the first part of Algorithm 1, we run this version of beam search and collect all sequences and their hidden states that lead to losses.
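The logic of one training-time search step can be sketched as follows (plain Python, schematic only); `score` and `succ` stand in for f and the successor function, and the beam is assumed to be kept sorted with the best hypothesis first.

```python
def forward_step(gold_prefix, beam, K, score, succ):
    """Advance the beam one step, resetting from the gold prefix on a margin violation."""
    violated = score(gold_prefix) < score(beam[K - 1]) + 1.0   # margin of 1 vs. the K'th-ranked hypothesis
    if violated:
        candidates = succ(gold_prefix)                          # LaSO-style reset from the gold history
    else:
        candidates = [c for hyp in beam for c in succ(hyp)]     # standard beam expansion
    return sorted(candidates, key=score, reverse=True)[:K], violated
```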
2 We found that training with early-update rather than (delayed) LaSO did not work well, even after pre-training. Given the success of early-update in many NLP tasks this was somewhat surprising. We leave this question to future work.
Figure 1: Top: possible $\hat{y}^{(k)}_{1:t}$ formed in training with a beam of size K = 3 and with gold sequence y_{1:6} = "a red dog runs quickly today". The gold sequence is highlighted in yellow, and the predicted prefixes involved in margin violations (at t = 4 and t = 6) are in gray. Note that time-step T = 6 uses a different loss criterion. Bottom: prefixes that actually participate in the loss, arranged to illustrate the back-propagation process.
# 4.3 Backward: Merge Sequences
Once we have collected margin violations we can run backpropagation to compute parameter updates. Assume a margin violation occurs at time-step t between the predicted history $\hat{y}^{(K)}_{1:t}$ and the gold history y_{1:t}. As in standard seq2seq training we must back-propagate this error through the gold history; however, unlike seq2seq we also have a gradient for the wrongly predicted history.
Recall that to back-propagate errors through an RNN we run a recursive backward procedure, denoted below by BRNN, at each time-step t, which accumulates the gradients of next-step and future losses with respect to h_t. We have:
$$\nabla_{h_t} L \leftarrow \mathrm{BRNN}(\nabla_{h_t} L_{t+1}, \nabla_{h_{t+1}} L),$$
where L_{t+1} is the loss at step t+1, deriving, for instance, from the score f(y_{t+1}, h_t). Running this BRNN procedure from t = T-1 to t = 0 is known as back-propagation through time (BPTT).
In determining the total computational cost of back-propagation here, first note that in the worst case there is one violation at each time-step, which leads to T independent, incorrect sequences. Since we need to call BRNN O(T) times for each sequence, a naive strategy of running BPTT for each incorrect sequence would lead to an O(T^2) backward pass, rather than the O(T) time required for the standard seq2seq approach.
Fortunately, our combination of search-strategy and loss makes it possible to efficiently share BRNN operations. This shared structure comes naturally from the LaSO update, which resets the beam in a convenient way.
We informally illustrate the process in Figure 1. The top of the diagram shows a possible sequence of $\hat{y}^{(k)}_{1:t}$ formed during search with a beam of size 3 for the target sequence y = "a red dog runs quickly today." When the gold sequence falls off the beam at t = 4, search resumes with S_5 = succ(y_{1:4}), and so all subsequent predicted sequences have y_{1:4} as a prefix and are thus functions of h_4. Moreover, because our loss function only involves the scores of the gold prefix and the violating prefix, we end up with the relatively simple computation tree shown at the bottom of Figure 1. It is evident that we can backpropagate in a single pass, accumulating gradients from sequences that diverge from the gold at the time-step that precedes their divergence. The second half of Algorithm 1 shows this explicitly for a single sequence, though it is straightforward to extend the algorithm to operate in batch.3
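The shared backward pass can be sketched as below (plain Python, schematic only): two gradient accumulators are swept backwards in a single O(T) pass, and at each recorded violation the predicted branch's gradient is folded into the gold branch, since below that point the two prefixes coincide. `brnn_step` and `loss_grads` are illustrative stand-ins.

```python
def backward_merge(T, violations, loss_grads, brnn_step):
    """Single backward sweep that merges predicted-prefix gradients into the gold prefix."""
    grad_gold = grad_pred = 0.0
    for t in reversed(range(T)):
        grad_gold = brnn_step(loss_grads.get(('gold', t), 0.0), grad_gold)
        grad_pred = brnn_step(loss_grads.get(('pred', t), 0.0), grad_pred)
        if t in violations:
            grad_gold += grad_pred   # predicted prefix shares the gold history up to this step
            grad_pred = 0.0
    return grad_gold
```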
# 5 Data and Methods
We run experiments on three different tasks, comparing our approach to the seq2seq baseline, and to other relevant baselines.
# 5.1 Model
While the method we describe applies to seq2seq RNNs in general, for all experiments we use the global attention model of Luong et al. (2015), which consists of an LSTM (Hochreiter and Schmidhuber, 1997) encoder and an LSTM decoder with a global attention model, as both the baseline seq2seq model (i.e., as the model that computes the g in Section 3) and as the model that computes our sequence-scores f(w_t, h_{t-1}, x). As in Luong et al. (2015), we also use "input feeding," which involves feeding the attention distribution from the previous time-step into the decoder at the current step. This model architecture has been found to be highly performant for neural machine translation and other seq2seq tasks.
3 We also note that because we do not update the parameters until after the T'th search step, our training procedure differs slightly from LaSO (which is online), and in this aspect is essentially equivalent to the "delayed LaSO update" of Björkelund and Kuhn (2014).
Algorithm 1 Seq2seq beam-search optimization
1: procedure BSO(x, K_tr, succ)
2:   /*FORWARD*/
3:   Init empty storage hhat_{1:T}; r <- 0; violations <- {0}
4:   for t = 1, ..., T do
5:     K <- K_tr if t != T else argmax_{k : yhat^(k)_{1:T} != y_{1:T}} f(yhat^(k)_t, hhat^(k)_{t-1})
6:     if f(y_t, h_{t-1}) < f(yhat^(K)_t, hhat^(K)_{t-1}) + 1 then
7:       store the violating prefix yhat^(K)_{1:t} and the states needed for its loss
8:       Add t to violations
9:       r <- t
10:      S_{t+1} <- topK(succ(y_{1:t}))
11:    else
12:      S_{t+1} <- topK(union_k succ(yhat^(k)_{1:t}))
13:   /*BACKWARD*/
14:   grad_h_T <- grad_hhat_T <- 0
15:   for t = T-1, ..., 1 do
16:     grad_h_t <- BRNN(grad of L_{t+1} w.r.t. h_t, grad_h_{t+1})
17:     grad_hhat_t <- BRNN(grad of L_{t+1} w.r.t. hhat_t, grad_hhat_{t+1})
18:     if t-1 in violations then
19:       grad_h_t <- grad_h_t + grad_hhat_t
20:       grad_hhat_t <- 0
To distinguish the models we refer to our system as BSO (beam search optimization) and to the baseline as seq2seq. When we apply constrained training (as discussed in Section 4.2), we refer to the model as ConBSO. In providing results we also distinguish between the beam size K_tr with which the model is trained, and the beam size K_te which is used at test-time. In general, if we plan on evaluating with a beam of size K_te it makes sense to train with a beam of size K_tr = K_te + 1, since our objective requires the gold sequence to be scored higher than the last sequence on the beam.
# 5.2 Methodology
Here we detail additional techniques we found necessary to ensure the model learned effectively. First, we found that the model failed to learn when trained from a random initialization.4 We therefore found it necessary to pre-train the model using a standard, word-level cross-entropy loss as described in Section 3.
4 This may be because there is relatively little signal in the sparse, sequence-level gradient, but this point requires further investigation.
The necessity of pre-training in this instance is consistent with the findings of other authors who train non-local neural models (Kingsbury, 2009; Sak et al., 2014; Andor et al., 2016; Ranzato et al., 2016).5
Similarly, it is clear that the smaller the beam used in training is, the less room the model has to make erroneous predictions without running afoul of the margin loss. Accordingly, we also found it useful to use a "curriculum beam" strategy in training, whereby the size of the beam is increased gradually during training. In particular, given a desired training beam size K_tr, we began training with a beam of size 2, and increased it by 1 every 2 epochs until reaching K_tr.
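As a concrete (and assumed) reading of that schedule, the helper below maps an epoch index to a beam size; it is a sketch, not code from the paper.

```python
def beam_size_for_epoch(epoch, k_tr, start=2, grow_every=2):
    """Start at a beam of 2 and grow by 1 every `grow_every` epochs, capped at k_tr."""
    return min(k_tr, start + epoch // grow_every)
```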
Finally, it has been established that dropout (Srivastava et al., 2014) regularization improves the performance of LSTMs (Pham et al., 2014; Zaremba et al., 2014), and in our experiments we run beam search under dropout.6
For all experiments, we trained both seq2seq and BSO models with mini-batch Adagrad (Duchi et al., 2011) (using batches of size 64), and we renormalized all gradients so they did not exceed 5 before updating parameters. We did not extensively tune learning-rates, but we found initial rates of 0.02 for the encoder and decoder LSTMs, and a rate of 0.1 or 0.2 for the final linear layer (i.e., the layer tasked with making word-predictions at each time-step) to work well across all the tasks we considered. Code implementing the experiments described below can be found at https://github.com/harvardnlp/BSO.7
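A PyTorch-style sketch of that optimization setup is shown below for illustration; the released implementation is in Torch/Lua, so the module names (`encoder`, `decoder`, `output_layer`) and API here are assumptions, not the authors' code.

```python
import torch

def build_optimizer(encoder, decoder, output_layer):
    # Per-module Adagrad learning rates: 0.02 for the LSTMs, 0.1 (or 0.2) for the final linear layer.
    return torch.optim.Adagrad([
        {'params': encoder.parameters(), 'lr': 0.02},
        {'params': decoder.parameters(), 'lr': 0.02},
        {'params': output_layer.parameters(), 'lr': 0.1},
    ])

def step(parameters, optimizer, max_norm=5.0):
    # Renormalize gradients whose total norm exceeds 5 before each parameter update.
    torch.nn.utils.clip_grad_norm_(parameters, max_norm)
    optimizer.step()
    optimizer.zero_grad()
```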
# 5.3 Tasks and Results
Our experiments are primarily intended to evaluate the effectiveness of beam search optimization over standard seq2seq training. As such, we run experiments with the same model across three very different problems: word ordering, dependency parsing, and machine translation. While we do not include all the features and extensions necessary to reach state-of-the-art performance, even the baseline seq2seq model is generally quite performant.
5 Andor et al. (2016) found, however, that pre-training only increased convergence-speed, but was not necessary for obtaining good results.
6 However, it is important to ensure that the same mask applied at each time-step of the forward search is also applied at the corresponding step of the backward pass. We accomplish this by pre-computing masks for each time-step, and sharing them between the partial sequence LSTMs.
7 Our code is based on Yoon Kim's seq2seq code, https://github.com/harvardnlp/seq2seq-attn.
**Word Ordering** The task of correctly ordering the words in a shuffled sentence has recently gained some attention as a way to test the (syntactic) capabilities of text-generation systems (Zhang and Clark, 2011; Zhang and Clark, 2015; Liu et al., 2015; Schmaltz et al., 2016). We cast this task as a seq2seq problem by viewing a shuffled sentence as a source sentence, and the correctly ordered sentence as the target. While word ordering is a somewhat synthetic task, it has two interesting properties for our purposes. First, it is a task which plausibly requires search (due to the exponentially many possible orderings), and, second, there is a clear hard constraint on output sequences, namely, that they be a permutation of the source sequence. For both the baseline and BSO models we enforce this constraint at test-time. However, we also experiment with constraining the BSO model during training, as described in Section 4.2, by defining the succ function to only allow successor sequences containing un-used words in the source sentence.
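Constructing such examples is straightforward; the sketch below (illustrative Python) pairs a shuffled copy of a sentence with the original as a (source, target) training example.

```python
import random

def make_word_ordering_example(sentence, rng=random):
    tokens = sentence.split()
    shuffled = tokens[:]
    rng.shuffle(shuffled)
    return shuffled, tokens   # (shuffled source, correctly ordered target)
```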
For experiments, we use the same PTB dataset (with the standard training, development, and test splits) and evaluation procedure as in Zhang and Clark (2015) and later work, with performance reported in terms of BLEU score against the correctly ordered sentences. For all word-ordering experiments we use 2-layer encoder and decoder LSTMs, each with 256 hidden units, and dropout with a rate of 0.2 between LSTM layers. We use simple 0/1 costs in defining the Δ function.
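For illustration only (the function name and the prefix-based signature are our assumptions, not the paper's code), a 0/1 cost treats every non-gold prediction as equally costly:

```python
def delta_zero_one(predicted_prefix, gold_prefix):
    """A 0/1 sequence-level cost: 0 if the candidate prefix matches the gold
    prefix exactly, 1 otherwise. Richer, task-specific costs could instead
    return fractional values to penalize some mistakes more than others."""
    return 0.0 if list(predicted_prefix) == list(gold_prefix) else 1.0
```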
We show our test-set results in Table 1. We see that on this task there is a large improvement at each beam size from switching to BSO, and a further improvement from using the constrained model.
Inspired by a similar analysis in Daumé III and Marcu (2005), we further examine the relationship between Ktr and Kte when training with ConBSO in Table 2. We see that larger values of Ktr hurt greedy inference, but that results continue to improve, at least initially, when using a Kte that is (somewhat) bigger than Ktr − 1.
Word Ordering (BLEU)
            Kte = 1   Kte = 5   Kte = 10
seq2seq     25.2      29.8      31.0
BSO         28.0      33.2      34.3
ConBSO      28.6      34.3      34.5
LSTM-LM     15.4      -         26.8

Table 1: Word ordering. BLEU scores of seq2seq, BSO, constrained BSO, and a vanilla LSTM language model (from Schmaltz et al., 2016). All experiments above have Ktr = 6.

Word Ordering Beam Size (BLEU)
            Kte = 1   Kte = 5   Kte = 10
Ktr = 2     30.59     31.23     30.26
Ktr = 6     28.20     34.22     34.67
Ktr = 11    26.88     34.42     34.88
seq2seq     26.11     30.20     31.04

Table 2: Beam-size experiments on word ordering development set. All numbers reflect training with constraints (ConBSO).
Dependency Parsing. We next apply our model to dependency parsing, which also has hard constraints and plausibly benefits from search. We treat dependency parsing with arc-standard transitions as a seq2seq task by attempting to map from a source sentence to a target sequence of source-sentence words interleaved with the arc-standard reduce actions in its parse. For example, we attempt to map the source sentence

But it was the Quotron problems that ...

to the target sequence

But it was @L_SBJ @L_DEP the Quotron problems @L_NMOD @L_NMOD that ...

We use the standard Penn Treebank dataset splits with Stanford dependency labels, and the standard UAS/LAS evaluation metric (excluding punctuation) following Chen and Manning (2014). All models thus see only the words in the source and, when decoding, the actions emitted so far; no other features are used.
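A minimal sketch of this linearization, assuming a SHIFT-style action list as input (the function and the explicit action names are our own illustration; the paper only shows the resulting interleaved string):

```python
def linearize_arc_standard(source_words, actions):
    """Render an arc-standard derivation as a seq2seq target: SHIFT actions
    emit the next source word, while left/right reduce actions emit their
    label token (e.g. "@L_SBJ") directly."""
    target, next_word = [], 0
    for action in actions:
        if action == "SHIFT":
            target.append(source_words[next_word])
            next_word += 1
        else:  # a reduce action such as "@L_SBJ" or "@R_NMOD"
            target.append(action)
    return " ".join(target)

words = ["But", "it", "was", "the", "Quotron", "problems"]
actions = ["SHIFT", "SHIFT", "SHIFT", "@L_SBJ", "@L_DEP",
           "SHIFT", "SHIFT", "SHIFT", "@L_NMOD", "@L_NMOD"]
print(linearize_arc_standard(words, actions))
# But it was @L_SBJ @L_DEP the Quotron problems @L_NMOD @L_NMOD
```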
Dependency Parsing (UAS/LAS)
            Kte = 1        Kte = 5        Kte = 10
seq2seq     87.33/82.26    88.53/84.16    88.66/84.33
BSO         86.91/82.11    91.00/87.18    91.17/87.41
ConBSO      85.11/79.32    91.25/86.92    91.57/87.26
Andor       93.17/91.18    -              -

Table 3: Dependency parsing. UAS/LAS of seq2seq, BSO, ConBSO and baselines on PTB test set. Andor is the current state-of-the-art model for this data set (Andor et al. 2016), and we note that with a beam of size 32 they obtain 94.41/92.55. All experiments above have Ktr = 6.
We use 2-layer encoder and decoder LSTMs with 300 hidden units per layer and dropout with a rate of 0.3 between LSTM layers. We replace singleton words in the training set with an UNK token, normalize digits to a single symbol, and initialize word embeddings for both source and target words from the publicly available word2vec (Mikolov et al., 2013) embeddings. We use simple 0/1 costs in defining the Δ function.
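A small, hypothetical preprocessing sketch in the spirit of the UNK and digit-normalization steps above (the function names and the choice of placeholder symbol are ours):

```python
import re
from collections import Counter

def normalize_digits(word):
    """Collapse every digit to a single placeholder symbol."""
    return re.sub(r"[0-9]", "0", word)

def preprocess(training_sentences):
    """Illustrative preprocessing: digits are normalized first, then any word
    seen only once in the training set is replaced with an UNK token."""
    normalized = [[normalize_digits(w) for w in sent] for sent in training_sentences]
    counts = Counter(w for sent in normalized for w in sent)
    return [[w if counts[w] > 1 else "UNK" for w in sent] for sent in normalized]
```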
As in the word-ordering case, we also experiment with modifying the succ function in order to train under hard constraints, namely, that the emitted target sequence be a valid parse. In particular, we constrain the output at each time-step to obey the stack constraint, and we ensure words in the source are emitted in order.
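A rough sketch of such a validity check (the "@"-prefix convention for reduce actions and the function below are our own illustration, not the paper's implementation):

```python
def is_valid_next_token(token, source_words, emitted):
    """Check whether `token` may extend the partial target `emitted` without
    violating the hard constraints described above: reduce actions (here,
    any token starting with "@") need at least two items on the stack, and
    the only word that may be emitted is the next unused source word."""
    stack_size = 0
    words_emitted = 0
    for prev in emitted:
        if prev.startswith("@"):   # a reduce action pops two items, pushes one
            stack_size -= 1
        else:                      # emitting a word acts like SHIFT
            stack_size += 1
            words_emitted += 1

    if token.startswith("@"):
        return stack_size >= 2
    # Otherwise the token must be exactly the next source word, in order.
    return words_emitted < len(source_words) and token == source_words[words_emitted]
```

During constrained training, successors that fail this check are never added to the beam, so the model is only ever asked to score well-formed partial parses.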
We show results on the test set in Table 3. BSO and ConBSO both show significant improvements over seq2seq, with ConBSO improving most on UAS, and BSO improving most on LAS. We achieve a reasonable final score of 91.57 UAS, which lags behind the state-of-the-art, but is promising for a general-purpose, word-only model.