Sequence-to-Sequence Learning as Beam-Search Optimization
Sam Wiseman, Alexander M. Rush (EMNLP 2016 camera-ready)
http://arxiv.org/pdf/1606.02960

Abstract: Sequence-to-Sequence (seq2seq) modeling has rapidly become an important general-purpose NLP tool that has proven effective for many text-generation and sequence-labeling tasks. Seq2seq builds on deep neural language modeling and inherits its remarkable accuracy in estimating local, next-word distributions. In this work, we introduce a model and beam-search training scheme, based on the work of Daume III and Marcu (2005), that extends seq2seq to learn global sequence scores. This structured approach avoids classical biases associated with local training and unifies the training loss with the test-time usage, while preserving the proven model architecture of seq2seq and its efficient training approach. We show that our system outperforms a highly-optimized attention-based seq2seq system and other baselines on three different sequence-to-sequence tasks: word ordering, parsing, and machine translation.

Translation. We finally evaluate our model on a small machine translation dataset, which allows us to experiment with a cost function that is not 0/1, and to consider other baselines that attempt to mitigate exposure bias in the seq2seq setting. We use the dataset from the work of Ranzato et al. (2016), which uses data from the German-to-English portion of the IWSLT 2014 machine translation evaluation campaign (Cettolo et al., 2014). The data comes from translated TED talks, and the dataset contains roughly 153K training sentences, 7K development sentences, and 7K test sentences. We use the same preprocessing and dataset splits as Ranzato et
al. (2016), and like them we also use a single-layer LSTM decoder with 256 units. We also use dropout with a rate of 0.2 between each LSTM layer. We emphasize, however, that while our decoder LSTM is of the same size as that of Ranzato et al. (2016), our results are not directly comparable, because we use an LSTM encoder (rather than a convolutional encoder as they do), a slightly different attention mechanism, and input feeding (Luong et al., 2015).

Machine Translation (BLEU)
             Kte = 1   Kte = 5   Kte = 10
seq2seq      22.53     24.03     23.87
BSO, SB-Δ    23.83     26.36     25.48
-----------------------------------------
XENT         17.74     20.10     20.28
DAD          20.12     22.25     22.40
MIXER        20.73     21.81     21.83

Table 4: Machine translation experiments on test set; results below the middle line are from the MIXER model of Ranzato et al. (2016). SB-Δ indicates sentence BLEU costs are used in defining Δ. XENT is similar to our seq2seq model but with a convolutional encoder and simpler attention. DAD trains seq2seq with scheduled sampling (Bengio et al., 2015). BSO, SB-Δ experiments above have Ktr = 6.
Δ(ŷ(K)_1:t) to 1 − SB(ŷ(K)_r+1:t, y_r+1:t), where r is the last margin violation and SB denotes smoothed, sentence-level BLEU (Chen and Cherry, 2014). This setting of Δ should act to penalize erroneous predictions with a relatively low sentence-level BLEU score more than those with a relatively high sentence-level BLEU score. In Table 4 we show our final results and those from Ranzato et al. (2016).8 While we start with an improved baseline, we see similarly large increases in accuracy as those obtained by DAD and MIXER, in particular when Kte > 1.
these sequence-level costs in Table 5, which compares using sentence-level BLEU costs in defining Δ with using 0/1 costs. We see that the more sophisticated sequence-level costs have a moderate effect on BLEU score.
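To make the cost concrete, the following is a minimal sketch (my own illustration, not the authors' implementation) of a smoothed sentence-level BLEU and the corresponding cost Δ = 1 − SB; higher-order n-gram precisions get add-one smoothing in the spirit of Chen and Cherry (2014), and the two arguments are the predicted and gold suffixes after the last margin violation r.

```python
# Hedged sketch of a sentence-BLEU-based cost; not the paper's code.
import math
from collections import Counter

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def smoothed_sentence_bleu(hyp, ref, max_n=4):
    if not hyp or not ref:
        return 0.0
    log_precision = 0.0
    for n in range(1, max_n + 1):
        h, r = ngram_counts(hyp, n), ngram_counts(ref, n)
        overlap = sum((h & r).values())            # clipped n-gram matches
        total = sum(h.values())
        if n == 1:
            p = max(overlap, 1e-9) / max(total, 1)
        else:
            p = (overlap + 1.0) / (total + 1.0)    # add-one smoothing for n > 1
        log_precision += math.log(p) / max_n
    brevity = min(1.0, math.exp(1.0 - len(ref) / len(hyp)))
    return brevity * math.exp(log_precision)

def delta(pred_suffix, gold_suffix):
    """Cost for a margin violation: low-BLEU mistakes are penalized more."""
    return 1.0 - smoothed_sentence_bleu(pred_suffix, gold_suffix)
```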
8 Some results from personal communication.
Machine Translation (BLEU)
         Kte = 1   Kte = 5   Kte = 10
0/1-Δ    25.73     28.21     27.43
SB-Δ     25.99     28.45     27.58
Table 5: BLEU scores obtained on the machine translation development data when training with Δ(ŷ(k)_1:t) = 1 (top) and Δ(ŷ(k)_1:t) = 1 − SB(ŷ(k)_r+1:t, y_r+1:t) (bottom), and Ktr = 6.
Timing. Given Algorithm 1, we would expect training time to increase linearly with the size of the beam. On the above MT task, our highly tuned seq2seq baseline processes an average of 13,038 tokens/second (including both source and target tokens) on a GTX 970 GPU. For beams of size Ktr = 2, 3, 4, 5, and 6, our implementation processes on average 1,985, 1,768, 1,709, 1,521, and 1,458 tokens/second, respectively. Thus, we appear to pay an initial constant factor of ≈ 3.3 due to the more complicated forward and backward passes, and then training scales with the size of the beam. Because we batch beam predictions on a GPU, however, we find that in practice training time scales sub-linearly with the beam-size.
# 6 Conclusion
We have introduced a variant of seq2seq and an associated beam search training scheme, which addresses exposure bias as well as label bias, and moreover allows for both training with sequence-level cost functions as well as with hard constraints. Future work will examine scaling this approach to much larger datasets.
# Acknowledgments
We thank Yoon Kim for helpful discussions and for providing the initial seq2seq code on which our implementations are based. We thank Allen Schmaltz for help with the word ordering experiments. We also gratefully acknowledge the support of a Google Research Award.
# References
[Andor et al.2016] Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. ACL.
[Bahdanau et al.2015] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.
[Bahdanau et al.2016] Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2016. An Actor-Critic Algorithm for Sequence Prediction. CoRR, abs/1607.07086.
[Bengio et al.2015] Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems, pages 1171–1179.
[Björkelund and Kuhn2014] Anders Björkelund and Jonas Kuhn. 2014. Learning structured perceptrons for coreference resolution with latent antecedents and non-local features. ACL, Baltimore, MD, USA, June.
[Cettolo et al.2014] Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, and Marcello Federico. 2014. Report on the 11th IWSLT evaluation campaign. In Proceedings of IWSLT, 2014.
[Chang et al.2015] Kai-Wei Chang, Hal Daumé III, John Langford, and Stephane Ross. 2015. Efficient programmable learning to search. In Arxiv.
[Chen and Cherry2014] Boxing Chen and Colin Cherry. 2014. A systematic comparison of smoothing techniques for sentence-level BLEU. ACL 2014, page 362.
[Chen and Manning2014] Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural networks. In EMNLP, pages 740–750.
[Cho et al.2014] KyungHyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation.
[Collins and Roark2004] Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, page 111. Association for Computational Linguistics.
[Daumé III and Marcu2005] Hal Daumé III and Daniel Marcu. 2005. Learning as search optimization: approximate large margin methods for structured prediction. In Proceedings of the Twenty-Second International Conference on Machine Learning (ICML 2005), pages 169–176.
[Daumé III et al.2009] Hal Daumé III, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Machine Learning, 75(3):297–325.
[Duchi et al.2011] John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. The Journal of Machine Learning Research, 12:2121–2159.
[Filippova et al.2015] Katja Filippova, Enrique Alfonseca, Carlos A Colmenares, Lukasz Kaiser, and Oriol Vinyals. 2015. Sentence compression by deletion with LSTMs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 360–368.
[Hochreiter and Schmidhuber1997] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9:1735–1780.
[Huang et al.2012] Liang Huang, Suphan Fayong, and Yang Guo. 2012. Structured perceptron with inexact search. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 142–151. Association for Computational Linguistics.
[Kingsbury2009] Brian Kingsbury. 2009. Lattice-based optimization of sequence classification criteria for neural-network acoustic modeling. In Acoustics, Speech and Signal Processing, 2009. ICASSP 2009. IEEE International Conference on, pages 3761–3764. IEEE.
[Lafferty et al.2001] John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML 2001), pages 282–289.
[Liu et al.2015] Yijia Liu, Yue Zhang, Wanxiang Che, and Bing Qin. 2015. Transition-based syntactic linearization. In Proceedings of NAACL.
[Luong et al.2015] Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, pages 1412–1421.
[Mikolov et al.2013] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119.
[Papineni et al.2002] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on Association for Computational Linguistics, pages 311–318. Association for Computational Linguistics.
[Pham et al.2014] Vu Pham, Théodore Bluche, Christopher Kermorvant, and Jérôme Louradour. 2014. Dropout improves recurrent neural networks for handwriting recognition. In Frontiers in Handwriting Recognition (ICFHR), 2014 14th International Conference on, pages 285–290. IEEE.
[Ranzato et al.2016] Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. ICLR.
[Ross et al.2011] Stéphane Ross, Geoffrey J. Gordon, and Drew Bagnell. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 627–635.
[Sak et al.2014] Hasim Sak, Oriol Vinyals, Georg Heigold, Andrew W. Senior, Erik McDermott, Rajat Monga, and Mark Z. Mao. 2014. Sequence discriminative distributed training of long short-term memory recurrent neural networks. In INTERSPEECH 2014, pages 1209–1213.
[Schmaltz et al.2016] Allen Schmaltz, Alexander M Rush, and Stuart M Shieber. 2016. Word ordering without syntax. arXiv preprint arXiv:1604.08633.
[Serban et al.2016] Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pages 3776–3784.
[Shen et al.2016] Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016.
[Srivastava et al.2014] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958.
[Sutskever et al.2011] Ilya Sutskever, James Martens, and Geoffrey E Hinton. 2011. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML), pages 1017–1024.
[Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 3104–3112.
[Venugopalan et al.2015] Subhashini Venugopalan, Marcus Rohrbach, Jeffrey Donahue, Raymond J. Mooney, Trevor Darrell, and Kate Saenko. 2015. Sequence to sequence - video to text. In ICCV, pages 4534–4542.
[Vinyals et al.2015] Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In Advances in Neural Information Processing Systems, pages 2755–2763.
[Voigtlaender et al.2015] Paul Voigtlaender, Patrick Doetsch, Simon Wiesler, Ralf Schlüter, and Hermann Ney. 2015. Sequence-discriminative training of recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pages 2100–2104. IEEE.
[Watanabe and Sumita2015] Taro Watanabe and Eiichiro Sumita. 2015. Transition-based neural constituent parsing. Proceedings of ACL-IJCNLP.
[Xu et al.2015] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In ICML, pages 2048–2057.
[Yazdani and Henderson2015] Majid Yazdani and James Henderson. 2015. Incremental recurrent neural network dependency parser with search-based discriminative training. In Proceedings of the 19th Conference on Computational Natural Language Learning (CoNLL 2015), pages 142–152.
[Zaremba et al.2014] Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. CoRR, abs/1409.2329.
[Zhang and Clark2011] Yue Zhang and Stephen Clark. 2011. Syntax-based grammaticality improvement using CCG and guided search. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1147–1157. Association for Computational Linguistics.
[Zhang and Clark2015] Yue Zhang and Stephen Clark. 2015. Discriminative syntax-based word ordering for text generation. Computational Linguistics, 41(3):503–538.
[Zhou et al.2015] Hao Zhou, Yue Zhang, and Jiajun Chen. 2015. A neural probabilistic structured-prediction model for transition-based dependency parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pages 1213–1222.
arXiv:1606.02447v1 [cs.CL] 8 Jun 2016

# Learning Language Games through Interaction

# Sida I. Wang, Percy Liang, Christopher D. Manning

# Computer Science Department, Stanford University {sidaw,pliang,manning}@cs.stanford.edu

# Abstract

We introduce a new language learning setting relevant to building adaptive natural language interfaces. It is inspired by Wittgenstein's language games: a human wishes to accomplish some task (e.g., achieving a certain configuration of blocks), but can only communicate with a computer, who performs the actual actions (e.g., removing all red blocks). The computer initially knows nothing about language and therefore must learn it from scratch through interaction, while the human adapts to the computer's capabilities. We created a game called SHRDLURN in a blocks world and collected interactions from 100 people playing it. First, we analyze the humans' strategies, showing that using compositionality and avoiding synonyms correlates positively with task performance. Second, we compare computer strategies, showing that modeling pragmatics on a semantic parsing model accelerates learning for more strategic players.
[Figure 1 screenshot: the player has typed "put brown blocks on orange" and is choosing among the computer's candidate interpretations (level 14/32).]
Figure 1: The SHRDLURN game: the objective is to transform the start state into the goal state. The human types in an utterance, and the computer (which does not know the goal state) tries to interpret the utterance and perform the corresponding action. The computer initially knows nothing about the language, but through the human's feedback, learns the human's language while making progress towards the game goal.
# Introduction
Wittgenstein (1953) famously said that language derives its meaning from use, and introduced the concept of language games to illustrate the fluidity and purpose-orientedness of language. He described how a builder B and an assistant A can use a primitive language consisting of four words ("block", "pillar", "slab", "beam") to successfully communicate what block to pass from A to B. This is only one such language; many others would also work for accomplishing the cooperative goal.
This paper operationalizes and explores the idea of language games in a learning setting, which we call interactive learning through language games (ILLG). In the ILLG setting, the two parties do not initially speak a common language, but nonetheless need to collaboratively accomplish a goal. Specifically, we created a game called SHRDLURN,1 in homage to the seminal work of Winograd (1972). As shown in Figure 1, the objective is to transform a start state into a goal state, but the only action the human can take is entering an utterance. The computer parses the utterance and produces a ranked list of possible interpretations according to its current model. The human scrolls through the list and chooses the intended one, simultaneously advancing the state of the blocks and providing feedback to the computer. Both the human and the computer wish to reach the goal state (only known to the human) with as little scrolling as possible. For the computer to be successful, it has to learn the human's language quickly over the course of the game, so that the human can accomplish the goal more efficiently. Conversely, the human must also accommodate the computer, at least partially understanding what it can and cannot do.

1Demo: http://shrdlurn.sidaw.xyz
We model the computer in the ILLG as a semantic parser (Section 3), which maps natural language utterances (e.g., "remove red") into logical forms (e.g., remove(with(red))). The semantic parser has no seed lexicon and no annotated logical forms, so it just generates many candidate logical forms. Based on the human's feedback, it performs online gradient updates on the parameters corresponding to simple lexical features.
It became evident that while the computer was eventually able to learn the language, it was learning less quickly than one might hope. For example, after learning that "remove red" maps to remove(with(red)), it would think that "remove cyan" also mapped to remove(with(red)), whereas a human would likely use mutual exclusivity to rule out that hypothesis (Markman and Wachtel, 1988). We therefore introduce a pragmatics model in which the computer explicitly reasons about the human, in the spirit of previous work on pragmatics (Golland et al., 2010; Frank and Goodman, 2012; Smith et al., 2013). To make the model suitable for our ILLG setting, we introduce a new online learning algorithm. Empirically, we show that our pragmatic model improves the online accuracy by 8% compared to our best non-pragmatic model on the 10 most successful players (Section 5.3).
What is special about the ILLG setting is the real-time nature of learning, in which the human also learns and adapts to the computer. While the human can teach the computer any language (English, Arabic, Polish, a custom programming language), a good human player will choose to use utterances that the computer is more likely to learn quickly. In the parlance of communication theory, the human accommodates the computer (Giles, 2008; Ireland et al., 2011). Using Amazon Mechanical Turk, we collected and analyzed around 10k utterances from 100 games of SHRDLURN. We show that successful players tend to use compositional utterances with a consistent vocabulary and syntax, which matches the inductive biases of the computer (Section 5.2). In addition, through this interaction, many players adapt to the computer by becoming more consistent, more precise, and more concise.

On the practical side, natural language systems are often trained once and deployed, and users must live with their imperfections. We believe that studying the ILLG setting will be integral for creating adaptive and customizable systems, especially for resource-poor languages and new domains where starting from close to scratch is unavoidable.

# 2 Setting
We now describe the interactive learning of language games (ILLG) setting formally. There are two players, the human and the computer. The game proceeds through a fixed number of levels. In each level, both players are presented with a starting state s ∈ Y, but only the human sees the goal state t ∈ Y (e.g., in SHRDLURN, Y is the set of all configurations of blocks). The human transmits an utterance x (e.g., "remove red") to the computer. The computer then constructs a ranked list of candidate actions Z = [z1, . . . , zK] ⊆ Z (e.g., remove(with(red)), add(with(orange)), etc.), where Z is the set of all possible actions. For each zi in the list, it computes yi = ⟦zi⟧s, the successor state from executing action zi on state s. The computer returns to the human the ordered list Y = [y1, . . . , yK] of successor states. The human then chooses yi from the list Y (we say the computer is correct if i = 1). The state then updates to s = yi. The level ends when s = t, and the players advance to the next level.
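To make the turn structure concrete, here is an illustrative sketch of one level of this loop; the `human`, `parser`, and `executor` objects and their methods are hypothetical stand-ins (not from the paper's code) that only make the data flow explicit.

```python
# Hedged sketch of one ILLG level; interface names are assumptions.
def play_level(start_state, goal_state, human, parser, executor):
    state = start_state
    total_scrolls = 0
    while state != goal_state:
        utterance = human.describe_step(state, goal_state)   # e.g. "remove red"
        actions = parser.rank_actions(utterance, state)      # ranked z_1, ..., z_K
        outcomes = [executor(z, state) for z in actions]     # successor states y_i
        chosen = human.pick(outcomes, goal_state)            # index of the intended y
        total_scrolls += chosen                              # 0 means the top guess was right
        parser.update(utterance, state, actions[chosen])     # online learning signal
        state = outcomes[chosen]
    return total_scrolls                                     # lower is better for both players
```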
Since only the human knows the goal state t and only the computer can perform actions, the only way for the two to play the game successfully is for the human to somehow encode the desired action in the utterance x. However, we assume the two players do not have a shared language, so the human needs to pick a language and teach it to the computer. As an additional twist, the human does not know the exact set of actions Z (although they might have some preconception of the computer's capabilities).2 Finally, the human only sees the outcomes of the computer's actions, not the actual logical actions themselves.

We expect the game to proceed as follows: In the beginning, the computer does not understand what the human is saying and performs arbitrary actions. As the computer obtains feedback and learns, the two should become more proficient at communicating and thus playing the game. Herein lies our key design principle: language learning should be necessary for the players to achieve good game performance.

2This is often the case when we try to interact with a new software system or service before reading the manual.
SHRDLURN. Let us now describe the details of our specific game, SHRDLURN. Each state s ∈ Y consists of stacks of colored blocks arranged in a line (Figure 1), where each stack is a vertical column of blocks. The actions Z are defined compositionally via the grammar in Table 1. Each action either adds to or removes from a set of stacks, and a set of stacks is computed via various set operations and selecting by color. For example, the action remove(leftmost(with(red))) removes the top block from the leftmost stack whose topmost block is red. The compositionality of the actions gives the computer non-trivial capabilities. Of course, the human must teach a language to harness those capabilities, while not quite knowing the exact extent of the capabilities. The actual game proceeds according to a curriculum, where the earlier levels only need simpler actions with fewer predicates.
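As an illustration of this action space (a minimal sketch of my own, not the authors' implementation), the block semantics can be executed directly on a state represented as a list of stacks; the example at the end mirrors remove(leftmost(with(red))).

```python
# Hedged sketch of executing compositional block actions; not the paper's code.
def with_color(state, color):            # with(c): stacks whose top block has color c
    return {i for i, stack in enumerate(state) if stack and stack[-1] == color}

def not_in(state, stacks):               # not(s): all stacks except those in s
    return set(range(len(state))) - stacks

def leftmost(stacks):                    # leftmost(s)
    return {min(stacks)} if stacks else set()

def rightmost(stacks):                   # rightmost(s)
    return {max(stacks)} if stacks else set()

def add(state, stacks, color):           # add(s, c): put a block of color c on each stack in s
    return [stack + [color] if i in stacks else list(stack)
            for i, stack in enumerate(state)]

def remove(state, stacks):               # remove(s): pop the topmost block of each stack in s
    return [stack[:-1] if i in stacks and stack else list(stack)
            for i, stack in enumerate(state)]

# Example: remove(leftmost(with(red))) on a four-stack state.
state = [["orange", "red"], ["cyan"], ["brown", "red"], ["red"]]
next_state = remove(state, leftmost(with_color(state, "red")))
# next_state == [["orange"], ["cyan"], ["brown", "red"], ["red"]]
```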
We designed SHRDLURN in this way for several reasons. First, visual block manipulations are intuitive and can be easily crowdsourced, and it can be fun as an actual game that people would play. Second, the action space is designed to be compositional, mirroring the structure of natural language. Third, many actions z lead to the same successor state y = ⟦z⟧s; e.g., the "leftmost stack" might coincide with the "stack with red blocks" for some state s and therefore an action involving either one would result in the same outcome. Since the human only points out the correct y, the computer must grapple with this indirect supervision, a reflection of real language learning.

# 3 Semantic parsing model

Following Zettlemoyer and Collins (2005) and most recent work on semantic parsing, we use a log-linear model over logical forms (actions) z ∈ Z given an utterance x:

pθ(z | x) ∝ exp(θ⊤φ(x, z)),   (1)

where φ(x, z) ∈ R^d is a feature vector and θ ∈ R^d is a parameter vector. The denotation y (successor state) is obtained by executing z on a state s; formally, y = ⟦z⟧s.
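A bare-bones sketch of how such a log-linear scorer and an online gradient step could look follows; the feature function, learning rate, and the simplification of updating toward a single selected logical form are my assumptions (the paper's feedback is the chosen successor state, which several logical forms may produce).

```python
# Hedged sketch of Equation (1) scoring plus a simple online update; not the paper's code.
import math
from collections import defaultdict

def score(theta, feats):
    return sum(theta[f] * v for f, v in feats.items())

def rank(theta, utterance, candidates, feature_fn):
    """Sort candidate logical forms by theta . phi(x, z), i.e. by p_theta(z | x)."""
    return sorted(candidates, key=lambda z: -score(theta, feature_fn(utterance, z)))

def update(theta, utterance, candidates, selected, feature_fn, lr=0.1):
    """One SGD step on log p_theta(selected | utterance) over the candidate set."""
    scored = [(z, score(theta, feature_fn(utterance, z))) for z in candidates]
    m = max(s for _, s in scored)
    exps = [(z, math.exp(s - m)) for z, s in scored]
    total = sum(e for _, e in exps)
    for z, e in exps:                                       # subtract expected features
        for f, v in feature_fn(utterance, z).items():
            theta[f] -= lr * (e / total) * v
    for f, v in feature_fn(utterance, selected).items():    # add observed features
        theta[f] += lr * v

theta = defaultdict(float)   # parameters start at zero: no seed lexicon
```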
Features. Our features are n-grams (including skip-grams) conjoined with tree-grams on the logical form side. Specifically, on the utterance side (e.g., "stack red on orange"), we use unigrams ("stack", ·, ·), bigrams ("red", "on", ·), trigrams ("red", "on", "orange"), and skip-trigrams ("stack", ·, "on"). On the logical form side, features correspond to the predicates in the logical forms and their arguments. For each predicate h, let h.i be the i-th argument of h. Then, we define tree-gram features φ(h, d) for predicate h and depth d = 0, 1, 2, 3 recursively as follows:

φ(h, 0) = {h},
φ(h, d) = {(h, i, φ(h.i, d − 1)) | i = 1, 2, 3}.
The set of all features is just the cross product of utterance features and logical form features. For example, if x = "enlever tout" and z = remove(all()), then features include:

("enlever", all), ("enlever", remove), ("enlever", (remove, 1, all)), ("tout", all), ("tout", remove), ("tout", (remove, 1, all))
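The cross-product construction can be sketched as follows; padding symbols and the exact skip-gram inventory are assumptions on my part, chosen only to reproduce the example feature pairs above, and a logical form is represented as a nested tuple such as ("remove", ("all",)).

```python
# Hedged sketch of n-gram x tree-gram cross-product features; not the paper's code.
def utterance_ngrams(tokens):
    grams = set(tokens)                                                        # unigrams
    grams |= set(zip(tokens, tokens[1:]))                                      # bigrams
    grams |= set(zip(tokens, tokens[1:], tokens[2:]))                          # trigrams
    grams |= {(a, "*", c) for a, _, c in zip(tokens, tokens[1:], tokens[2:])}  # skip-trigrams
    return grams

def tree_grams(form, depth=3):
    """phi(h, 0) = {h};  phi(h, d) = {(h, i, phi(h.i, d-1))} over arguments i."""
    head = form[0] if isinstance(form, tuple) else form
    grams = {head}
    if depth > 0 and isinstance(form, tuple):
        for i, arg in enumerate(form[1:], start=1):
            grams |= {(head, i, g) for g in tree_grams(arg, depth - 1)}
    return grams

def features(utterance, form):
    """Cross product of utterance n-grams and logical-form tree-grams."""
    return {(u, t): 1.0
            for u in utterance_ngrams(utterance.split())
            for t in tree_grams(form)}

# features("enlever tout", ("remove", ("all",))) contains, among others,
# ("enlever", "remove"), ("tout", "all"), and ("tout", ("remove", 1, "all")).
```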
Note that we do not model an explicit alignment or derivation compositionally connecting the utter- ance and the logical form, in contrast to most tradi- tional work in semantic parsing (Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Liang et al., 2011; Kwiatkowski et al., 2010; Berant et al., 2013), instead following a looser model of semantics similar to (Pasupat and Liang, 2015). Modeling explicit alignments or derivations is only computationally feasible when we are learn- ing from annotated logical forms or have a seed lexicon, since the number of derivations is much larger than the number of logical forms. In the ILLG setting, neither are available. | 1606.02447#11 | Learning Language Games through Interaction | We introduce a new language learning setting relevant to building adaptive
Generation/parsing. We generate logical forms from smallest to largest using beam search. Specifically, for each size n = 1, ..., 8, we construct a set of logical forms of size n (with exactly n predicates) by combining logical forms of smaller sizes according to the grammar rules in Table 1. For each n, we keep the 100 logical forms z with the highest score θ⊤φ(x, z) according to the current model θ. Let Z be the set of logical forms on the final beam, which contains logical forms of all sizes n.
Rule                  Semantics                  Description
Set                   all()                      all stacks
Color                 cyan|brown|red|orange      primitive color
Color → Set           with(c)                    stacks whose top block has color c
Set → Set             not(s)                     all stacks except those in s
Set → Set             leftmost|rightmost(s)      leftmost/rightmost stack in s
Set × Color → Act     add(s, c)                  add block with color c on each stack in s
Set → Act             remove(s)                  remove the topmost block of each stack in s
Table 1: The formal grammar defining the compositional action space Z for SHRDLURN. We use c to denote a Color and s to denote a Set. For example, one action that we have in SHRDLURN is: "add an orange block to all but the leftmost brown block" ↦ add(not(leftmost(with(brown))), orange).
During training, due to pruning at intermediate sizes, Z is not guaranteed to contain the logical form that obtains the observed state y. To mitigate this effect, we use a curriculum so that only simple actions are needed in the initial levels, giving the human an opportunity to teach the computer about basic terms such as colors first before moving on to larger composite actions.
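The size-bounded beam search can be sketched as follows. This is our own illustration, not the authors' code: `score(x, z)` stands in for θ⊤φ(x, z), rule application is simplified, and the exact notion of size is an assumption.

```python
# Sketch of generating logical forms from smallest to largest (ours).
COLORS = ["cyan", "brown", "red", "orange"]
SET_OPS = ("all", "with", "not", "leftmost", "rightmost")

def size(z):
    if not isinstance(z, tuple):
        return 1                              # a primitive color
    return 1 + sum(size(a) for a in z[1:])

def combinations(sets):
    """Apply each Table 1 rule once to existing Set-typed logical forms."""
    out = [("with", c) for c in COLORS]
    out += [(op, s) for op in ("not", "leftmost", "rightmost") for s in sets]
    out += [("add", s, c) for s in sets for c in COLORS]
    out += [("remove", s) for s in sets]
    return out

def beam_search(x, score, max_size=8, beam=100):
    sets_by_size = {1: [("all",)]}            # Set-typed forms found so far, by size
    final_beam = [("all",)]
    for n in range(2, max_size + 1):
        smaller = [s for m in sets_by_size for s in sets_by_size[m] if m < n]
        candidates = [z for z in combinations(smaller) if size(z) == n]
        candidates.sort(key=lambda z: score(x, z), reverse=True)
        top = candidates[:beam]               # keep the highest-scoring forms of size n
        sets_by_size[n] = [z for z in top if z[0] in SET_OPS]
        final_beam += top
    return final_beam                         # Z: surviving logical forms of every size

Z = beam_search("remove red", lambda x, z: 0.0, max_size=3, beam=10)
```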
The system executes all of the logical forms on the final beam Z, and orders the resulting denotations y by the maximum probability of any logical form that produced it.³
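For concreteness, the denotation function [z]s over the grammar in Table 1 can be sketched as follows. This is a minimal re-implementation of ours: a state is assumed to be a list of stacks (lists of color strings, bottom to top), and the helper names are ours.

```python
# Minimal sketch of executing a logical form on a block state (ours).
def execute(z, state):
    def eval_set(s):
        op = s[0]                                    # a Set denotes stack indices
        if op == "all":
            return list(range(len(state)))
        if op == "with":
            return [i for i in range(len(state))
                    if state[i] and state[i][-1] == s[1]]
        if op == "not":
            inner = set(eval_set(s[1]))
            return [i for i in range(len(state)) if i not in inner]
        if op in ("leftmost", "rightmost"):
            inner = eval_set(s[1])
            if not inner:
                return []
            return [min(inner)] if op == "leftmost" else [max(inner)]
        raise ValueError("unknown set constructor: %s" % op)

    new_state = [list(stack) for stack in state]
    if z[0] == "add":
        for i in eval_set(z[1]):
            new_state[i].append(z[2])
    elif z[0] == "remove":
        for i in eval_set(z[1]):
            if new_state[i]:
                new_state[i].pop()
    else:
        raise ValueError("unknown action: %s" % z[0])
    return new_state

state = [["brown"], ["brown"], ["red"]]
z = ("add", ("not", ("leftmost", ("with", "brown"))), "orange")
print(execute(z, state))   # orange lands on every stack except the leftmost brown one
```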
Learning. When the human provides feedback in the form of a particular y, the system forms the following loss function:
ℓ(θ, x, y) = −log pθ(y | x, s) + λ‖θ‖₁,        (2)

pθ(y | x, s) = Σ_{z : [z]s = y} pθ(z | x).        (3)
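A direct transcription of (2)–(3), assuming a log-linear pθ(z | x) over the beam Z and reusing the feature and execution sketches above, might look as follows (our sketch, not the authors' implementation).

```python
import math

# Sketch of the loss in (2)-(3) over a beam Z of candidate logical forms (ours).
def log_likelihoods(theta, x, Z, features):
    scores = {z: sum(theta.get(f, 0.0) for f in features(x, z)) for z in Z}
    log_norm = math.log(sum(math.exp(s) for s in scores.values()))
    return {z: s - log_norm for z, s in scores.items()}      # log p_theta(z | x)

def loss(theta, x, y, s, Z, features, execute, lam=0.01):
    logp = log_likelihoods(theta, x, Z, features)
    consistent = [z for z in Z if execute(z, s) == y]         # z with [z]_s = y
    if not consistent:
        return float("inf")                                   # y not reachable from the beam
    nll = -math.log(sum(math.exp(logp[z]) for z in consistent))
    return nll + lam * sum(abs(v) for v in theta.values())    # L1 regularizer
```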
Then it makes a single gradient update using AdaGrad (Duchi et al., 2010), which maintains a per-feature step size.

# 4 Modeling pragmatics

In our initial experience with the semantic parsing model described in Section 3, we found that it was able to learn reasonably well, but lacked a reasoning ability that one finds in human learners. To illustrate the point, consider the beginning of a game when θ = 0 in the log-linear model pθ(z | x). Suppose that the human utters "remove red" and then identifies z_rm-red = remove(with(red)) as the correct logical form. The computer then performs a gradient update on the loss function (2), upweighting features such as ("remove", remove) and ("remove", red).

Next, suppose the human utters "remove cyan". Note that z_rm-red will score higher than all other formulas, since the ("remove", red) feature will fire again. While statistically justified, this behavior fails to meet our intuitive expectations for a smart language learner. Moreover, this behavior is not specific to our model, but applies to any statistical model that simply tries to fit the data without additional prior knowledge about the specific language. While we would not expect the computer to magically guess "remove cyan" ↦ remove(with(cyan)), it should at least push down the probability of z_rm-red, because z_rm-red intuitively is already well explained by another utterance, "remove red".

This phenomenon, mutual exclusivity, was studied by Markman and Wachtel (1988). They found that children, during their language acquisition process, reject a second label for an object and treat it instead as a label for a novel object.
³ We tried ordering based on the sum of the probabilities (which corresponds to marginalizing out the logical form), but this had the degenerate effect of assigning too much probability mass to y being the set of empty stacks, which can result from many actions.
The pragmatic computer. To model mutual exclusivity formally, we turn to probabilistic models of pragmatics (Golland et al., 2010; Frank and Goodman, 2012; Smith et al., 2013; Goodman and Lassiter, 2015), which operationalize the ideas of Grice (1975). The central idea in these models is to treat language as a cooperative game between a speaker (human) and a listener (computer), as we are doing, but where the listener has an explicit model of the speaker's strategy, which in turn models the listener. Formally, let S(x | z) be the speaker's strategy and L(z | x) be the listener's strategy.
Literal listener pθ(z | x):
                 z_rm-red   z_rm-cyan
"remove red"       0.8        0.1
"remove cyan"      0.6        0.2

Pragmatic speaker S(x | z):
                 z_rm-red   z_rm-cyan
"remove red"       0.57       0.33
"remove cyan"      0.43       0.67

Pragmatic listener L(z | x):
                 z_rm-red   z_rm-cyan
"remove red"       0.46       0.27
"remove cyan"      0.24       0.38
Table 2: Suppose the computer saw one example of "remove red" ↦ z_rm-red, and then the human utters "remove cyan". Top: the literal listener, pθ(z | x), mistakenly chooses z_rm-red over z_rm-cyan. Middle: the pragmatic speaker, S(x | z), assigns a higher probability to "remove cyan" given z_rm-cyan. Bottom: the pragmatic listener, L(z | x), correctly assigns a lower probability to z_rm-red. Here p(z) is uniform.
The speaker takes into account the literal semantic parsing model pθ(z | x) as well as a prior over utterances p(x), while the listener considers the speaker S(x | z) and a prior p(z):

S(x | z) ∝ (pθ(z | x) p(x))^β,        (4)
L(z | x) ∝ S(x | z) p(z),             (5)
where β ≥ 1 is a hyperparameter that sharpens the distribution (Smith et al., 2013). The computer would then use L(z | x) to rank candidates rather than pθ. Note that our pragmatic model only affects the ranking of actions returned to the human and does not affect the gradient updates of the model pθ.

Let us walk through a simple example to see the effect of modeling pragmatics. Table 2 shows that the literal listener pθ(z | x) assigns high probability to z_rm-red for both "remove red" and "remove cyan". Assuming a uniform p(x) and β = 1, the pragmatic speaker S(x | z) corresponds to normalizing each column of pθ. Note that if the pragmatic speaker wanted to convey z_rm-cyan, there is a decent chance that they would favor "remove cyan". Next, assuming a uniform p(z), the pragmatic listener L(z | x) corresponds to normalizing each row of S(x | z). The result is that conditioned on "remove cyan", z_rm-cyan is now more likely than z_rm-red, which is the desired effect.
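The two normalizations can be reproduced mechanically from any small matrix of literal-listener probabilities. The sketch below is ours and uses the toy numbers from Table 2 (top); note that the table's bottom rows also reflect probability mass on other logical forms, so the exact printed values differ from this two-candidate computation.

```python
# Sketch of the pragmatic speaker (4) and listener (5) on a toy p_theta (ours).
def pragmatic_listener(p_literal, p_x, p_z, beta=1.0):
    xs, zs = list(p_x), list(p_z)
    S = {}                                   # S(x | z): normalize over utterances x
    for z in zs:
        col = {x: (p_literal[x][z] * p_x[x]) ** beta for x in xs}
        total = sum(col.values())
        S[z] = {x: v / total for x, v in col.items()}
    L = {}                                   # L(z | x): normalize over logical forms z
    for x in xs:
        row = {z: S[z][x] * p_z[z] for z in zs}
        total = sum(row.values())
        L[x] = {z: v / total for z, v in row.items()}
    return L

p_literal = {"remove red":  {"z_rm-red": 0.8, "z_rm-cyan": 0.1},
             "remove cyan": {"z_rm-red": 0.6, "z_rm-cyan": 0.2}}
uniform = lambda keys: {k: 1.0 / len(keys) for k in keys}
L = pragmatic_listener(p_literal,
                       uniform(["remove red", "remove cyan"]),
                       uniform(["z_rm-red", "z_rm-cyan"]))
print(L["remove cyan"])   # z_rm-cyan now outranks z_rm-red
```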
The pragmatic listener models the speaker as a cooperative agent who behaves in a way to maximize communicative success. Certain speaker
behaviors such as avoiding synonyms (e.g., not "delete cardinal") and using a consistent word ordering (e.g., not "red remove") fall out of the game theory.⁴ For speakers that do not follow this strategy, our pragmatic model is incorrect, but as we get more data through game play, the literal listener pθ(z | x) will sharpen, so that the literal listener and the pragmatic listener will coincide in the limit.
∀z, C(z) ← 0;  ∀z, Q(z) ← ε
repeat
    receive utterance x from human
    L(z | x) ∝ pθ(z | x)^β · P(z) / Q(z)
    send human a list Y ranked by L(z | x)
    receive y ∈ Y from human
    θ ← θ − η ∇θ ℓ(θ, x, y)
    Q(z) ← Q(z) + pθ(z | x)^β
    C(z) ← C(z) + pθ(z | x, [z]s = y)
    P(z) ← (C(z) + α) / Σ_{z'} (C(z') + α)
until game ends
Algorithm 1: Online learning algorithm that updates the parameters of the semantic parser θ as well as the counts C, Q required to perform pragmatic reasoning.
Online learning with pragmatics. To implement the pragmatic listener as defined in (5), we need to compute the speaker's normalization constant Σ_x pθ(z | x) p(x) in order to compute S(x | z) in (4). This requires parsing all utterances x under pθ(z | x). To avoid this heavy computation in an online setting, we propose Algorithm 1, where some approximations are used for the sake of efficiency. First, to approximate the intractable sum over all utterances x, we only use the examples that have been seen to compute the normalization constant: Σ_x pθ(z | x) p(x) ≈ Σ_i pθ(z | x_i). Then, in order to avoid parsing all previous examples again using the current parameters for each new example, we store Q(z) = Σ_i pθ_i(z | x_i)^β, where θ_i is the parameter vector after the model updates on the i-th example x_i. While θ_i is different from the current parameter θ, pθ(z | x_i) ≈ pθ_i(z | x_i) for the relevant example x_i, which is accounted for
by both θ_i and θ.

⁴ Of course, synonyms and variable word order occur in real language. We would need a more complex game compared to SHRDLURN to capture this effect.
In Algorithm 1, the pragmatic listener L(z | x) can be interpreted as an importance-weighted version of the sharpened literal listener pθ^β: it is downweighted by Q(z), which reflects which z's the literal listener prefers, and upweighted by P(z), which is just a smoothed estimate of the actual distribution over logical forms p(z). By construction, Algorithm 1 is the same as (4) except that it uses the normalization constant Q based on the stale parameters θ_i after seeing each example, and it uses samples to compute the sum over x. Following (5), we also need p(z), which is estimated by P(z) using add-α smoothing on the counts C(z). Note that Q(z) and C(z) are updated after the model parameters are updated for the current example.
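Our reading of one iteration of Algorithm 1 as code is sketched below. The helpers `p_theta`, `parse_beam`, `grad_loss`, `execute`, and `ask_human` are hypothetical stand-ins for the model, the parser, the gradient of (2), the denotation function, and the human interaction; they are not part of the released system.

```python
from collections import defaultdict

# Sketch of the online pragmatic updates in Algorithm 1 (our reading).
def play_one_example(theta, Q, C, x, s, alpha=1.0, beta=3.0, eta=0.1, eps=1e-3):
    Z = parse_beam(theta, x)                         # candidate logical forms
    total_C = sum(C.values())
    def P(z):                                        # add-alpha smoothed estimate of p(z)
        return (C[z] + alpha) / (total_C + alpha * max(len(C), 1))

    # pragmatic listener: sharpened literal listener reweighted by P(z)/Q(z)
    L = {z: (p_theta(theta, x, z) ** beta) * P(z) / (Q[z] + eps) for z in Z}
    ranking = sorted(Z, key=lambda z: L[z], reverse=True)
    y = ask_human(ranking, s)                        # human picks the intended denotation

    # update the model first, then the pragmatics counts
    for f, g in grad_loss(theta, x, y, s, Z).items():
        theta[f] -= eta * g
    for z in Z:
        Q[z] += p_theta(theta, x, z) ** beta
        if execute(z, s) == y:                       # pseudocounts over consistent z
            C[z] += p_theta(theta, x, z, restrict_to=y)
    return y

theta, Q, C = defaultdict(float), defaultdict(float), defaultdict(float)
```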
Lastly, there is a small complication due to only observing the denotation y and not the logical form z. We simply give each consistent logical form {z | [z]s = y} a pseudocount based on the model: C(z) ← C(z) + pθ(z | x, [z]s = y), where pθ(z | x, [z]s = y) ∝ exp(θ⊤φ(x, z)) for [z]s = y (and 0 otherwise).
Compared to prior work where the setting is specifically designed to require pragmatic inference, pragmatics arises naturally in the ILLG setting. We think that this form of pragmatics is most important during learning, and becomes less important if we had more data. Indeed, if we have a lot of data and a small number of possible z's, then L(z | x) ≈ pθ(z | x), as Σ_x pθ(z | x) p(x) → p(z) when β = 1.⁵ However, for semantic parsing, we would not be in this regime even if we had a large amount of training data. In particular, we are nowhere near that regime in SHRDLURN: most of our utterances and logical forms are seen only once, so the importance of modeling pragmatics remains.
# 5 Experiments
# 5.1 Setting
Data. Using Amazon Mechanical Turk (AMT), we paid 100 workers 3 dollars each to play SHRDLURN. In total, we have 10223 utterances along with their starting states s. Of these, 8874 utterances are labeled with their denotations y; the rest are unlabeled, since the player can try any utterance without accepting an action. 100 players completed the entire game under identical settings.

⁵ Technically, we also need pθ to be well-specified.

We deliberately chose to start from scratch for every worker, so that we can study the diversity of strategies that different people used in a controlled setting.
Each game consists of 50 blocks tasks divided into 5 levels of 10 tasks each, in increasing complexity. Each level aims to reach an end goal given a start state. Each game took on average 89 utterances to complete.⁶ It only took 6 hours to complete these 100 games on AMT, and each game took around an hour on average according to AMT's work time tracker (which does not account for multi-tasking players). The players were provided minimal instructions on the game controls. Importantly, we gave no example utterances in order to avoid biasing their language use. Around 20 players were confused and told us that the instructions were not clear and gave us mostly spam utterances. Fortunately, most players understood the setting and some even enjoyed SHRDLURN, as reflected by their optional comments:

• That was probably the most fun thing I have ever done on mTurk.

• Wow this was one mind bending games [sic].
Metrics. We use the number of scrolls as a measure of game performance for each player. For each example, the number of scrolls is the position in the list Y of the action selected by the player. It was possible to complete this version of SHRDLURN by scrolling (all actions can be found in the first 125 of Y): 22 of the 100 players failed to teach an actual language, and instead finished the game mostly by scrolling. Let us call them spam players; they usually typed single letters, random words, digits, or random phrases (e.g., "how are you"). Overall, spam players had to scroll a lot: 21.6 scrolls per utterance versus only 7.4 for the non-spam players.
# 5.2 Human strategies
Some example utterances can be found in Table 3. Most of the players used English, but varied in their adherence to conventions such as the use of determiners, plurals, and proper word ordering. 5 players invented their own language, which was more precise and more consistent than general English. One player used Polish, and another used Polish notation (bottom of Table 3).
⁶ This number is not 50 because some block tasks need multiple steps and players are also allowed to explore without reaching the goal.
Most successful players (1st–20th)

rem cy pos 1, stack or blk pos 4, rem blk pos 2 thru 5, rem blk pos 2 thru 4, stack bn blk pos 1 thru 2, fill bn blk, stack or blk pos 2 thru 6, rem cy blk pos 2 fill rd blk (3.01)

remove the brown block, remove all orange blocks, put brown block on orange blocks, put orange blocks on all blocks, put blue block on leftmost blue block in top row (2.78)

Remove the center block, Remove the red block, Remove all red blocks, Remove the first orange block, Put a brown block on the first brown block, Add blue block on first blue block (2.72)
Average players (21st–50th)

reinsert pink, take brown, put in pink, remove two pink from second layer, Add two red to second layer in odd intervals, Add five pink to second layer, Remove one blue and one brown from bottom layer (9.17)

remove red, remove 1 red, remove 2 4 orange, add 2 red, add 1 2 3 4 blue, emove 1 3 5 orange, add 2 4 orange, add 2 orange, remove 2 3 brown, add 1 2 3 4 5 red, remove 2 3 4 5 6, remove 2, add 1 2 3 4 6 red (8.37)

move second cube, double red with blue, double first red with red, triple second and fourth with orange, add red, remove orange on row two, add blue to column two, add brown on first and third (7.18)
Least successful players (51st–)
holdleftmost, holdbrown, holdleftmost, blueonblue, brownonblue1, blueonorange, holdblue, holdorange2, blueonred2, holdends1, holdrightend, hold2, orangeonorangerightmost (14.15)

"add red cubes on center left, center right, far left and far right", "remove blue blocks on row two column two, row two column four", remove red blocks in center left and center right on second row (12.6)
Spam players (~85th–100th)
next, hello happy, how are you, move, gold, build goal blocks, 23, house, gabboli, x, run'xav, d, j, xcv, dulicate goal (21.7)
Most interesting | 1606.02447#28 | Learning Language Games through Interaction | We introduce a new language learning setting relevant to building adaptive
usuń brązowe klocki, postaw pomarańczowy klocek na pierwszym klocku, postaw czerwone klocki na pomarańczowych, usuń pomarańczowe klocki w górnym rzędzie

rm scat + 1 c, + 1 c, rm sh, + 1 2 4 sh, + 1 c, - 4 o, rm 1 r, + 1 3 o, full fill c, rm o, full fill sh, - 1 3, full fill sh, rm sh, rm r, + 2 3 r, rm o, + 3 sh, + 2 3 sh, rm b, - 1 o, + 2 c,

mBROWN, mBLUE, mORANGE, RED+ORANGE^ORANGE, BROWN+BROWNm1+BROWNm3, ORANGE+BROWN+ORANGE^m1+ORANGE^m3+BROWN^^2+BROWN^^4
Table 3: Example utterances, along with the average number of scrolls for that player in parentheses. Success is measured by the number of scrolls: more successful players need fewer scrolls. 1) The 20 most successful players tend to use consistent and concise language whose semantics is similar to our logical language. 2) Average players tend to be slightly more verbose and inconsistent (left and right), or significantly different from our logical language (middle). 3) Reasons for being unsuccessful vary. Left: no tokenization; middle: used a coordinate system and many conjunctions; right: confused in the beginning, and used a language very different from our logical language.
Overall, we find that many players adapt in ILLG by becoming more consistent, less verbose, and more precise, even if they used standard English at the beginning. For example, some players became more consistent over time (e.g., from using both "remove" and "discard" to only using "remove"). In terms of verbosity, removing function words like determiners as the game progresses is a common adaptation. In each of the following examples from different players, we compare an utterance that appeared early in the game to a similar utterance that appeared later: "Remove the red ones" became "Remove red."; "add brown on top of red" became "add orange on red"; "add red blocks to all red blocks" became "add red to red"; "dark red" became "red"; one player used "the" in all of the first 20 utterances, and then never used "the" in the last 75 utterances.
Players also vary in precision, ranging from overspecified (e.g., "remove the orange cube at the left", "remove red blocks from top row") to underspecified or requiring context (e.g., "change colors", "add one blue", "Build more blocus", "Move the blocks fool", "Add two red cubes"). We found that some players became more precise over time, as they gained a better understanding of ILLG.
Most players use utterances that actually do not match our logical language in Table 1, even the successful players. In particular, numbers are often used. While some concepts always have the same effect in our blocks world (e.g., "first block" means leftmost), most are different. More concretely, of the top 10 players, 7 used numbers of some form and only 3 players matched our logical language. Some players who did not match the logical language performed quite well nevertheless. One possible explanation is that the required action is somewhat constrained by the logical language and some tokens can take on unintended interpretations. For example, the computer can correctly interpret numerical positional references as long as the player only refers to the leftmost and rightmost positions. So if the player says "rem blk pos 4" and "rem blk pos 1", the computer can interpret "pos" as rightmost and interpret the bigram ("pos", "1") as leftmost. On the other hand, players who deviated significantly by describing the desired state declaratively (e.g., "red orange red",
Compositionality. As far as we can tell, all players used a compositional language; no one invented unrelated words for each action. Interestingly, 3 players did not put spaces between words. Since we assume monomorphemic words separated by spaces, they had to do a lot of scrolling as a result (e.g., 14.15 with utterances like "orangeonorangerightmost").
# 5.3 Computer strategies
We now present quantitative results on how quickly the computer can learn, where our goal is to achieve high accuracy on new utterances as we make just a single pass over the data. The number of scrolls used to evaluate each player is sensitive to outliers and not as intuitive as accuracy. Instead, we consider online accuracy, defined as follows. Formally, if a player produced T utterances x^(1), ..., x^(T) and labeled them y^(1), ..., y^(T), then

online accuracy = (1/T) Σ_{j=1}^{T} 1[ y^(j) = [z^(j)]_{s^(j)} ],

where z^(j) = arg max_z pθ^(j−1)(z | x^(j)) is the model prediction based on the previous parameters θ^(j−1). Note that the online accuracy is defined with respect to the player-reported labels, which only correspond to the actual accuracy if the player is precise and honest. This is not true for most spam players.
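Computed directly, the metric is just an average of exact-match indicators over one player's interaction log. A small sketch of ours, assuming each record stores the start state, the player-selected label, and the model's top prediction made before updating on that example:

```python
# Sketch of online accuracy over one player's game log (ours).
def online_accuracy(log, execute):
    """log: list of (x, s, y, z_hat) tuples; execute(z, s) is the denotation [z]_s."""
    correct = sum(1 for (x, s, y, z_hat) in log if execute(z_hat, s) == y)
    return correct / len(log) if log else 0.0
```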
Compositionality. To study the importance of compositionality, we consider two baselines.
Figure 2: Pragmatics improve online accuracy. In these plots, each marker is a player. Red o: players who ranked 1–20 in terms of minimizing the number of scrolls; green x: players 20–50; blue +: lower than 50 (includes spam players). Marker sizes correspond to player rank, where better players are depicted with larger markers. 2a: online accuracies with and without pragmatics on the full model; 2b: the same for the half model.
Method        memorize   half model   half + prag   full model   full + prag
top 10          25.4        38.7          43.7          48.6          52.8
top 20          24.5        38.4          42.7          47.8          49.8
top 50          22.5        36.0          39.7          44.9          45.8
all 100         17.6        27.0          29.4          33.3          33.8
Table 4: Average online accuracy under various settings. memorize: featurize the entire utterance and logical form non-compositionally; half model: featurize the utterances with unigrams, bigrams, and skip-grams but conjoin with the entire logical form; full model: the model described in Section 3; +prag: the models above, with our online pragmatics algorithm described in Section 4. Both compositionality and pragmatics improve accuracy.
First, consider a non-compositional model (memorize) that just remembers pairs of complete utterances and logical forms. We implement this using indicator features on entire (x, z) pairs, e.g., ("remove all the red blocks", z_rm-red), and use a large learning rate. Second, we consider a model (half) that treats utterances compositionally with unigram, bigram, and skip-trigram features, but regards the logical forms as non-compositional, so we have features such as ("remove", z_rm-red), ("red", z_rm-red), etc.
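The difference between the three featurizations compared in Table 4 can be stated in a few lines; this is our sketch, reusing the hypothetical `utterance_ngrams` and `features` helpers from the earlier sketches.

```python
# Sketch of the three featurizations compared in Table 4 (ours).
def memorize_features(x, z):
    return {(x, z)}                                   # whole utterance x whole logical form

def half_features(x, z):
    return {(u, z) for u in utterance_ngrams(x)}      # n-grams x whole logical form

def full_features(x, z):
    return features(x, z)                             # n-grams x tree-grams (Section 3)
```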
Table 4 shows that the full model (Section 3) significantly outperforms both the memorize and half baselines. The learning rate η = 0.1 was selected via cross-validation, and we used α = 1 and β = 3 following Smith et al. (2013).
Pragmatics. Next, we study the effect of pragmatics on online accuracy. Figure 2 shows that modeling pragmatics helps successful players (e.g., the top 10 by number of scrolls), who use precise and consistent languages. Interestingly, our pragmatics model did not help and can even hurt the less successful players, who are less precise and consistent. This is expected behavior: the pragmatics model assumes that the human is cooperative and behaving rationally. For the bottom half of the players, this assumption is not true, in which case the pragmatics model is not useful.
# 6 Related Work and Discussion | 1606.02447#38 | Learning Language Games through Interaction | We introduce a new language learning setting relevant to building adaptive
Our work connects with a broad body of work on grounded language, in which language is used in some environment as a means towards some goal. Examples include playing games (Branavan et al., 2009, 2010; Reckman et al., 2010), interacting with robots (Tellex et al., 2011, 2014), and following instructions (Vogel and Jurafsky, 2010; Chen and Mooney, 2011; Artzi and Zettlemoyer, 2013). Semantic parsing of utterances into logical forms, which we leverage, plays an important role in these settings (Kollar et al., 2010; Matuszek et al., 2012; Artzi and Zettlemoyer, 2013).

What makes this work unique is our new interactive learning of language games (ILLG) setting, in which a model has to learn a language from scratch through interaction. While online gradient descent is frequently used, for example in semantic parsing (Zettlemoyer and Collins, 2007; Chen, 2012), we use it in a truly online setting, taking one pass over the data and measuring online accuracy (Cesa-Bianchi and Lugosi, 2006).
To speed up learning, we leverage computational models of pragmatics (Jäger, 2008; Golland et al., 2010; Frank and Goodman, 2012; Smith et al., 2013; Vogel et al., 2013). The main difference is that these previous works use pragmatics with a trained base model, whereas we learn the model online. Monroe and Potts (2015) use learning to improve the pragmatics model. In contrast, we use pragmatics to speed up the learning process by capturing phenomena like mutual exclusivity (Markman and Wachtel, 1988). We also differ from prior work in several details. First, we model pragmatics in the online learning setting, where we use an online update for the pragmatics model. Second, unlike reference games, where pragmatic effects play an important role by
design, SHRDLURN is not specifically designed to require pragmatics. The improvement we get is mainly due to players trying to be consistent in their language use. Finally, we treat both the utterances and the logical forms as featurized compositional objects. Smith et al. (2013) treat utterances (i.e., words) and logical forms (i.e., objects) as categories; Monroe and Potts (2015) use features, but over flat categories.
Looking forward, we believe that the ILLG setting is worth studying and has important implications for natural language interfaces. Today, these systems are trained once and deployed. If these systems could quickly adapt to user feedback in real-time as in this work, then we might be able to more readily create systems for resource-poor languages and new domains that are customizable and improve through use.
# Acknowledgments
This work was supported by the DARPA Communicating with Computers (CwC) program under ARO prime contract no. W911NF-15-1-0462. The first author is supported by an NSERC PGS-D fellowship. In addition, we thank Will Monroe and Chris Potts for their insightful comments and discussions on pragmatics.
# Reproducibility
All code, data, and experiments for this paper are available on the CodaLab platform: https://worksheets.codalab.org/worksheets/0x9fe4d080bac944e9a6bd58478cb05e5e. The client-side code is at https://github.com/sidaw/shrdlurn/tree/acl16-demo, and a demo is available at http://shrdlurn.sidaw.xyz.
# References
Y. Artzi and L. Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics (TACL) 1:49-62.

J. Berant, A. Chou, R. Frostig, and P. Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Empirical Methods in Natural Language Processing (EMNLP).

S. Branavan, H. Chen, L. S. Zettlemoyer, and R. Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL-IJCNLP). pages 82-90.
S. Branavan, L. Zettlemoyer, and R. Barzilay. 2010. Reading between the lines: Learning to map high-level instructions to commands. In Association for Computational Linguistics (ACL). pages 1268-1277.

N. Cesa-Bianchi and G. Lugosi. 2006. Prediction, Learning, and Games. Cambridge University Press.

D. L. Chen. 2012. Fast online lexicon learning for grounded language acquisition. In Association for Computational Linguistics (ACL).

D. L. Chen and R. J. Mooney. 2011. Learning to interpret natural language navigation instructions from observations. In Association for the Advancement of Artificial Intelligence (AAAI). pages 859-865.

J. Duchi, E. Hazan, and Y. Singer. 2010. Adaptive subgradient methods for online learning and stochastic optimization. In Conference on Learning Theory (COLT).

M. Frank and N. D. Goodman. 2012. Predicting pragmatic reasoning in language games. Science 336:998-998.
H. Giles. 2008. Communication accommodation theory. Sage Publications, Inc.

D. Golland, P. Liang, and D. Klein. 2010. A game-theoretic approach to generating spatial descriptions. In Empirical Methods in Natural Language Processing (EMNLP).

N. Goodman and D. Lassiter. 2015. Probabilistic Semantics and Pragmatics: Uncertainty in Language and Thought. The Handbook of Contemporary Semantic Theory, 2nd Edition. Wiley-Blackwell.

H. P. Grice. 1975. Logic and conversation. Syntax and Semantics 3:41-58.

M. E. Ireland, R. B. Slatcher, P. W. Eastwick, L. E. Scissors, E. J. Finkel, and J. W. Pennebaker. 2011. Language style matching predicts relationship initiation and stability. Psychological Science 22(1):39-44.

G. Jäger. 2008. Game theory in semantics and pragmatics. Technical report, University of Tübingen.
T. Kollar, S. Tellex, D. Roy, and N. Roy. 2010. Grounding verbs of motion in natural language commands to robots. In International Symposium on Experimental Robotics (ISER).

T. Kwiatkowski, L. Zettlemoyer, S. Goldwater, and M. Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higher-order unification. In Empirical Methods in Natural Language Processing (EMNLP). pages 1223-1233.

P. Liang, M. I. Jordan, and D. Klein. 2011. Learning dependency-based compositional semantics. In Association for Computational Linguistics (ACL). pages 590-599.

E. Markman and G. F. Wachtel. 1988. Children's use of mutual exclusivity to constrain the meanings of words. Cognitive Psychology 20:125-157.

C. Matuszek, N. FitzGerald, L. Zettlemoyer, L. Bo, and D. Fox. 2012. A joint model of language and perception for grounded attribute learning. In International Conference on Machine Learning (ICML). pages 1671-1678.
W. Monroe and C. Potts. 2015. Learning in the Rational Speech Acts model. In Proceedings of 20th Amsterdam Colloquium.
P. Pasupat and P. Liang. 2015. Compositional semantic parsing on semi-structured tables. In Association for Computational Linguistics (ACL).

H. Reckman, J. Orkin, and D. Roy. 2010. Learning meanings of words and constructions, grounded in a virtual game. In Conference on Natural Language Processing (KONVENS).

N. J. Smith, N. D. Goodman, and M. C. Frank. 2013. Learning and using language via recursive pragmatic reasoning about other agents. In Advances in Neural Information Processing Systems (NIPS).

S. Tellex, R. Knepper, A. Li, D. Rus, and N. Roy. 2014. Asking for help using inverse semantics. In Robotics: Science and Systems (RSS).

S. Tellex, T. Kollar, S. Dickerson, M. R. Walter, A. G. Banerjee, S. J. Teller, and N. Roy. 2011. Understanding natural language commands for robotic navigation and mobile manipulation. In Association for the Advancement of Artificial Intelligence (AAAI).
A. Vogel, M. Bodoia, C. Potts, and D. Jurafsky. 2013. Emergence of Gricean maxims from multi-agent decision theory. In North American Association for Computational Linguistics (NAACL). pages 1072-1081.
A. Vogel and D. Jurafsky. 2010. Learning to follow navigational directions. In Association for Computational Linguistics (ACL). pages 806-814.

T. Winograd. 1972. Understanding Natural Language. Academic Press.

L. Wittgenstein. 1953. Philosophical Investigations. Blackwell, Oxford.

Y. W. Wong and R. J. Mooney. 2007. Learning synchronous grammars for semantic parsing with lambda calculus. In Association for Computational Linguistics (ACL). pages 960-967.

L. S. Zettlemoyer and M. Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Uncertainty in Artificial Intelligence (UAI). pages 658-666.
# Incorporating Discrete Translation Lexicons into Neural Machine Translation
Philip Arthur, Graham Neubig, Satoshi Nakamura. Graduate School of Information Science, Nara Institute of Science and Technology; Language Technologies Institute, Carnegie Mellon University. [email protected], [email protected], [email protected]
# Abstract
Neural machine translation (NMT) often makes mistakes in translating low-frequency content words that are essential to understanding the meaning of the sentence. We propose a method to alleviate this problem by augmenting NMT systems with discrete translation lexicons that efficiently encode translations of these low-frequency words. We describe a method to calculate the lexicon probability of the next word in the translation candidate by using the attention vector of the NMT model to select which source word lexical probabilities the model should focus on. We test two methods to combine this probability with the standard NMT probability: (1) using it as a bias, and (2) linear interpolation. Experiments on two corpora show an improvement of 2.0-2.3 BLEU and 0.13-0.44 NIST score, and faster convergence time.1
# 1 Introduction
Neural machine translation (NMT, §2; Kalchbrenner and Blunsom (2013), Sutskever et al. (2014)) is a variant of statistical machine translation (SMT; Brown et al. (1993)) that uses neural networks. NMT has recently gained popularity due to its ability to model the translation process end-to-end using a single probabilistic model, and for its state-of-the-art performance on several language pairs (Luong et al., 2015a; Sennrich et al., 2016).
Input: チュニジア の 出身です。 (Chunisia no shusshindesu. "I'm from Tunisia.") Reference: I come from Tunisia. System: ノルウェー の 出身です。 (Noruue- no shusshindesu. "I'm from Norway.")
Figure 1: An example of a mistake made by NMT on low-frequency content words.
1606.02006 | 3 | Figure 1: An example of a mistake made by NMT on low-frequency content words.
continuous-valued numbers. This is in contrast to more traditional SMT methods such as phrase-based machine translation (PBMT; Koehn et al. (2003)), which represent translations as discrete pairs of word strings in the source and target languages. The use of continuous representations is a major advan- tage, allowing NMT to share statistical power be- tween similar words (e.g. âdogâ and âcatâ) or con- texts (e.g. âthis isâ and âthat isâ). However, this property also has a drawback in that NMT systems often mistranslate into words that seem natural in the context, but do not reï¬ect the content of the source sentence. For example, Figure 1 is a sentence from our data where the NMT system mistakenly trans- lated âTunisiaâ into the word for âNorway.â This variety of error is particularly serious because the content words that are often mistranslated by NMT are also the words that play a key role in determining the whole meaning of the sentence.
One feature of NMT systems is that they treat each word in the vocabulary as a vector of | 1606.02006#3 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | Neural machine translation (NMT) often makes mistakes in translating
low-frequency content words that are essential to understanding the meaning of
the sentence. We propose a method to alleviate this problem by augmenting NMT
systems with discrete translation lexicons that efficiently encode translations
of these low-frequency words. We describe a method to calculate the lexicon
probability of the next word in the translation candidate by using the
attention vector of the NMT model to select which source word lexical
probabilities the model should focus on. We test two methods to combine this
probability with the standard NMT probability: (1) using it as a bias, and (2)
linear interpolation. Experiments on two corpora show an improvement of 2.0-2.3
BLEU and 0.13-0.44 NIST score, and faster convergence time. | http://arxiv.org/pdf/1606.02006 | Philip Arthur, Graham Neubig, Satoshi Nakamura | cs.CL | Accepted at EMNLP 2016 | null | cs.CL | 20160607 | 20161005 | [
{
"id": "1606.02006"
}
] |
1606.02006 | 4 | One feature of NMT systems is that they treat each word in the vocabulary as a vector of
1Tools to replicate our experiments can be found at http://isw3.naist.jp/~philip-a/emnlp2016/index.html
In contrast, PBMT and other traditional SMT methods tend to rarely make this kind of mistake. This is because they base their translations on dis- crete phrase mappings, which ensure that source words will be translated into a target word that has
been observed as a translation at least once in the training data. In addition, because the discrete map- pings are memorized explicitly, they can be learned efï¬ciently from as little as a single instance (barring errors in word alignments). Thus we hypothesize that if we can incorporate a similar variety of infor- mation into NMT, this has the potential to alleviate problems with the previously mentioned fatal errors on low-frequency words. | 1606.02006#4 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | Neural machine translation (NMT) often makes mistakes in translating
low-frequency content words that are essential to understanding the meaning of
the sentence. We propose a method to alleviate this problem by augmenting NMT
systems with discrete translation lexicons that efficiently encode translations
of these low-frequency words. We describe a method to calculate the lexicon
probability of the next word in the translation candidate by using the
attention vector of the NMT model to select which source word lexical
probabilities the model should focus on. We test two methods to combine this
probability with the standard NMT probability: (1) using it as a bias, and (2)
linear interpolation. Experiments on two corpora show an improvement of 2.0-2.3
BLEU and 0.13-0.44 NIST score, and faster convergence time. | http://arxiv.org/pdf/1606.02006 | Philip Arthur, Graham Neubig, Satoshi Nakamura | cs.CL | Accepted at EMNLP 2016 | null | cs.CL | 20160607 | 20161005 | [
{
"id": "1606.02006"
}
] |
1606.02006 | 5 | In this paper, we propose a simple, yet effective method to incorporate discrete, probabilistic lexi- cons as an additional information source in NMT (§3). First we demonstrate how to transform lexi- cal translation probabilities (§3.1) into a predictive probability for the next word by utilizing attention vectors from attentional NMT models (Bahdanau et al., 2015). We then describe methods to incorporate this probability into NMT, either through linear in- terpolation with the NMT probabilities (§3.2.2) or as the bias to the NMT predictive distribution (§3.2.1). We construct these lexicon probabilities by using traditional word alignment methods on the training data (§4.1), other external parallel data resources such as a handmade dictionary (§4.2), or using a hy- brid between the two (§4.3).
We perform experiments (§5) on two English-Japanese translation corpora to evaluate the method's utility in improving translation accuracy and reducing the time required for training.
# 2 Neural Machine Translation
The goal of machine translation is to translate a sequence of source words F = f_1^{|F|} into a sequence of target words E = e_1^{|E|}. These words belong to the source vocabulary V_f and the target vocabulary V_e respectively. NMT performs this translation by calculating the conditional probability p_m(e_i | F, e_1^{i-1}) of the ith target word e_i based on the source F and the preceding target words e_1^{i-1}. This is done by encoding the context <F, e_1^{i-1}> into a fixed-width vector η_i, and calculating the probability as follows:

p_m(e_i | F, e_1^{i-1}) = softmax(W_s η_i + b_s),   (1)

where W_s and b_s are weight matrix and bias vector parameters, respectively.

The exact variety of the NMT model depends on how we calculate η_i used as input. While there
are many methods to perform this modeling, we opt to use attentional models (Bahdanau et al., 2015), which focus on particular words in the source sentence when calculating the probability of e_i. These models represent the current state of the art in NMT, and are also convenient for use in our proposed method. Specifically, we use the method of Luong et al. (2015a), which we describe briefly here and refer readers to the original paper for details.

First, an encoder converts the source sentence F into a matrix R where each column represents a single word in the input sentence as a continuous vector. This representation is generated using a bidirectional encoder:

→r_j = enc(embed(f_j), →r_{j-1})
←r_j = enc(embed(f_j), ←r_{j+1})
r_j = [←r_j; →r_j].
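The following is a minimal NumPy sketch of the bidirectional encoding defined by the equations above. Here embed, step_fwd, and step_bwd are hypothetical stand-ins for the embedding lookup and the forward and backward LSTM steps enc(·); this is an illustration, not the authors' implementation.

```python
import numpy as np

def encode(source_ids, embed, step_fwd, step_bwd, hidden_size):
    n = len(source_ids)
    fwd = [np.zeros(hidden_size) for _ in range(n)]
    bwd = [np.zeros(hidden_size) for _ in range(n)]
    state = np.zeros(hidden_size)
    for j in range(n):                      # left-to-right pass
        state = step_fwd(embed(source_ids[j]), state)
        fwd[j] = state
    state = np.zeros(hidden_size)
    for j in reversed(range(n)):            # right-to-left pass
        state = step_bwd(embed(source_ids[j]), state)
        bwd[j] = state
    # r_j = [<-r_j ; ->r_j]; R has one column per source word
    R = np.stack([np.concatenate([bwd[j], fwd[j]]) for j in range(n)], axis=1)
    return R
```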
Here the embed(·) function maps the words into a representation (Bengio et al., 2003), and enc(·) is a stacking long short-term memory (LSTM) neural network (Hochreiter and Schmidhuber, 1997; Gers et al., 2000; Sutskever et al., 2014). Finally we concatenate the two vectors →r_j and ←r_j into a bidirectional representation r_j. These vectors are further concatenated into the matrix R, where the jth column corresponds to r_j.
Next, we generate the output one word at a time while referencing this encoded input sentence and tracking progress with a decoder LSTM. The decoder's hidden state h_i is a fixed-length continuous vector representing the previous target words e_1^{i-1}, initialized as h_0 = 0. Based on this h_i, we calculate a similarity vector α_i, with each element equal to

α_{i,j} = sim(h_i, r_j).   (2)

sim(·) can be an arbitrary similarity function, which we set to the dot product, following Luong et al. (2015a). We then normalize this into an attention vector, which weights the amount of focus that we put on each word in the source sentence:

a_i = softmax(α_i).   (3)
This attention vector is then used to weight the encoded representation R to create a context vector c_i for the current time step:

c_i = R a_i.

Finally, we create η_i by concatenating the previous hidden state h_{i-1} with the context vector, and performing an affine transform:

η_i = W_η [h_{i-1}; c_i] + b_η.

Once we have this representation of the current state, we can calculate p_m(e_i | F, e_1^{i-1}) according to Equation (1). The next word e_i is chosen according to this probability, and we update the hidden state by inputting the chosen word into the decoder LSTM:

h_i = enc(embed(e_i), h_{i-1}).   (4)
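Putting these pieces together, one consistent way to arrange a single decoder step is sketched below. decoder_step and embed are hypothetical stand-ins for enc(·) and embed(·), the parameter arrays are assumed to have compatible shapes, and the exact sequencing in the authors' implementation may differ.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def decode_step(h_prev, prev_word, R, embed, decoder_step, W_eta, b_eta, W_s, b_s):
    h = decoder_step(embed(prev_word), h_prev)           # update decoder state, cf. (4)
    alpha = R.T @ h                                      # alpha_{i,j} = dot(h_i, r_j), cf. (2)
    a = softmax(alpha)                                   # attention vector a_i, cf. (3)
    c = R @ a                                            # context vector c_i = R a_i
    eta = W_eta @ np.concatenate([h_prev, c]) + b_eta    # eta_i = W_eta [h_{i-1}; c_i] + b_eta
    p_m = softmax(W_s @ eta + b_s)                       # next-word distribution, cf. (1)
    return p_m, h, a
```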
If we define all the parameters in this model as θ, we can then train the model by minimizing the negative log-likelihood of the training data:
θ̂ = argmin_θ Σ_{<F,E>} Σ_i −log p_m(e_i | F, e_1^{i-1}; θ).

# 3 Integrating Lexicons into NMT
In §2 we described how traditional NMT models calculate the probability of the next target word p_m(e_i | e_1^{i-1}, F). Our goal in this paper is to improve the accuracy of this probability estimate by incorporating information from discrete probabilistic lexicons. We assume that we have a lexicon that, given a source word f, assigns a probability p_l(e|f) to target word e. For a source word f, this probability will generally be non-zero for a small number of translation candidates, and zero for the majority of words in V_e. In this section, we first describe how we incorporate these probabilities into NMT, and explain how we actually obtain the p_l(e|f) probabilities in §4.

# 3.1 Converting Lexicon Probabilities into Conditioned Predictive Probabilities

First, we need to convert lexical probabilities p_l(e|f) for the individual words in the source sentence F to a form that can be used together with p_m(e_i | e_1^{i-1}, F). Given input sentence F, we can construct a matrix in which each column corresponds to a word in the input sentence, each row corresponds to a word in V_e, and each entry corresponds to the appropriate lexical probability:
L_F = [ p_l(e=1|f_1)      ...  p_l(e=1|f_{|F|})
        ...                    ...
        p_l(e=|V_e| | f_1) ... p_l(e=|V_e| | f_{|F|}) ].
This matrix can be precomputed during the encoding stage because it only requires information about the source sentence F.

Next we convert this matrix into a predictive probability over the next word: p_l(e_i | F, e_1^{i-1}). To do so we use the alignment probability a_i from Equation (3) to weight each column of the L_F matrix:

p_l(e_i | F, e_1^{i-1}) = L_F a_i.
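A small NumPy sketch of this weighting, with a toy lexicon matrix, may help make the shapes concrete; the numbers are illustrative only.

```python
import numpy as np

def lexicon_predictive_prob(L_F, a_i):
    # L_F[e, j] = p_l(e | f_j); each column sums to at most 1 and a_i sums to 1,
    # so the result is a valid (sub-)distribution over the target vocabulary.
    return L_F @ a_i

# Example: 4-word target vocabulary, 2-word source sentence.
L_F = np.array([[0.7, 0.0],
                [0.2, 0.1],
                [0.1, 0.0],
                [0.0, 0.9]])
a_i = np.array([0.3, 0.7])
print(lexicon_predictive_prob(L_F, a_i))
```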
This calculation is similar to the way attentional models calculate the context vector c_i, but over a vector representing the probabilities of the target vocabulary, instead of the distributed representations of the source words. The process of involving a_i is important because at every time step i, the lexical probability p_l(e_i | e_1^{i-1}, F) will be influenced by different source words.
# 3.2 Combining Predictive Probabilities
After calculating the lexicon predictive probability p_l(e_i | e_1^{i-1}, F), we next need to integrate this probability with the NMT model probability p_m(e_i | e_1^{i-1}, F). To do so, we examine two methods: (1) adding it as a bias, and (2) linear interpolation.
# 3.2.1 Model Bias
In our first bias method, we use p_l(·) to bias the probability distribution calculated by the vanilla NMT model. Specifically, we add a small constant ε to p_l(·), take the logarithm, and add this adjusted log probability to the input of the softmax as follows:

p_b(e_i | F, e_1^{i-1}) = softmax(W_s η_i + b_s + log(p_l(e_i | F, e_1^{i-1}) + ε)).
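As a sketch, the bias combination amounts to adding the smoothed log lexicon probabilities to the pre-softmax scores; the function names here are illustrative, not the authors' code.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def bias_combination(logits, p_lex, eps=0.001):
    # logits = W_s @ eta_i + b_s (length |Ve|); p_lex = L_F @ a_i (length |Ve|)
    return softmax(logits + np.log(p_lex + eps))
```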
We take the logarithm of p_l(·) so that the values will still be in the probability domain after the softmax is calculated, and add the hyper-parameter ε to prevent zero probabilities from becoming −∞ after taking the log. When ε is small, the model will be more heavily biased towards using the lexicon, and when ε is larger the lexicon probabilities will be given less weight. We use ε = 0.001 for this paper.
# 3.2.2 Linear Interpolation
We also attempt to incorporate the two probabilities through linear interpolation between the standard NMT model probability p_m(·) and the lexicon probability p_l(·). We will call this the linear method, and define it as follows:
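The defining equation of the linear method is not present in this excerpt. Assuming an interpolation coefficient λ in [0, 1] (the symbol is chosen here for illustration; the paper may parameterize it differently), the standard form of such a combination is:

```latex
p_o(e_i \mid F, e_1^{i-1}) = \lambda \, p_l(e_i \mid F, e_1^{i-1}) + (1 - \lambda) \, p_m(e_i \mid F, e_1^{i-1})
```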
This notation is partly inspired by Allamanis et al. (2016) and Gu et al. (2016), who use linear interpolation to merge a standard attentional model with a "copy" operator that copies a source word as-is into the target sentence. The main difference is that they use this to copy words into the output, while our method uses it to influence the probabilities of all target words.
# 4 Constructing Lexicon Probabilities
In the previous section, we defined some ways to use predictive probabilities p_l(e_i | F, e_1^{i-1}) based on word-to-word lexical probabilities p_l(e|f). Next, we define three ways to construct these lexical probabilities using automatically learned lexicons, handmade lexicons, or a combination of both.
# 4.1 Automatically Learned Lexicons
In traditional SMT systems, lexical translation probabilities are generally learned directly from parallel data in an unsupervised fashion using a model such as the IBM models (Brown et al., 1993; Och and Ney, 2003). These models can be used to estimate the alignments and lexical translation probabilities p_l(e|f) between the tokens of the two languages using the expectation maximization (EM) algorithm.
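As a rough illustration of where such probabilities come from, the sketch below estimates p_l(e|f) from relative frequencies over word-alignment links; this is a simplified stand-in, not the IBM-model EM procedure itself.

```python
from collections import Counter, defaultdict

def estimate_lexicon(aligned_pairs):
    # aligned_pairs: iterable of (source_word, target_word) alignment links
    counts = defaultdict(Counter)
    for f, e in aligned_pairs:
        counts[f][e] += 1
    return {f: {e: c / sum(ec.values()) for e, c in ec.items()}
            for f, ec in counts.items()}
```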
The IBM models vary in level of refinement, with Model 1 relying solely on these lexical probabilities, and later IBM models (Models 2, 3, 4, 5) introducing more sophisticated models of fertility and relative alignment. Even though IBM models also occasionally have problems when dealing with rare words (e.g., "garbage collecting" effects (Liang et al., 2006)), traditional SMT systems generally achieve better translation accuracies of low-frequency words than NMT systems (Sutskever et al., 2014), indicating that these problems are less prominent than they are in NMT.
Note that in many cases, NMT limits the target vocabulary (Jean et al., 2015) for training speed or memory constraints, resulting in rare words not being covered by the NMT vocabulary V_e. Accordingly, we allocate the remaining probability assigned by the lexicon to the unknown word symbol <unk>:

p_l,a(e = <unk> | f) = 1 − Σ_{i ∈ V_e} p_l,a(e = i | f).   (5)
# 4.2 Manual Lexicons
In addition, for many language pairs, broad-coverage handmade dictionaries exist, and it is desirable that we be able to use the information included in them as well. Unlike automatically learned lexicons, however, handmade dictionaries generally do not contain translation probabilities. To construct the probability p_l(e|f), we define the set of translations K_f existing in the dictionary for a particular source word f, and assume a uniform distribution over these words:

p_l,m(e|f) = 1/|K_f| if e ∈ K_f, and 0 otherwise.
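A direct translation of this definition into code might look as follows; the dictionary format is an assumption made for illustration.

```python
def manual_lexicon_prob(dictionary, f, e):
    # dictionary: maps a source word f to its set of translations K_f
    K_f = dictionary.get(f, set())
    return 1.0 / len(K_f) if e in K_f else 0.0
```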
Following Equation (5), unknown source words will assign their probability mass to the <unk> tag.
# 4.3 Hybrid Lexicons
Handmade lexicons have broad coverage of words but their probabilities might not be as accurate as the
Table 1: Corpus details. Train: BTEC 464K sentences (3.60M Ja / 4.97M En tokens); KFTT 377K sentences (7.77M Ja / 8.04M En tokens). Dev: BTEC 510 sentences, KFTT 1160. Test: BTEC 508 sentences, KFTT 1169. (Dev/test token counts were lost in extraction.)
learned ones, particularly if the automatic lexicon is constructed on in-domain data. Thus, we also test a hybrid method where we use the handmade lexicons to complement the automatically learned lexicon.2 3 Specifically, inspired by phrase-table fill-up used in PBMT systems (Bisazza et al., 2011), we use the probability of the automatically learned lexicons p_l,a by default, and fall back to the handmade lexicons p_l,m only for uncovered words:

p_l,h(e|f) = p_l,a(e|f) if f is covered, and p_l,m(e|f) otherwise.
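The fill-up rule can be sketched as below, assuming both lexicons are stored as nested dictionaries mapping f to {e: probability}.

```python
def hybrid_lexicon_prob(p_auto, p_manual, f, e):
    # Use the automatically learned lexicon when it covers f; otherwise fall
    # back to the handmade one.
    if f in p_auto:
        return p_auto[f].get(e, 0.0)
    return p_manual.get(f, {}).get(e, 0.0)
```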
# 5 Experiment & Result | 1606.02006#19 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | Neural machine translation (NMT) often makes mistakes in translating
In this section, we describe experiments we use to evaluate our proposed methods.
# 5.1 Settings
Dataset: We perform experiments on two widely-used tasks for the English-to-Japanese language pair: KFTT (Neubig, 2011) and BTEC (Kikui et al., 2003). KFTT is a collection of Wikipedia articles about the city of Kyoto and BTEC is a travel conversation corpus. BTEC is an easier translation task than KFTT, because KFTT covers a broader domain, has a larger vocabulary of rare words, and has relatively long sentences. The details of each corpus are depicted in Table 1.

We tokenize English according to the Penn Treebank standard (Marcus et al., 1993) and lowercase,

2Alternatively, we could imagine a method where we combined the training data and dictionary before training the word alignments to create the lexicon. We attempted this, and results were comparable to or worse than the fill-up method, so we use the fill-up method for the remainder of the paper.
and tokenize Japanese using KyTea (Neubig et al., 2011). We limit training sentence length up to 50 in both experiments and keep the test data at the original length. We replace words of frequency less than a threshold u in both languages with the <unk> symbol and exclude them from our vocabulary. We choose u = 1 for BTEC and u = 3 for KFTT, resulting in |V_f| = 17.8k, |V_e| = 21.8k for BTEC and |V_f| = 48.2k, |V_e| = 49.1k for KFTT. NMT Systems: We build the described models using the Chainer4 toolkit. The depth of the stacking LSTM is d = 4 and hidden node size h = 800. We concatenate the forward and backward encodings (resulting in a 1600-dimension vector) and then perform a linear transformation to 800 dimensions. We train the system using the Adam (Kingma and Ba, 2014) optimization method with the default settings: α = 1e−3, β1 = 0.9, β2 = 0.999, ε = 1e−8. Additionally, we add dropout (Srivastava et al., 2014) with drop rate r = 0.2 at the last layer.
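For reference, the reported training settings can be collected into a single configuration sketch; the key names are invented for illustration and this is not the authors' code.

```python
config = {
    "lstm_depth": 4,
    "hidden_size": 800,
    "encoder_output": 1600,      # forward + backward, then projected to 800
    "optimizer": "Adam",
    "adam": {"alpha": 1e-3, "beta1": 0.9, "beta2": 0.999, "eps": 1e-8},
    "dropout": 0.2,
    "max_train_sentence_length": 50,
    "unk_threshold": {"BTEC": 1, "KFTT": 3},
}
```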
At test time, we use beam search with beam size b = 5. We follow Luong et al. (2015b) in replacing every unknown token at position i with the target token that maximizes the probability p_{l,a}(e_i|f_j). We choose the source word f_j according to the highest alignment score in Equation (3). This unknown word replacement is applied to both the baseline and proposed systems. Finally, because NMT models tend to give higher probabilities to shorter sentences (Cho et al., 2014), we discount the probability of the <EOS> token by 10% to correct for this bias.

Traditional SMT Systems: We also prepare two traditional SMT systems for comparison: a PBMT system (Koehn et al., 2003) using Moses5 (Koehn et al., 2007), and a hierarchical phrase-based MT system (Chiang, 2007) using Travatar6 (Neubig, 2013). Systems are built using the default settings, with models trained on the training data and weights tuned on the development data.
4 http://chainer.org/index.html
5 http://www.statmt.org/moses/
6 http://www.phontron.com/travatar/
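The test-time unknown word replacement and <EOS> discount described above can be sketched as follows. This assumes the decoder's attention weights are available for each output position, that the alignment score of Equation (3) corresponds to the attention weight, and that copying the source word when it is absent from the lexicon is an acceptable fallback; that fallback is an assumption of this sketch, not something specified above.

```python
import numpy as np

def replace_unknowns(output_tokens, source_tokens, attention, lexicon):
    """Replace each <unk> output token using the most-attended source word.

    attention: array of shape (len(output_tokens), len(source_tokens)),
               one attention distribution per output position.
    lexicon:   dict {source_word: {target_word: prob}}.
    """
    fixed = []
    for i, e in enumerate(output_tokens):
        if e != "<unk>":
            fixed.append(e)
            continue
        j = int(np.argmax(attention[i]))      # source position with the highest attention
        f = source_tokens[j]
        if lexicon.get(f):
            fixed.append(max(lexicon[f], key=lexicon[f].get))   # argmax_e p_{l,a}(e|f)
        else:
            fixed.append(f)                   # fallback: copy the source word (assumption)
    return fixed

def discount_eos(eos_log_prob, discount=0.10):
    """Multiply the <EOS> probability by (1 - discount) in log space."""
    return eos_log_prob + np.log(1.0 - discount)

# toy usage
att = np.array([[0.1, 0.9], [0.8, 0.2]])
print(replace_unknowns(["<unk>", "desu"], ["kyoto", "furin"], att,
                       {"furin": {"affair": 0.7, "scandal": 0.3}}))
```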
             BTEC BLEU
pbmt            48.18
hiero           52.27
attn            48.31
auto-bias       49.74
hyb-bias        50.34

Table 2: Accuracies for the baseline attentional NMT (attn) and the proposed bias-based method using the automatic (auto-bias) or hybrid (hyb-bias) dictionaries, measured in BLEU, NIST, and RECALL on BTEC and KFTT. Bold indicates a gain over the attn baseline, † indicates a significant increase at p < 0.05, and ‡ indicates p < 0.10. Traditional phrase-based (pbmt) and hierarchical phrase-based (hiero) systems are shown for reference.
Lexicons: We use a total of 3 lexicons for the proposed method, and apply the bias and linear methods to all of them, totaling 6 experiments. The first lexicon (auto) is built on the training data using the automatically learned lexicon method of §4.1, separately for both the BTEC and KFTT experiments. Automatic alignment is performed using GIZA++ (Och and Ney, 2003). The second lexicon (man) is built using the popular English-Japanese dictionary Eijiro7 with the manual lexicon method of §4.2. Eijiro contains 104K distinct word-to-word translation entries. The third lexicon (hyb) is built by combining the first and second lexicons with the hybrid method of §4.3.

Evaluation: We use standard single-reference BLEU-4 (Papineni et al., 2002) to evaluate translation performance. Additionally, we also use NIST (Doddington, 2002), a measure that puts a particular focus on low-frequency word strings and is thus sensitive to the low-frequency words we focus on in this paper. We measure statistically significant differences between systems using paired bootstrap resampling (Koehn, 2004) with 10,000 iterations, and measure statistical significance at the p < 0.05 and p < 0.10 levels.
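A minimal sketch of paired bootstrap resampling as used here is given below. It is generic over the metric: it takes per-sentence sufficient statistics and a function that turns a list of such statistics into a corpus-level score; the names and the toy usage are illustrative.

```python
import random

def paired_bootstrap(stats_a, stats_b, corpus_metric, n_iter=10000, seed=0):
    """Paired bootstrap resampling (Koehn, 2004), sketched generically.

    stats_a / stats_b: per-sentence sufficient statistics for the two systems,
    aligned by sentence (e.g. n-gram match counts for BLEU).  corpus_metric
    turns a list of such statistics into a corpus-level score.  Returns the
    fraction of resampled test sets on which system A outscores system B.
    """
    assert len(stats_a) == len(stats_b)
    rng = random.Random(seed)
    n = len(stats_a)
    wins_a = 0
    for _ in range(n_iter):
        idx = [rng.randrange(n) for _ in range(n)]   # resample sentences with replacement
        if corpus_metric([stats_a[i] for i in idx]) > corpus_metric([stats_b[i] for i in idx]):
            wins_a += 1
    return wins_a / n_iter

# toy usage with a trivial corpus metric (the mean of per-sentence scores)
mean = lambda xs: sum(xs) / len(xs)
print(paired_bootstrap([0.4, 0.5, 0.6, 0.7], [0.3, 0.5, 0.5, 0.8], mean, n_iter=1000))
```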
Additionally, we also calculate the recall of rare words from the references. We define "rare words" as words that appear less than eight times in the target training corpus or references, and measure the percentage of the time they are recovered by each translation system.
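This rare-word recall can be computed as in the following sketch. Exactly how the training-corpus and reference counts are combined, and the clipping of counts to the hypothesis side, are assumptions of this sketch rather than details specified above.

```python
from collections import Counter

def rare_word_set(target_training_corpus, references, max_count=7):
    """Words appearing fewer than eight times in the combined target-side
    training data and references (combining the two counts is an assumption
    of this sketch)."""
    counts = Counter(w for sent in target_training_corpus for w in sent)
    counts.update(w for sent in references for w in sent)
    return {w for w, c in counts.items() if c <= max_count}

def rare_word_recall(hypotheses, references, rare_words):
    """Fraction of rare reference tokens that also appear in the corresponding
    system output, with counts clipped to the hypothesis side."""
    recovered, total = 0, 0
    for hyp, ref in zip(hypotheses, references):
        hyp_counts = Counter(hyp)
        for w, c in Counter(ref).items():
            if w in rare_words:
                total += c
                recovered += min(c, hyp_counts[w])
    return recovered / total if total else 0.0
```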
# 5.2 Effect of Integrating Lexicons
In this section, we first perform a detailed examination of the utility of the proposed bias method when used
with the auto or hyb lexicons, which empirically gave the best results, and perform a comparison among the other lexicon integration methods in the following section. Table 2 shows the results of these methods, along with the corresponding baselines.

7 http://eijiro.jp

Figure 2: Training curves (BLEU against training time in minutes) on the KFTT data for the baseline attn and the proposed auto-bias and hyb-bias methods.
First, compared to the baseline attn, our bias method achieved consistently higher scores on both test sets. In particular, the gains on the more difficult KFTT set are large, up to 2.3 BLEU, 0.44 NIST, and 30% Recall, demonstrating the utility of the proposed method in the face of more diverse content and fewer high-frequency words.
Compared to the traditional pbmt and hiero systems, particularly on KFTT we can see that the proposed method allows the NMT system to exceed the traditional SMT methods in BLEU. This is despite the fact that we are not performing ensembling, which has proven to be essential to exceed traditional systems in several previous works (Sutskever et al., 2014; Luong et al., 2015a; Sennrich et al., 2016).
Input      Do you have an opinion regarding extramarital affairs?
Reference  不倫 に 関して 意見 が あります か。 (Furin ni kanshite iken ga arimasu ka.)
attn       サッカー に 関する 意見 は あります か。 (Sakkā ni kansuru iken wa arimasu ka.)
           (Do you have an opinion about soccer?)
auto-bias  不倫 に 関して 意見 が あります か。 (Furin ni kanshite iken ga arimasu ka.)
           (Do you have an opinion about affairs?)

Input      Could you put these fragile things in a safe place?
Reference  この 壊れ物 を 安全な 場所 に 置いて もらえません か。 (Kono kowaremono o anzen'na basho ni oite moraemasen ka.)
attn       貴重品 を 安全 に 出したい のですが。 (Kichō-hin o anzen ni dashitai nodesuga.)
           (I'd like to safely put out these valuables.)
auto-bias  この 壊れ物 を
Table 3: Examples where the proposed auto-bias improved over the baseline system attn. Underlines indicate words that were mistaken in the baseline output but correct in the proposed model's output.
Interestingly, despite gains in BLEU, the NMT methods still fall behind in NIST score on the KFTT data set, demonstrating that traditional SMT systems still tend to have a small advantage in translating lower-frequency words, despite the gains made by the proposed method.
In Table 3, we show some illustrative examples where the proposed method (auto-bias) was able to obtain a correct translation while the normal attentional model was not. The first example is a mistake in translating "extramarital affairs" into the Japanese equivalent of "soccer," entirely changing the main topic of the sentence. This is typical of the errors that we have observed NMT systems make (the mistake from Figure 1 is also from attn, and was fixed by our proposed method). The second example demonstrates how these mistakes can then affect the process of choosing the remaining words, propagating the error through the whole sentence.
Next, we examine the effect of the proposed method on the training time for each neural MT method, drawing training curves for the KFTT data in Figure 2. Here we can see that the proposed bias training methods achieve reasonable BLEU scores in the upper 10s even after the first iteration. In contrast, the baseline attn method has a BLEU score of around 5 after the first iteration, and takes significantly longer to approach values close to its maximal
accuracy. This shows that by incorporating lexical probabilities, we can effectively bootstrap the learning of the NMT system, allowing it to approach an appropriate answer in a more timely fashion.8

Figure 3: Attention matrices for the baseline attn and proposed bias methods. Lighter colors indicate stronger attention between the words, and boxes surrounding words indicate the correct alignments.
8 Note that these gains come despite the fact that one iteration of the proposed method takes longer (167 minutes for attn vs. 275 minutes for auto-bias) due to the necessity of calculating and using the lexical probability matrix for each sentence. It also takes an additional 297 minutes to train the lexicon with GIZA++, but this can be greatly reduced with more efficient training methods (Dyer et al., 2013).
(a) BTEC

Lexicon      BLEU (bias)   BLEU (linear)   NIST (bias)   NIST (linear)
-                48.31            -            5.98            -
auto             49.74          47.97          6.11          5.90
man              49.08          51.04          6.03          6.14
hyb              50.34          49.27          6.10          5.94

(b) KFTT

Lexicon      BLEU (bias)   BLEU (linear)   NIST (bias)   NIST (linear)
-                20.86            -            5.15            -
auto             23.20          18.19          5.59          4.61
man              20.78          20.88          5.12          5.11
hyb              22.80          20.33          5.55          5.03
Table 4: A comparison of the bias and linear lexicon integration methods on the automatic, manual, and hybrid lexicons. The first line, without a lexicon, is the traditional attentional NMT.
It is also interesting to examine the alignment vectors produced by the baseline and proposed methods, a visualization of which we show in Figure 3. For this sentence, the outputs of both methods were identical and correct, but we can see that the proposed method (right) placed sharper attention on the actual source word corresponding to content words in the target sentence. This trend of peakier attention distributions in the proposed method held throughout the corpus, with the per-word entropy of the attention vectors being 3.23 bits for auto-bias, compared with 3.81 bits for attn, indicating that the auto-bias method places more certainty in its attention decisions.
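The per-word attention entropy reported here can be computed as in the following sketch (entropy in bits, averaged over all target positions; the exact averaging scheme is an assumption of this sketch).

```python
import numpy as np

def mean_attention_entropy(attention_matrices, eps=1e-12):
    """Average entropy (in bits) of the attention distribution over source
    positions, computed for every target word and averaged over the corpus."""
    entropies = []
    for att in attention_matrices:
        att = np.asarray(att, dtype=np.float64)
        att = att / att.sum(axis=1, keepdims=True)        # renormalize defensively
        entropies.extend((-(att * np.log2(att + eps)).sum(axis=1)).tolist())
    return float(np.mean(entropies))

# a sharper attention distribution has lower entropy
sharp = [[0.90, 0.05, 0.05]]
flat = [[1 / 3, 1 / 3, 1 / 3]]
print(mean_attention_entropy([sharp]), mean_attention_entropy([flat]))
```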
# 5.3 Comparison of Integration Methods
Finally, we perform a full comparison between the various methods for integrating lexicons into the translation process, with results shown in Table 4. In general, the bias method improves accuracy for the auto and hyb lexicons, but is less effective for the man lexicon. This is likely due to the fact that the manual lexicon, despite having broad coverage, did not sufficiently cover target-domain words (coverage of unique words in the source vocabulary was 35.3% and 9.7% for BTEC and KFTT, respectively). The trend is the opposite for the linear method, with it improving the man systems,
but causing decreases when using the auto and hyb lexicons. This indicates that the linear method is more suited for cases where the lexicon does not closely match the target domain and plays a more complementary role. Compared to the log-linear modeling of bias, which strictly enforces the constraints imposed by the lexicon distribution (Klakow, 1998), linear interpolation is intuitively more appropriate for integrating this type of complementary information.
On the other hand, the performance of linear interpolation was generally lower than that of the bias method. One potential reason for this is the fact that we use a constant interpolation coefficient that is fixed for every context. Gu et al. (2016) have recently developed methods that use context information from the decoder to calculate different interpolation coefficients for every decoding step, and it is possible that introducing these methods would improve our results.
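For reference, the two combination strategies compared in this section can be sketched as follows, where p_lex is the attention-weighted lexicon probability of each target word and p_nmt is the decoder's softmax output; the small constant eps and the coefficient lambda are hyperparameters whose values below are placeholders, not the ones used in our experiments.

```python
import numpy as np

def softmax(x):
    x = x - np.max(x)
    e = np.exp(x)
    return e / e.sum()

def combine_bias(decoder_logits, p_lex, eps=1e-3):
    """Bias method: add log lexicon probabilities to the decoder's pre-softmax
    scores (a log-linear combination)."""
    return softmax(decoder_logits + np.log(p_lex + eps))

def combine_linear(p_nmt, p_lex, lam=0.5):
    """Linear method: interpolate the two distributions with a fixed lambda."""
    return lam * p_lex + (1.0 - lam) * p_nmt

# toy usage over a three-word vocabulary
logits = np.array([1.0, 0.2, -0.5])
p_lex = np.array([0.0, 0.9, 0.1])        # the lexicon puts most of its mass on word 1
print(combine_bias(logits, p_lex))
print(combine_linear(softmax(logits), p_lex))
```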
# 6 Additional Experiments
To test whether the proposed method is useful on larger data sets, we also performed follow-up experiments on the larger Japanese-English ASPEC dataset (Nakazawa et al., 2016), which consists of 2 million training examples, 63 million tokens, and a vocabulary size of 81,000. We obtained an improvement in BLEU score from 20.82 using the attn baseline to 22.66 using the proposed auto-bias method. This experiment shows that our method scales to larger datasets.
# 7 Related Work
From the beginning of work on NMT, unknown words that do not exist in the system vocabulary have been focused on as a weakness of these systems. Early methods to handle these unknown words replaced them with appropriate words in the target vocabulary (Jean et al., 2015; Luong et al., 2015b) according to a lexicon similar to the one used in this work. In contrast to our work, these methods only handle unknown words and do not incorporate information from the lexicon into the learning procedure.
There have also been other approaches that incorporate models that learn when to copy words as-is into the target language (Allamanis et al., 2016; Gu
et al., 2016; Gülçehre et al., 2016). These models are similar to the linear approach of §3.2.2, but are only applicable to words that can be copied as-is into the target language. In fact, these models can be thought of as a subclass of the proposed approach that uses a lexicon assigning all of its probability to target words that are the same as the source. On the other hand, while we simply use a static interpolation coefficient λ, these works generally have a more sophisticated method for choosing the interpolation between the standard and "copy" models. Incorporating these into our linear method is a promising avenue for future work.
In addition, Mi et al. (2016) have also recently proposed a similar approach that limits the vocabulary predicted for each batch or sentence. This vocabulary is constructed by considering the original HMM alignments gathered from the training corpus. Basically, this method is a specific version of our bias method that gives some of the vocabulary a bias of negative infinity and all other vocabulary a uniform distribution. Our method improves over this by considering actual translation probabilities, and also by considering the attention vector when deciding how to combine these probabilities.
Finally, there have been a number of recent works that improve the accuracy of low-frequency words using character-based translation models (Ling et al., 2015; Costa-Jussà and Fonollosa, 2016; Chung et al., 2016). However, Luong and Manning (2016) have found that even when using character-based models, incorporating information about words allows for gains in translation accuracy, and it is likely that our lexicon-based method could result in improvements in these hybrid systems as well.
# 8 Conclusion & Future Work
In this paper, we have proposed a method to incorporate discrete probabilistic lexicons into NMT systems to solve the difficulties that NMT systems have demonstrated with low-frequency words. As a result, we achieved substantial increases in BLEU (2.0-2.3) and NIST (0.13-0.44) scores, and observed qualitative improvements in the translations of content words.
For future work, we are interested in conducting experiments on larger-scale translation tasks. We
also plan to do subjective evaluation, as we expect that improvements in content word translation are critical to subjective impressions of translation results. Finally, we are also interested in improvements to the linear method where λ is calculated based on the context, instead of using a fixed value.
# Acknowledgment
We thank Makoto Morishita and Yusuke Oda for their help in this project. We also thank the faculty members of the AHC lab for their support and suggestions.
This work was supported by grants from the Ministry of Education, Culture, Sports, Science, and Technology of Japan and in part by JSPS KAKENHI Grant Number 16H05873.
# References
Miltiadis Allamanis, Hao Peng, and Charles Sutton. 2016. A convolutional attention network for extreme summarization of source code. In Proceedings of the 33rd International Conference on Machine Learning (ICML).
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 4th International Conference on Learning Representations (ICLR).
Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, pages 1137–1155.
Arianna Bisazza, Nick Ruiz, and Marcello Federico. 2011. Fill-up versus interpolation methods for phrase-based SMT adaptation. In Proceedings of the 2011 International Workshop on Spoken Language Translation (IWSLT), pages 136–143.
Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, pages 263–311.

David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, pages 201–228.

Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder–decoder approaches. In Proceedings of the Workshop on Syntax and Structure in Statistical Translation (SSST), pages 103–111.
Junyoung Chung, Kyunghyun Cho, and Yoshua Bengio. 2016. A character-level decoder without explicit segmentation for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1693–1703.

Marta R. Costa-Jussà and José A. R. Fonollosa. 2016. Character-based neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 357–361.

George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In Proceedings of the Second International Conference on Human Language Technology Research, pages 138–145.
Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644–648.