id | title | content | prechunk_id | postchunk_id | arxiv_id | references
---|---|---|---|---|---|---|
1606.02960#31 | Sequence-to-Sequence Learning as Beam-Search Optimization | All experiments above have Ktr = 6. Table 2: Beam-size experiments on the word ordering development set (BLEU); all numbers reflect training with constraints (ConBSO). For Kte = 1 / 5 / 10: seq2seq 26.11 / 30.20 / 31.04; Ktr = 2: 30.59 / 31.23 / 30.26; Ktr = 6: 28.20 / 34.22 / 34.67; Ktr = 11: 26.88 / 34.42 / 34.88. Dependency Parsing We next apply our model to dependency parsing, which also has hard constraints and plausibly benefi | 1606.02960#30 | 1606.02960#32 | 1606.02960 | [
"1604.08633"
] |
1606.02960#32 | Sequence-to-Sequence Learning as Beam-Search Optimization | ts from search. We treat dependency parsing with arc-standard transitions as a seq2seq task by attempting to map from a source sentence to a target sequence of source sentence words interleaved with the arc-standard reduce-actions in its parse. For example, we attempt to map the source sentence But it was the Quotron problems that ... to the target sequence But it was @L_SBJ @L_DEP the Quotron problems @L_NMOD @L_NMOD that ... We use the standard Penn Treebank dataset splits with Stanford dependency labels, and the standard UAS/LAS evaluation metric (excluding punctuation) following Chen and Manning (2014). All models thus see only the words in the source and, when decoding, the actions it has emitted so far; no other features are used. We use 2-layer encoder and decoder LSTMs with 300 hidden units per layer and dropout with a rate of 0.3 between LSTM layers. Table 3: Dependency parsing, UAS/LAS of seq2seq, BSO, ConBSO and baselines on the PTB test set, for Kte = 1 / 5 / 10: seq2seq 87.33/82.26, 88.53/84.16, 88.66/84.33; BSO 86.91/82.11, 91.00/87.18, 91.17/87.41; ConBSO 85.11/79.32, 91.25/86.92, 91.57/87.26; Andor 93.17/91.18, -, -. Andor is the current state-of-the-art model for this data set (Andor et al. 2016), and we note that with a beam of size 32 they obtain 94.41/92.55. All experiments above have Ktr = 6. | 1606.02960#31 | 1606.02960#33 | 1606.02960 | [
"1604.08633"
] |
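The interleaved word/action target sequence described above can be produced by a standard arc-standard oracle. The sketch below is illustrative, not the authors' code: it assumes a projective tree given as 1-based head indices (0 = root) and uses tokens like "@L_SBJ" for reduce actions (the exact action tokenization is an assumption).

```python
def linearize(words, heads, labels):
    """Interleave source words with arc-standard reduce actions (a sketch).

    heads[i] is the 1-based index of word i+1's head (0 = root);
    labels[i] is the dependency label of word i+1.  Assumes projectivity."""
    n = len(words)
    n_deps = [0] * (n + 1)            # dependents each head still has to collect
    for h in heads:
        n_deps[h] += 1
    stack, buf, out = [], list(range(1, n + 1)), []
    while buf or len(stack) > 1:
        if len(stack) >= 2:
            top, below = stack[-1], stack[-2]
            if heads[below - 1] == top and n_deps[below] == 0:
                out.append("@L_" + labels[below - 1])   # attach 'below' to 'top'
                stack.pop(-2); n_deps[top] -= 1
                continue
            if heads[top - 1] == below and n_deps[top] == 0:
                out.append("@R_" + labels[top - 1])     # attach 'top' to 'below'
                stack.pop(); n_deps[below] -= 1
                continue
        if not buf:
            raise ValueError("tree is not projective (or heads are malformed)")
        w = buf.pop(0)                 # SHIFT: emit the next source word
        stack.append(w)
        out.append(words[w - 1])
    return out

# e.g. linearize(["the", "Quotron", "problems"], [3, 3, 0], ["NMOD", "NMOD", "ROOT"])
# -> ['the', 'Quotron', 'problems', '@L_NMOD', '@L_NMOD']
```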
1606.02960#33 | Sequence-to-Sequence Learning as Beam-Search Optimization | We replace singleton words in the training set with an UNK token, normalize digits to a single symbol, and initialize word embeddings for both source and target words from the publicly available word2vec (Mikolov et al., 2013) embeddings. We use simple 0/1 costs in defining the Δ function. As in the word-ordering case, we also experiment with modifying the succ function in order to train under hard constraints, namely, that the emitted target sequence be a valid parse. In particular, we constrain the output at each time-step to obey the stack constraint, and we ensure words in the source are emitted in order. We show results on the test-set in Table 3. BSO and ConBSO both show significant improvements over seq2seq, with ConBSO improving most on UAS, and BSO improving most on LAS. We achieve a reasonable final score of 91.57 UAS, which lags behind the state-of-the-art, but is promising for a general-purpose, word-only model. Translation We finally evaluate our model on a small machine translation dataset, which allows us to experiment with a cost function that is not 0/1, and to consider other baselines that attempt to mitigate exposure bias in the seq2seq setting. We use the dataset from the work of Ranzato et al. (2016), which uses data from the German-to-English portion of the IWSLT 2014 machine translation evaluation campaign (Cettolo et al., 2014). The data comes from translated TED talks, and the dataset contains roughly 153K training sentences, 7K development sentences, and 7K test sentences. We use the same preprocessing and dataset splits as Ranzato et | 1606.02960#32 | 1606.02960#34 | 1606.02960 | [
"1604.08633"
] |
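A minimal sketch of the kind of hard-constraint mask just described (valid-parse decoding): source words must be emitted in their original order, and a reduce action is only legal once the stack holds at least two items. This is a simplified illustration, not the authors' implementation of the succ function.

```python
def allowed_next_tokens(stack_size, next_src_index, src_words, reduce_actions):
    """Candidate next tokens under the ConBSO hard constraints (a sketch)."""
    allowed = []
    if next_src_index < len(src_words):      # may still SHIFT the next source word, in order
        allowed.append(src_words[next_src_index])
    if stack_size >= 2:                      # may reduce the top two stack items
        allowed.extend(reduce_actions)
    return allowed
```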
1606.02960#34 | Sequence-to-Sequence Learning as Beam-Search Optimization | Machine Translation (BLEU), for Kte = 1 / 5 / 10: seq2seq 22.53 / 24.03 / 23.87; BSO, SB-Δ 23.83 / 26.36 / 25.48; XENT 17.74 / 20.10 / 20.28; DAD 20.12 / 22.25 / 22.40; MIXER 20.73 / 21.81 / 21.83. Table 4: | 1606.02960#33 | 1606.02960#35 | 1606.02960 | [
"1604.08633"
] |
1606.02960#35 | Sequence-to-Sequence Learning as Beam-Search Optimization | Machine translation experiments on test set; results below the middle line are from the MIXER model of Ranzato et al. (2016). SB-Δ indicates sentence BLEU costs are used in defining Δ. XENT is similar to our seq2seq model but with a convolutional encoder and simpler attention. DAD trains seq2seq with scheduled sampling (Bengio et al., 2015). BSO, SB-Δ experiments above have Ktr = 6. | 1606.02960#34 | 1606.02960#36 | 1606.02960 | [
"1604.08633"
] |
1606.02960#36 | Sequence-to-Sequence Learning as Beam-Search Optimization | al. (2016), and like them we also use a single-layer LSTM decoder with 256 units. We also use dropout with a rate of 0.2 between each LSTM layer. We emphasize, however, that while our decoder LSTM is of the same size as that of Ranzato et al. (2016), our results are not directly comparable, because we use an LSTM encoder (rather than a convolutional encoder as they do), a slightly different attention mechanism, and input feeding (Luong et al., 2015). Here we set Δ(ŷ^(K)_{1:t}) to 1 − SB(ŷ^(K)_{r+1:t}, y_{r+1:t}), where r is the last margin violation and SB denotes smoothed, sentence-level BLEU (Chen and Cherry, 2014). This setting of Δ should act to penalize erroneous predictions with a relatively low sentence-level BLEU score more than those with a relatively high sentence-level BLEU score. In Table 4 we show our final results and those from Ranzato et al. (2016).[8] While we start with an improved baseline, we see similarly large increases in accuracy as those obtained by DAD and MIXER, in particular when Kte > 1. We further examine the effect of these sequence-level costs in Table 5, which compares using sentence-level BLEU costs in defi | 1606.02960#35 | 1606.02960#37 | 1606.02960 | [
"1604.08633"
] |
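The sentence-level cost above is 1 minus a smoothed sentence BLEU over the suffix since the last margin violation. The sketch below uses one common add-1 smoothing on higher-order n-gram precisions, in the spirit of Chen and Cherry (2014) but not necessarily their exact scheme; it is illustrative only.

```python
import math
from collections import Counter

def ngrams(seq, n):
    return Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))

def smoothed_sentence_bleu(hyp, ref, max_n=4):
    """Smoothed sentence-level BLEU over token lists (a sketch)."""
    log_prec = 0.0
    for n in range(1, max_n + 1):
        h, r = ngrams(hyp, n), ngrams(ref, n)
        match = sum(min(c, r[g]) for g, c in h.items())
        total = max(sum(h.values()), 1)
        if n == 1:
            p = max(match, 1e-9) / total
        else:
            p = (match + 1.0) / (total + 1.0)          # add-1 smoothing for n > 1
        log_prec += math.log(p) / max_n
    bp = min(1.0, math.exp(1.0 - len(ref) / max(len(hyp), 1)))  # brevity penalty
    return bp * math.exp(log_prec)

def delta_cost(hyp_suffix, ref_suffix):
    # sequence-level cost: low-BLEU suffixes are penalized more than high-BLEU ones
    return 1.0 - smoothed_sentence_bleu(hyp_suffix, ref_suffix)
```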
1606.02960#37 | Sequence-to-Sequence Learning as Beam-Search Optimization | ning Δ with using 0/1 costs. We see that the more sophisticated sequence-level costs have a moderate effect on BLEU score. Footnote 8: Some results from personal communication. Machine Translation (BLEU), for Kte = 1 / 5 / 10: 0/1-Δ 25.73 / 28.21 / 27.43; SB-Δ 25.99 / 28.45 / 27.58. Table 5: BLEU scores obtained on the machine translation development data when training with Δ(ŷ^(k)_{1:t}) = 1 (top) and Δ(ŷ^(k)_{1:t}) = 1 − SB(ŷ^(k)_{r+1:t}, y_{r+1:t}) (bottom), and Ktr = 6. Timing Given Algorithm 1, we would expect training time to increase linearly with the size of the beam. On the above MT task, our highly tuned seq2seq baseline processes an average of 13,038 tokens/second (including both source and target tokens) on a GTX 970 GPU. For beams of size Ktr = 2, 3, 4, 5, and 6, our implementation processes on average 1,985, 1,768, 1,709, 1,521, and 1,458 tokens/second, respectively. Thus, we appear to pay an initial constant factor of ≈3.3 due to the more complicated forward and backward passes (13,038 / (2 × 1,985) ≈ 3.3), and then training scales with the size of the beam. Because we batch beam predictions on a GPU, however, we find that in practice training time scales sub-linearly with the beam-size. | 1606.02960#36 | 1606.02960#38 | 1606.02960 | [
"1604.08633"
] |
1606.02960#38 | Sequence-to-Sequence Learning as Beam-Search Optimization | # 6 Conclusion We have introduced a variant of seq2seq and an as- sociated beam search training scheme, which ad- dresses exposure bias as well as label bias, and moreover allows for both training with sequence- level cost functions as well as with hard constraints. Future work will examine scaling this approach to much larger datasets. # Acknowledgments We thank Yoon Kim for helpful discussions and for providing the initial seq2seq code on which our im- plementations are based. We thank Allen Schmaltz for help with the word ordering experiments. We also gratefully acknowledge the support of a Google Research Award. | 1606.02960#37 | 1606.02960#39 | 1606.02960 | [
"1604.08633"
] |
1606.02960#39 | Sequence-to-Sequence Learning as Beam-Search Optimization | # References [Andor et al.2016] Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. ACL. [Bahdanau et al.2015] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. [Bahdanau et al.2016] Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2016. An actor-critic algorithm for sequence prediction. CoRR, abs/1607.07086. [Bengio et al.2015] Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems, pages 1171– | 1606.02960#38 | 1606.02960#40 | 1606.02960 | [
"1604.08633"
] |
1606.02960#40 | Sequence-to-Sequence Learning as Beam-Search Optimization | 1179. [Björkelund and Kuhn2014] Anders Björkelund and Jonas Kuhn. 2014. Learning structured perceptrons for coreference resolution with latent antecedents and non-local features. ACL, Baltimore, MD, USA, June. [Cettolo et al.2014] Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, and Marcello Federico. 2014. Report on the 11th IWSLT evaluation campaign. In Proceedings of IWSLT, 2014. [Chang et al.2015] Kai-Wei Chang, Hal Daumé III, John Langford, and Stephane Ross. 2015. Efficient programmable learning to search. In arXiv. | 1606.02960#39 | 1606.02960#41 | 1606.02960 | [
"1604.08633"
] |
1606.02960#41 | Sequence-to-Sequence Learning as Beam-Search Optimization | Efï¬ cient pro- grammable learning to search. In Arxiv. [Chen and Cherry2014] Boxing Chen and Colin Cherry. 2014. A systematic comparison of smoothing tech- niques for sentence-level bleu. ACL 2014, page 362. [Chen and Manning2014] Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural networks. In EMNLP, pages 740â 750. [Cho et al.2014] KyungHyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder- decoder approaches. Eighth Workshop on Syntax, Se- mantics and Structure in Statistical Translation. | 1606.02960#40 | 1606.02960#42 | 1606.02960 | [
"1604.08633"
] |
1606.02960#42 | Sequence-to-Sequence Learning as Beam-Search Optimization | [Collins and Roark2004] Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, page 111. Association for Computational Linguistics. [Daumé III and Marcu2005] Hal Daumé III and Daniel Marcu. 2005. Learning as search optimization: approximate large margin methods for structured prediction. In Proceedings of the Twenty-Second International Conference on Machine Learning (ICML 2005), pages 169– | 1606.02960#41 | 1606.02960#43 | 1606.02960 | [
"1604.08633"
] |
1606.02960#43 | Sequence-to-Sequence Learning as Beam-Search Optimization | 176. [Daum´e III et al.2009] Hal Daum´e III, John Langford, and Daniel Marcu. 2009. Search-based structured pre- diction. Machine Learning, 75(3):297â 325. [Duchi et al.2011] John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive Subgradient Methods for On- line Learning and Stochastic Optimization. The Jour- nal of Machine Learning Research, 12:2121â 2159. [Filippova et al.2015] Katja Filippova, Enrique Alfon- seca, Carlos A Colmenares, Lukasz Kaiser, and Oriol Vinyals. 2015. | 1606.02960#42 | 1606.02960#44 | 1606.02960 | [
"1604.08633"
] |
1606.02960#44 | Sequence-to-Sequence Learning as Beam-Search Optimization | Sentence compression by deletion with lstms. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 360â 368. [Hochreiter and Schmidhuber1997] Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9:1735â 1780. [Huang et al.2012] Liang Huang, Suphan Fayong, and Yang Guo. 2012. | 1606.02960#43 | 1606.02960#45 | 1606.02960 | [
"1604.08633"
] |
1606.02960#45 | Sequence-to-Sequence Learning as Beam-Search Optimization | Structured perceptron with inexact search. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 142–151. Association for Computational Linguistics. [Kingsbury2009] Brian Kingsbury. 2009. Lattice-based optimization of sequence classification criteria for neural-network acoustic modeling. In Acoustics, Speech and Signal Processing, 2009. ICASSP 2009. IEEE International Conference on, pages 3761–3764. IEEE. [Lafferty et al.2001] John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML 2001), pages 282– | 1606.02960#44 | 1606.02960#46 | 1606.02960 | [
"1604.08633"
] |
1606.02960#46 | Sequence-to-Sequence Learning as Beam-Search Optimization | 289. [Liu et al.2015] Yijia Liu, Yue Zhang, Wanxiang Che, and Bing Qin. 2015. Transition-based syntactic linearization. In Proceedings of NAACL. [Luong et al.2015] Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, pages 1412–1421. [Mikolov et al.2013] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. | 1606.02960#45 | 1606.02960#47 | 1606.02960 | [
"1604.08633"
] |
1606.02960#47 | Sequence-to-Sequence Learning as Beam-Search Optimization | Dis- tributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111â 3119. [Papineni et al.2002] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311â 318. Associ- ation for Computational Linguistics. [Pham et al.2014] Vu Pham, Th´eodore Bluche, Christo- pher Kermorvant, and J´erË ome Louradour. 2014. | 1606.02960#46 | 1606.02960#48 | 1606.02960 | [
"1604.08633"
] |
1606.02960#48 | Sequence-to-Sequence Learning as Beam-Search Optimization | Dropout improves recurrent neural networks for handwriting recognition. In Frontiers in Handwriting Recognition (ICFHR), 2014 14th International Conference on, pages 285–290. IEEE. [Ranzato et al.2016] Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. ICLR. [Ross et al.2011] Stéphane Ross, Geoffrey J. Gordon, and Drew Bagnell. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the Fourteenth International | 1606.02960#47 | 1606.02960#49 | 1606.02960 | [
"1604.08633"
] |
1606.02960#49 | Sequence-to-Sequence Learning as Beam-Search Optimization | Conference on Artiï¬ cial Intelligence and Statistics, pages 627â 635. [Sak et al.2014] Hasim Sak, Oriol Vinyals, Georg Heigold, Andrew W. Senior, Erik McDermott, Rajat Monga, and Mark Z. Mao. 2014. Sequence discrimi- native distributed training of long short-term memory In INTERSPEECH 2014, recurrent neural networks. pages 1209â 1213. [Schmaltz et al.2016] Allen Schmaltz, Alexander M Rush, and Stuart M Shieber. 2016. Word ordering without syntax. arXiv preprint arXiv:1604.08633. [Serban et al.2016] Iulian Vlad Serban, Alessandro Sor- doni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the Thirtieth AAAI Conference on Artiï¬ cial Intelligence, pages 3776â 3784. [Shen et al.2016] Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. | 1606.02960#48 | 1606.02960#50 | 1606.02960 | [
"1604.08633"
] |
1606.02960#50 | Sequence-to-Sequence Learning as Beam-Search Optimization | Minimum risk training for neural machine translation. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics, ACL 2016. [Srivastava et al.2014] Nitish Srivastava, Geoffrey Hin- ton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to pre- vent neural networks from overï¬ tting. The Journal of Machine Learning Research, 15(1):1929â | 1606.02960#49 | 1606.02960#51 | 1606.02960 | [
"1604.08633"
] |
1606.02960#51 | Sequence-to-Sequence Learning as Beam-Search Optimization | 1958. [Sutskever et al.2011] Ilya Sutskever, James Martens, and Geoffrey E Hinton. 2011. Generating text with recur- rent neural networks. In Proceedings of the 28th In- ternational Conference on Machine Learning (ICML), pages 1017â 1024. [Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Informa- tion Processing Systems (NIPS), pages 3104â 3112. [Venugopalan et al.2015] Subhashini Venugopalan, Mar- cus Rohrbach, Jeffrey Donahue, Raymond J. | 1606.02960#50 | 1606.02960#52 | 1606.02960 | [
"1604.08633"
] |
1606.02960#52 | Sequence-to-Sequence Learning as Beam-Search Optimization | Mooney, Trevor Darrell, and Kate Saenko. 2015. Sequence to sequence - video to text. In ICCV, pages 4534–4542. [Vinyals et al.2015] Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In Advances in Neural Information Processing Systems, pages 2755–2763. [Voigtlaender et al.2015] Paul Voigtlaender, Patrick Doetsch, Simon Wiesler, Ralf Schlüter, and Hermann Ney. 2015. Sequence-discriminative training of recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pages 2100– | 1606.02960#51 | 1606.02960#53 | 1606.02960 | [
"1604.08633"
] |
1606.02960#53 | Sequence-to-Sequence Learning as Beam-Search Optimization | 2104. IEEE. [Watanabe and Sumita2015] Taro Watanabe and Eiichiro Sumita. 2015. Transition-based neural constituent parsing. Proceedings of ACL-IJCNLP. [Xu et al.2015] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In ICML, pages 2048– | 1606.02960#52 | 1606.02960#54 | 1606.02960 | [
"1604.08633"
] |
1606.02960#54 | Sequence-to-Sequence Learning as Beam-Search Optimization | 2057. [Yazdani and Henderson2015] Majid Yazdani and James Henderson. 2015. Incremental recurrent neural net- work dependency parser with search-based discrimi- In Proceedings of the 19th Confer- native training. ence on Computational Natural Language Learning, (CoNLL 2015), pages 142â 152. [Zaremba et al.2014] Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. CoRR, abs/1409.2329. [Zhang and Clark2011] Yue Zhang and Stephen Clark. 2011. Syntax-based grammaticality improvement us- ing ccg and guided search. In Proceedings of the Con- ference on Empirical Methods in Natural Language Processing, pages 1147â 1157. Association for Com- putational Linguistics. [Zhang and Clark2015] Yue Zhang and Stephen Clark. 2015. Discriminative syntax-based word order- ing for text generation. Computational Linguistics, 41(3):503â | 1606.02960#53 | 1606.02960#55 | 1606.02960 | [
"1604.08633"
] |
1606.02960#55 | Sequence-to-Sequence Learning as Beam-Search Optimization | 538. [Zhou et al.2015] Hao Zhou, Yue Zhang, and Jiajun Chen. 2015. A neural probabilistic structured-prediction model for transition-based dependency parsing. In Proceedings of the 53rd Annual Meeting of the As- sociation for Computational Linguistics, pages 1213â 1222. | 1606.02960#54 | 1606.02960 | [
"1604.08633"
] |
|
1606.02006#0 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | arXiv:1606.02006v2 [cs.CL] 5 Oct 2016 # Incorporating Discrete Translation Lexicons into Neural Machine Translation Philip Arthur, Graham Neubig, Satoshi Nakamura. Graduate School of Information Science, Nara Institute of Science and Technology; Language Technologies Institute, Carnegie Mellon University. [email protected] [email protected] [email protected] # Abstract Neural machine translation (NMT) often makes mistakes in translating low-frequency content words that are essential to understanding the meaning of the sentence. We propose a method to alleviate this problem by augmenting NMT systems with discrete translation lexicons that efficiently encode translations of these low-frequency words. We describe a method to calculate the lexicon probability of the next word in the translation candidate by using the attention vector of the NMT model to select which source word lexical probabilities the model should focus on. We test two methods to combine this probability with the standard NMT probability: (1) using it as a bias, and (2) linear interpolation. Experiments on two corpora show an improvement of 2.0-2.3 BLEU and 0.13-0.44 NIST score, and faster convergence time.[1] # 1 Introduction Neural machine translation (NMT, §2; Kalchbrenner and Blunsom (2013), Sutskever et al. (2014)) is a variant of statistical machine translation (SMT; Brown et al. (1993)), using neural networks. NMT has recently gained popularity due to its ability to model the translation process end-to-end using a single probabilistic model, and for its state-of-the-art performance on several language pairs (Luong et al., 2015a; Sennrich et al., 2016). | 1606.02006#1 | 1606.02006 | [
"1606.02006"
] |
|
1606.02006#1 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | Input: I come from Tunisia. Reference: チュニジア の 出身です。 Chunisia no shusshindesu. (I'm from Tunisia.) System: ノルウェー の 出身です。 Noruuē no shusshindesu. (I'm from Norway.) Figure 1: An example of a mistake made by NMT on low-frequency content words. One feature of NMT systems is that they treat each word in the vocabulary as a vector of continuous-valued numbers. This is in contrast to more traditional SMT methods such as phrase-based machine translation (PBMT; Koehn et al. (2003)), which represent translations as discrete pairs of word strings in the source and target languages. The use of continuous representations is a major advantage, allowing NMT to share statistical power between similar words (e.g. "dog" and "cat") or contexts (e.g. "this is" and "that is"). However, this property also has a drawback in that NMT systems often mistranslate into words that seem natural in the context, but do not reflect the content of the source sentence. For example, Figure 1 is a sentence from our data where the NMT system mistakenly translated "Tunisia" into the word for "Norway." This variety of error is particularly serious because the content words that are often mistranslated by NMT are also the words that play a key role in determining the whole meaning of the sentence. Footnote 1: Tools to replicate our experiments can be found at http://isw3.naist.jp/~philip-a/emnlp2016/index.html In contrast, PBMT and other traditional SMT methods tend to rarely make this kind of mistake. This is because they base their translations on discrete phrase mappings, which ensure that source words will be translated into a target word that has been observed as a translation at least once in the training data. In addition, because the discrete mappings are memorized explicitly, they can be learned efficiently from as little as a single instance (barring errors in word alignments). Thus we hypothesize that if we can incorporate a similar variety of infor- | 1606.02006#0 | 1606.02006#2 | 1606.02006 | [
"1606.02006"
] |
1606.02006#2 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | dogâ and â catâ ) or con- texts (e.g. â this isâ and â that isâ ). However, this property also has a drawback in that NMT systems often mistranslate into words that seem natural in the context, but do not reï¬ ect the content of the source sentence. For example, Figure 1 is a sentence from our data where the NMT system mistakenly trans- lated â Tunisiaâ into the word for â Norway.â This variety of error is particularly serious because the content words that are often mistranslated by NMT are also the words that play a key role in determining the whole meaning of the sentence. One feature of NMT systems is that they treat each word in the vocabulary as a vector of 1Tools to replicate our experiments can be found at http://isw3.naist.jp/~philip-a/emnlp2016/index.html In contrast, PBMT and other traditional SMT methods tend to rarely make this kind of mistake. This is because they base their translations on dis- crete phrase mappings, which ensure that source words will be translated into a target word that has been observed as a translation at least once in the training data. In addition, because the discrete map- pings are memorized explicitly, they can be learned efï¬ ciently from as little as a single instance (barring errors in word alignments). Thus we hypothesize that if we can incorporate a similar variety of infor- mation into NMT, this has the potential to alleviate problems with the previously mentioned fatal errors on low-frequency words. In this paper, we propose a simple, yet effective method to incorporate discrete, probabilistic lexi- cons as an additional information source in NMT (§3). First we demonstrate how to transform lexi- cal translation probabilities (§3.1) into a predictive probability for the next word by utilizing attention vectors from attentional NMT models (Bahdanau et al., 2015). We then describe methods to incorporate this probability into NMT, either through linear in- terpolation with the NMT probabilities (§3.2.2) or as the bias to the NMT predictive distribution (§3.2.1). | 1606.02006#1 | 1606.02006#3 | 1606.02006 | [
"1606.02006"
] |
1606.02006#3 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | We construct these lexicon probabilities by using traditional word alignment methods on the training data (§4.1), other external parallel data resources such as a handmade dictionary (§4.2), or using a hybrid between the two (§4.3). We perform experiments (§5) on two English-Japanese translation corpora to evaluate the method's utility in improving translation accuracy and reducing the time required for training. # 2 Neural Machine Translation The goal of machine translation is to translate a sequence of source words F = f_1^{|F|} into a sequence of target words E = e_1^{|E|}. These words belong to the source vocabulary Vf and the target vocabulary Ve respectively. NMT performs this translation by calculating the conditional probability pm(ei | F, e_1^{i-1}) of the ith target word ei based on the source F and the preceding target words e_1^{i-1}. This is done by encoding the context ⟨F, e_1^{i-1}⟩ into a fixed-width vector ηi, and calculating the probability as follows: pm(ei | F, e_1^{i-1}) = softmax(Ws ηi + bs), (1) where Ws and bs are respectively weight matrix and bias vector parameters. The exact variety of the NMT model depends on how we calculate ηi used as input. While there are many methods to perform this modeling, we opt to use attentional models (Bahdanau et al., 2015), which focus on particular words in the source sentence when calculating the probability of ei. These models represent the current state of the art in NMT, and are also convenient for use in our proposed method. Specifically, we use the method of Luong et al. (2015a), which we describe briefly here and refer readers to the original paper for details. | 1606.02006#2 | 1606.02006#4 | 1606.02006 | [
"1606.02006"
] |
1606.02006#4 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | First, an encoder converts the source sentence F into a matrix R where each column represents a single word in the input sentence as a continuous vector. This representation is generated using a bidirectional encoder: r⃗_j = enc(embed(f_j), r⃗_{j-1}); r⃖_j = enc(embed(f_j), r⃖_{j+1}); r_j = [r⃗_j; r⃖_j]. Here the embed(·) function maps the words into a representation (Bengio et al., 2003), and enc(·) is a stacking long short term memory (LSTM) neural network (Hochreiter and Schmidhuber, 1997; Gers et al., 2000; Sutskever et al., 2014). Finally we concatenate the two vectors r⃗_j and r⃖_j into a bidirectional representation r_j. These vectors are further concatenated into the matrix R where the jth column corresponds to r_j. Next, we generate the output one word at a time while referencing this encoded input sentence and tracking progress with a decoder LSTM. The decoder's hidden state hi is a fixed-length continuous vector representing the previous target words e_1^{i-1}, initialized as h0 = 0. Based on this hi, we calculate a similarity vector αi, with each element equal to αi,j = sim(hi, r_j). (2) sim(·) can be an arbitrary similarity function, which we set to the dot product, following Luong et al. (2015a). We then normalize this into an attention vector, which weights the amount of focus that we put on each word in the source sentence: ai = softmax(αi). | 1606.02006#3 | 1606.02006#5 | 1606.02006 | [
"1606.02006"
] |
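A minimal numpy sketch of the bidirectional encoding just described. It is not the paper's Chainer code: a plain tanh recurrence stands in for the stacked LSTM cell, and the (W, U, b) parameter tuples are assumed inputs.

```python
import numpy as np

def rnn_step(W, U, b, x, h_prev):
    # simple tanh recurrence standing in for the LSTM cell used in the paper
    return np.tanh(W @ x + U @ h_prev + b)

def encode(embeddings, fwd, bwd):
    """Bidirectional encoding: r_j = [r_fwd_j ; r_bwd_j], stacked as the matrix R
    whose j-th column is r_j.  `embeddings` is a list of word vectors embed(f_j);
    `fwd` / `bwd` are (W, U, b) parameter tuples for the two directions."""
    d = fwd[2].shape[0]
    h_f, h_b = np.zeros(d), np.zeros(d)
    fwd_states, bwd_states = [], []
    for x in embeddings:                      # left-to-right pass
        h_f = rnn_step(*fwd, x, h_f)
        fwd_states.append(h_f)
    for x in reversed(embeddings):            # right-to-left pass
        h_b = rnn_step(*bwd, x, h_b)
        bwd_states.append(h_b)
    bwd_states.reverse()
    R = np.stack([np.concatenate([f, b]) for f, b in zip(fwd_states, bwd_states)], axis=1)
    return R                                  # shape: (2d, |F|)
```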
1606.02006#5 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | (3) This attention vector is then used to weight the encoded representation R to create a context vector ci for the current time step: ci = R ai. Finally, we create ηi by concatenating the previous hidden state h_{i-1} with the context vector, and performing an affine transform: ηi = Wη[h_{i-1}; ci] + bη. Once we have this representation of the current state, we can calculate pm(ei | F, e_1^{i-1}) according to Equation (1). The next word ei is chosen according to this probability, and we update the hidden state by inputting the chosen word into the decoder LSTM: hi = enc(embed(ei), h_{i-1}). (4) If we define all the parameters in this model as θ, we can then train the model by minimizing the negative log-likelihood of the training data: θ̂ = argmin_θ Σ_{⟨F,E⟩} Σ_i −log(pm(ei | F, e_1^{i-1}; θ)). # 3 Integrating Lexicons into NMT In §2 we described how traditional NMT models calculate the probability of the next target word pm(ei | e_1^{i-1}, F). Our goal in this paper is to improve the accuracy of this probability estimate by incorporating information from discrete probabilistic lexicons. We assume that we have a lexicon that, given a source word f, assigns a probability pl(e|f) to target word e. For a source word f, this probability will generally be non-zero for a small number of translation candidates, and zero for the majority of words in VE. In this section, we first describe how we incorporate these probabilities into NMT, and explain how we actually obtain the pl(e|f) probabilities in §4. | 1606.02006#4 | 1606.02006#6 | 1606.02006 | [
"1606.02006"
] |
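The attentional decoding step of Equations (1)-(3) can be sketched as a few matrix operations. This is an illustrative numpy version, not the authors' implementation; it assumes the decoder state h has the same dimensionality as the encoder columns r_j so the dot-product similarity is well defined.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def decoder_step(h, R, W_eta, b_eta, W_s, b_s):
    """One attentional step: scores, attention weights, context vector, and
    the target-word distribution of Eq. (1).  `h` is the decoder state that
    summarizes the target prefix; R has one encoded source word per column."""
    alpha = R.T @ h                  # Eq. (2): dot-product similarity with each r_j
    a = softmax(alpha)               # Eq. (3): attention weights over source words
    c = R @ a                        # context vector c_i = R a_i
    eta = W_eta @ np.concatenate([h, c]) + b_eta
    p = softmax(W_s @ eta + b_s)     # Eq. (1): p_m(e_i | F, e_1^{i-1})
    return p, a, eta
```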
1606.02006#6 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | # 3.1 Converting Lexicon Probabilities into Conditioned Predictive Probabilities First, we need to convert lexical probabilities pl(e|f) for the individual words in the source sentence F to a form that can be used together with pm(ei | e_1^{i-1}, F). Given input sentence F, we can construct a matrix in which each column corresponds to a word in the input sentence, each row corresponds to a word in the VE, and the entry corresponds to the appropriate lexical probability: LF is the |Ve| × |F| matrix whose (e, j) entry is pl(e|f_j), i.e. its first row is [pl(e = 1|f_1), ..., pl(e = 1|f_{|F|})] and its last row is [pl(e = |Ve||f_1), ..., pl(e = |Ve||f_{|F|})]. This matrix can be precomputed during the encoding stage because it only requires information about the source sentence F. Next we convert this matrix into a predictive probability over the next word: pl(ei | F, e_1^{i-1} | 1606.02006#5 | 1606.02006#7 | 1606.02006 | [
"1606.02006"
] |
1606.02006#7 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | ). To do so we use the alignment probability a from Equation (3) to weight each column of the LF matrix: pl(ei | F, e_1^{i-1}) = LF ai, i.e. the columns [pl(e = 1|f_j), ..., pl(e = |Ve||f_j)] are combined with the weights a_{i,1}, ..., a_{i,|F|}. This calculation is similar to the way how attentional models calculate the context vector ci, but over a vector representing the probabilities of the target vocabulary, instead of the distributed representations of the source words. The process of involving ai is important because at every time step i, the lexical probability pl(ei | e_1^{i-1}, F) will be influenced by different source words. # 3.2 Combining Predictive Probabilities After calculating the lexicon predictive probability pl(ei | e_1^{i-1}, F), next we need to integrate this probability with the NMT model probability pm(ei | e_1^{i-1}, F). To do so, we examine two methods: (1) adding it as a bias, and (2) linear interpolation. # 3.2.1 Model Bias | 1606.02006#6 | 1606.02006#8 | 1606.02006 | [
"1606.02006"
] |
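The lexicon predictive probability above is just a matrix-vector product between the precomputed lexicon matrix and the attention vector. A small sketch with illustrative names (not the paper's code):

```python
import numpy as np

def build_lexicon_matrix(src_words, lexicon, target_vocab):
    """L_F: one column per source word, one row per target word, entries p_l(e|f_j).
    `lexicon` maps f -> {e: prob}; `target_vocab` maps target word -> row index.
    Anything not listed in the lexicon gets probability 0."""
    L = np.zeros((len(target_vocab), len(src_words)))
    for j, f in enumerate(src_words):
        for e, p in lexicon.get(f, {}).items():
            if e in target_vocab:
                L[target_vocab[e], j] = p
    return L

def lexicon_predictive_prob(L_F, attention):
    # p_l(e_i | F, e_1^{i-1}) = L_F a_i : attention-weighted mixture of the
    # per-source-word lexical distributions
    return L_F @ attention
```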
1606.02006#8 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | In our first bias method, we use pl(·) to bias the probability distribution calculated by the vanilla NMT model. Specifically, we add a small constant ε to pl(·), take the logarithm, and add this adjusted log probability to the input of the softmax as follows: pb(ei | F, e_1^{i-1}) = softmax(Ws ηi + bs + log(pl(ei | F, e_1^{i-1}) + ε)). We take the logarithm of pl(·) so that the values will still be in the probability domain after the softmax is calculated, and add the hyper-parameter ε to prevent zero probabilities from becoming −∞ after taking the log. When ε is small, the model will be more heavily biased towards using the lexicon, and when ε is larger the lexicon probabilities will be given less weight. We use ε = 0.001 for this paper. # 3.2.2 Linear Interpolation We also attempt to incorporate the two probabilities through linear interpolation between the standard NMT model probability pm(·) and the lexicon probability pl(·). We will call this the linear method, and define it as follows: po(ei | F, e_1^{i-1}) = λ pl(ei | F, e_1^{i-1}) + (1 − λ) pm(ei | F, e_1^{i-1}), where λ is an interpolation coefficient that is the result of the sigmoid function λ = sig(x) = 1/(1 + e^{-x}). x is a learnable parameter, and the sigmoid function ensures that the final interpolation level falls between 0 and 1. We choose x = 0 (λ = 0.5) at the beginning of training. This notation is partly inspired by Allamanis et al. (2016) and Gu et al. (2016) who use linear interpolation to merge a standard attentional model with a "copy" operator that copies a source word as-is into the target sentence. The main difference is that they use this to copy words into the output while our method uses it to influence the probabilities of all target words. # 4 Constructing Lexicon Probabilities | 1606.02006#7 | 1606.02006#9 | 1606.02006 | [
"1606.02006"
] |
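The two combination methods can be sketched in a few lines. This is illustrative, not the authors' implementation; `logits` stands for the pre-softmax scores Ws ηi + bs, and `p_lex` for the lexicon predictive probability.

```python
import numpy as np

def combine_bias(logits, p_lex, eps=1e-3):
    """bias method (Sec. 3.2.1): add log(p_l + eps) to the pre-softmax scores."""
    z = logits + np.log(p_lex + eps)
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def combine_linear(p_model, p_lex, x=0.0):
    """linear method (Sec. 3.2.2): interpolate with lambda = sigmoid(x),
    where x is a learnable scalar initialized to 0 (lambda = 0.5)."""
    lam = 1.0 / (1.0 + np.exp(-x))
    return lam * p_lex + (1.0 - lam) * p_model
```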
1606.02006#9 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | In the previous section, we have defined some ways to use predictive probabilities pl(ei | F, e_1^{i-1}) based on word-to-word lexical probabilities pl(e|f). Next, we define three ways to construct these lexical probabilities using automatically learned lexicons, handmade lexicons, or a combination of both. # 4.1 Automatically Learned Lexicons In traditional SMT systems, lexical translation probabilities are generally learned directly from parallel data in an unsupervised fashion using a model such as the IBM models (Brown et al., 1993; Och and Ney, 2003). These models can be used to estimate the alignments and lexical translation probabilities pl(e|f) between the tokens of the two languages using the expectation maximization (EM) algorithm. First, in the expectation step, the algorithm estimates the expected count c(e|f). In the maximization step, lexical probabilities are calculated by dividing the expected count by all possible counts: pl,a(e|f) = c(f, e) / Σ_ẽ c(f, ẽ). The IBM models vary in level of refinement, with Model 1 relying solely on these lexical probabilities, and later IBM models (Models 2, 3, 4, 5) introducing more sophisticated models of fertility and relative alignment. Even though IBM models also occasionally have problems when dealing with "garbage collecting" effects on rare words (e.g. (Liang et al., 2006)), traditional SMT systems generally achieve better translation accuracies of low-frequency words than NMT systems (Sutskever et al., 2014), indicating that these problems are less prominent than they are in NMT. Note that in many cases, NMT limits the target vocabulary (Jean et al., 2015) for training speed or memory constraints, resulting in rare words not being covered by the NMT vocabulary VE. Accordingly, we allocate the remaining probability assigned by the lexicon to the unknown word symbol ⟨unk⟩: pl,a(e = ⟨unk⟩|f) = 1 − Σ_{i∈Ve} pl,a(e = i|f). (5) # 4.2 Manual Lexicons | 1606.02006#8 | 1606.02006#10 | 1606.02006 | [
"1606.02006"
] |
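A sketch of how aligned counts could be turned into the automatic lexicon p_{l,a}, with the out-of-vocabulary mass reassigned to ⟨unk⟩ as in Equation (5). It is not GIZA++ and assumes the (expected) counts c(f, e) are already available.

```python
from collections import defaultdict

def lexicon_from_counts(counts, target_vocab):
    """counts: maps source word f -> {target word e: c(f, e)}.
    Returns p_{l,a}(e|f), with probability mass of target words outside the
    NMT vocabulary moved onto "<unk>" (Eq. 5)."""
    lexicon = {}
    for f, cs in counts.items():
        total = sum(cs.values())
        probs = defaultdict(float)
        for e, c in cs.items():
            if e in target_vocab:
                probs[e] = c / total
            else:
                probs["<unk>"] += c / total   # out-of-vocabulary translations
        lexicon[f] = dict(probs)
    return lexicon
```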
1606.02006#10 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | In addition, for many language pairs, broad-coverage handmade dictionaries exist, and it is desirable that we be able to use the information included in them as well. Unlike automatically learned lexicons, however, handmade dictionaries generally do not contain translation probabilities. To construct the probability pl(e|f), we define the set of translations Kf existing in the dictionary for particular source word f, and assume a uniform distribution over these words: pl,m(e|f) = 1/|Kf| if e ∈ Kf, and 0 otherwise. Following Equation (5), unknown source words will assign their probability mass to the ⟨unk⟩ tag. # 4.3 Hybrid Lexicons Handmade lexicons have broad coverage of words but their probabilities might not be as accurate as the Table 1: Corpus details. Train: BTEC 464K sentences (3.60M Ja / 4.97M En tokens), KFTT 377K sentences (7.77M Ja / 8.04M En tokens); Dev: BTEC 510 sentences (3.8K Ja / 5.3K En), KFTT 1160 sentences (24.3K Ja / 26.8K En); Test: BTEC 508 sentences (3.8K Ja / 5.5K En), KFTT 1169 sentences (26.0K Ja / 28.4K En). | 1606.02006#9 | 1606.02006#11 | 1606.02006 | [
"1606.02006"
] |
1606.02006#11 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | learned ones, particularly if the automatic lexicon is constructed on in-domain data. Thus, we also test a hybrid method where we use the handmade lexicons to complement the automatically learned lexicon.[2][3] Specifically, inspired by phrase table fill-up used in PBMT systems (Bisazza et al., 2011), we use the probability of the automatically learned lexicons pl,a by default, and fall back to the handmade lexicons pl,m only for uncovered words: pl,h(e|f) = pl,a(e|f) if f is covered, and pl,m(e|f) otherwise. (6) # 5 Experiment & Result In this section, we describe experiments we use to evaluate our proposed methods. # 5.1 Settings Dataset: We perform experiments on two widely-used tasks for the English-to-Japanese language pair: KFTT (Neubig, 2011) and BTEC (Kikui et al., 2003). KFTT is a collection of Wikipedia articles about the city of Kyoto and BTEC is a travel conversation corpus. BTEC is an easier translation task than KFTT, because KFTT covers a broader domain, has a larger vocabulary of rare words, and has relatively long sentences. The details of each corpus are depicted in Table 1. We tokenize English according to the Penn Treebank standard (Marcus et al., 1993) and lowercase, and tokenize Japanese using KyTea (Neubig et al., 2011). Footnote 2: Alternatively, we could imagine a method where we combined the training data and dictionary before training the word alignments to create the lexicon. We attempted this, and results were comparable to or worse than the fill-up method, so we use the fill-up method for the remainder of the paper. Footnote 3: While most words in the Vf will be covered by the learned lexicon, many words (13% in experiments) are still left uncovered due to alignment failures or other factors. | 1606.02006#10 | 1606.02006#12 | 1606.02006 | [
"1606.02006"
] |
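The manual lexicon of Section 4.2 and the fill-up combination of Section 4.3 are both simple to express. A short sketch under the same illustrative data layout as above (dictionaries mapping source words to target distributions), not the authors' code:

```python
def manual_lexicon(dictionary_entries):
    # uniform distribution over the dictionary translations K_f of each source word
    return {f: {e: 1.0 / len(es) for e in es} for f, es in dictionary_entries.items()}

def hybrid_lexicon(auto_lex, man_lex):
    """Fill-up (Sec. 4.3): use the learned lexicon whenever it covers f,
    otherwise fall back to the handmade one."""
    hyb = dict(auto_lex)
    for f, probs in man_lex.items():
        if f not in hyb:
            hyb[f] = probs
    return hyb
```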
1606.02006#12 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | We limit training sentence length up to 50 in both experiments and keep the test data at the original length. We replace words of frequency less than a threshold u in both languages with the ⟨unk⟩ symbol and exclude them from our vocabulary. We choose u = 1 for BTEC and u = 3 for KFTT, resulting in |Vf| = 17.8k, |Ve| = 21.8k for BTEC and |Vf| = 48.2k, |Ve| = 49.1k for KFTT. NMT Systems: We build the described models using the Chainer[4] toolkit. The depth of the stacking LSTM is d = 4 and hidden node size h = 800. We concatenate the forward and backward encodings (resulting in a 1600 dimension vector) and then perform a linear transformation to 800 dimensions. We train the system using the Adam (Kingma and Ba, 2014) optimization method with the default settings: α = 1e-3, β1 = 0.9, β2 = 0.999, ε = 1e-8. Additionally, we add dropout (Srivastava et al., 2014) with drop rate r = 0.2 at the last layer of each stacking LSTM unit to prevent overfitting. We use a batch size of B = 64 and we run a total of N = 14 iterations for all data sets. All of the experiments are conducted on a single GeForce GTX TITAN X GPU with a 12 GB memory cache. At test time, we use beam search with beam size b = 5. We follow Luong et al. (2015b) in replacing every unknown token at position i with the target token that maximizes the probability pl,a(ei|fj). We choose source word fj according to the highest alignment score in Equation (3). This unknown word replacement is applied to both baseline and proposed systems. Finally, because NMT models tend to give higher probabilities to shorter sentences (Cho et al., 2014), we discount the probability of the ⟨EOS⟩ token by 10% to correct for this bias. Traditional SMT Systems: We also prepare two traditional SMT systems for comparison: a PBMT system (Koehn et al., 2003) using Moses[5] (Koehn et al., 2007), and a hierarchical phrase-based MT system (Chiang, 2007) using Travatar[6] (Neubig, 2013). Systems are built using the default settings, with models trained on the training data, and weights tuned on the development data. Lexicons: We use a total of 3 lexicons for the Footnote 4: http://chainer.org/index.html Footnote 5: http://www.statmt.org/moses/ Footnote 6: http://www.phontron.com/travatar/ | 1606.02006#11 | 1606.02006#13 | 1606.02006 | [
"1606.02006"
] |
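The unknown-word replacement procedure described above (pick the most-attended source word, then its most probable lexicon translation) can be sketched as a small post-processing pass. Illustrative only; the fallback of copying the source word when the lexicon has no entry is an assumption, not something stated in the paper.

```python
import numpy as np

def replace_unknowns(output_tokens, attentions, src_words, auto_lex):
    """For each "<unk>" at position i, pick the source word f_j with the highest
    attention weight and substitute the target word maximizing p_{l,a}(e|f_j)."""
    fixed = []
    for i, tok in enumerate(output_tokens):
        if tok == "<unk>":
            j = int(np.argmax(attentions[i]))            # most-attended source word
            candidates = auto_lex.get(src_words[j], {})
            tok = max(candidates, key=candidates.get) if candidates else src_words[j]
        fixed.append(tok)
    return fixed
```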
1606.02006#13 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | proposed method, and apply the bias and linear methods for all of them, totaling 6 experiments. The first lexicon (auto) is built on the training data using the automatically learned lexicon method of §4.1 separately for both the BTEC and KFTT experiments. Automatic alignment is performed using GIZA++ (Och and Ney, 2003). The second lexicon (man) is built using the popular English-Japanese dictionary Eijiro[7] with the manual lexicon method of §4.2. Eijiro contains 104K distinct word-to-word translation entries. The third lexicon (hyb) is built by combining the first and second lexicon with the hybrid method of §4.3. Evaluation: We use standard single reference BLEU-4 (Papineni et al., 2002) to evaluate the translation performance. Additionally, we also use NIST (Doddington, 2002), which is a measure that puts a particular focus on low-frequency word strings, and thus is sensitive to the low-frequency words we are focusing on in this paper. We measure the statistically significant differences between systems using paired bootstrap resampling (Koehn, 2004) with 10,000 iterations and measure statistical significance at the p < 0.05 and p < 0.10 levels. Additionally, we also calculate the recall of rare words from the references. We define "rare words" as words that appear less than eight times in the target training corpus or references, and measure the percentage of the time they are recovered by each translation system. | 1606.02006#12 | 1606.02006#14 | 1606.02006 | [
"1606.02006"
] |
1606.02006#14 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | BLEU / NIST / RECALL on BTEC and KFTT; the surviving BTEC BLEU column reads: pbmt 48.18, hiero 52.27, attn 48.31, auto-bias 49.74, hyb-bias 50.34. Table 2: Accuracies for the baseline attentional NMT (attn) and the proposed bias-based method using the automatic (auto-bias) or hybrid (hyb-bias) dictionaries. Bold indicates a gain over the attn baseline; significance markers indicate increases over attn at p < 0.05 and p < 0.10. Traditional phrase-based (pbmt) and hierarchical phrase based (hiero) systems are shown for reference. | 1606.02006#13 | 1606.02006#15 | 1606.02006 | [
"1606.02006"
] |
1606.02006#15 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | # 5.2 Effect of Integrating Lexicons In this section, we first perform a detailed examination of the utility of the proposed bias method when used Footnote 7: http://eijiro.jp Figure 2: Training curves (BLEU vs. training time in minutes) for the baseline attn and the proposed bias method (attn, auto-bias, hyb-bias). with the auto or hyb lexicons, which empirically gave the best results, and perform a comparison among the other lexicon integration methods in the following section. Table 2 shows the results of these methods, along with the corresponding baselines. First, compared to the baseline attn, our bias method achieved consistently higher scores on both test sets. In particular, the gains on the more difficult KFTT set are large, up to 2.3 BLEU, 0.44 NIST, and 30% Recall, demonstrating the utility of the proposed method in the face of more diverse content and fewer high-frequency words. Compared to the traditional pbmt and hiero systems, particularly on KFTT we can see that the proposed method allows the NMT system to exceed the traditional SMT methods in BLEU. This is despite the fact that we are not performing ensembling, which has proven to be essential to exceed traditional systems in several previous works (Sutskever | 1606.02006#14 | 1606.02006#16 | 1606.02006 | [
"1606.02006"
] |
1606.02006#16 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | Input: Do you have an opinion regarding extramarital affairs? Reference: 不倫 に関して 意見 が ありますか。 Furin ni kanshite iken ga arimasu ka. attn: サッカー に関する 意見 は ありますか。 Sakkā ni kansuru iken wa arimasu ka. (Do you have an opinion about soccer?) auto-bias: 不倫 に関して 意見 が ありますか。 | 1606.02006#15 | 1606.02006#17 | 1606.02006 | [
"1606.02006"
] |
1606.02006#17 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | Furin ni kanshite iken ga arimasu ka. (Do you have an opinion about affairs?) Input: Could you put these fragile things in a safe place? Reference: この 壊れ物 を 安全な 場所 に 置いて もらえません か。 Kono kowaremono o anzen'na basho ni oite moraemasen ka. attn: 貴重品 を 安全 に 出したい のですが。 Kichō-hin o anzen ni dashitai nodesuga. (I'd like to safely put out these valuables.) auto-bias: この 壊れ物 を 安全な 場所 に 置いて もらえません か。 Kono kowaremono o anzen'na basho ni oite moraemasen ka. (Could you put these fragile things in a safe place?) Table 3: Examples where the proposed auto-bias improved over the baseline system attn. Underlines indicate words that were mistaken in the baseline output but correct in the proposed model's output. et al., 2015; Luong et al., 2015a; Sennrich et al., 2016). Interestingly, despite gains in BLEU, the NMT methods still fall behind in NIST score on the KFTT data set, demonstrating that traditional SMT systems still tend to have a small advantage in translating lower-frequency words, despite the gains made by the proposed method. In Table 3, we show some illustrative examples where the proposed method (auto-bias) was able to obtain a correct translation while the normal attentional model was not. | 1606.02006#16 | 1606.02006#18 | 1606.02006 | [
"1606.02006"
] |
1606.02006#18 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | The first example is a mistake in translating "extramarital affairs" into the Japanese equivalent of "soccer," entirely changing the main topic of the sentence. This is typical of the errors that we have observed NMT systems make (the mistake from Figure 1 is also from attn, and was fixed by our proposed method). The second example demonstrates how these mistakes can then affect the process of choosing the remaining words, propagating the error through the whole sentence. Next, we examine the effect of the proposed method on the training time for each neural MT method, drawing training curves for the KFTT data in Figure 2. Here we can see that the proposed bias training methods achieve reasonable BLEU scores in the upper 10s even after the first iteration. In contrast, the baseline attn method has a BLEU score of around 5 after the first iteration, and takes significantly longer to approach values close to its maximal accuracy. Figure 3: Attention matrices for baseline attn and proposed bias methods. Lighter colors indicate stronger attention between the words, and boxes surrounding words indicate the correct alignments. This shows that by incorporating lexical probabilities, we can effectively bootstrap the learning of the NMT system, allowing it to approach an appropriate answer in a more timely fashion.[8] It is also interesting to examine the alignment vec- Footnote 8: Note that these gains are despite the fact that one iteration of the proposed method takes longer (167 minutes for attn vs. 275 minutes for auto-bias) due to the necessity to calculate and use the lexical probability matrix for each sentence. It also takes an additional 297 minutes to train the lexicon with GIZA++, but this can be greatly reduced with more efficient training methods (Dyer et al., 2013). | 1606.02006#17 | 1606.02006#19 | 1606.02006 | [
"1606.02006"
] |
1606.02006#19 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | (a) BTEC (BLEU bias/linear, NIST bias/linear): no lexicon 48.31, -, 5.98, -; auto 49.74 / 47.97, 6.03 / 5.90; man 49.08 / 51.04, 6.10 / 6.11; hyb 50.34 / 49.27, 6.14 / 5.94. (b) KFTT (BLEU bias/linear, NIST bias/linear): no lexicon 20.86, -, 5.15, -; auto 23.20 / 18.19, 5.59 / 4.61; man 20.78 / 20.88, 5.12 / 5.11; hyb 22.80 / 20.33, 5.55 / 5.03. Table 4: A comparison of the bias and linear lexicon integration methods on the automatic, manual, and hybrid lexicons. The first line without lexicon is the traditional attentional NMT. | 1606.02006#18 | 1606.02006#20 | 1606.02006 | [
"1606.02006"
] |
1606.02006#20 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | the linear method, with it improving man systems, but causing decreases when using the auto and hyb lexicons. This indicates that the linear method is more suited for cases where the lexicon does not closely match the target domain, and plays a more complementary role. Compared to the log-linear modeling of bias, which strictly enforces constraints imposed by the lexicon distribution (Klakow, 1998), linear interpolation is intuitively more appropriate for integrating this type of complementary information. On the other hand, the performance of linear interpolation was generally lower than that of the bias method. One potential reason for this is the fact that we use a constant interpolation coefficient that was set fixed in every context. Gu et al. (2016) have recently developed methods to use the context information from the decoder to calculate different interpolation coefficients for every decoding step, and it is possible that introducing these methods would improve our results. # 6 Additional Experiments To test whether the proposed method is useful on larger data sets, we also performed follow-up experiments on the larger Japanese-English ASPEC dataset (Nakazawa et al., 2016), which consists of 2 million training examples, 63 million tokens, and an 81,000-word vocabulary. We gained an improvement in BLEU score from 20.82 using the attn baseline to 22.66 using the auto-bias proposed method. This experiment shows that our method scales to larger datasets. # 7 Related Work From the beginning of work on NMT, unknown words that do not exist in the system vocabulary have been focused on as a weakness of these systems. Early methods to handle these unknown words replaced them with appropriate words in the target vocabulary (Jean et al., 2015; Luong et al., 2015b) according to a lexicon similar to the one used in this work. In contrast to our work, these only handle unknown words and do not incorporate information from the lexicon in the learning procedure. | 1606.02006#19 | 1606.02006#21 | 1606.02006 | [
"1606.02006"
] |
1606.02006#21 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | There have also been other approaches that incorporate models that learn when to copy words as-is into the target language (Allamanis et al., 2016; Gu et al., 2016; Gülçehre et al., 2016). These models are similar to the linear approach of §3.2.2, but are only applicable to words that can be copied as-is into the target language. In fact, these models can be thought of as a subclass of the proposed approach that use a lexicon that assigns all its probability to target words that are the same as the source. On the other hand, while we are simply using a static interpolation coefficient λ, these works generally have a more sophisticated method for choosing the interpolation between the standard and "copy" models. Incorporating these into our linear method is a promising avenue for future work. In addition Mi et al. (2016) have also recently proposed a similar approach by limiting the number of vocabulary being predicted by each batch or sentence. This vocabulary is made by considering the original HMM alignments gathered from the training corpus. Basically, this method is a specific version of our bias method that gives some of the vocabulary a bias of negative infinity and all other vocabulary a uniform distribution. Our method improves over this by considering actual translation probabilities, and also considering the attention vector when deciding how to combine these probabilities. Finally, there have been a number of recent works that improve accuracy of low-frequency words using character-based translation models (Ling et al., 2015; Costa-Jussà and Fonollosa, 2016; Chung et al., 2016). However, Luong and Manning (2016) have found that even when using character-based models, incorporating information about words allows for gains in translation accuracy, and it is likely that our lexicon-based method could result in improvements in these hybrid systems as well. # 8 Conclusion & Future Work In this paper, we have proposed a method to incorporate discrete probabilistic lexicons into NMT systems to solve the difficulties that NMT systems have demonstrated with low-frequency words. As a result, we achieved substantial increases in BLEU (2.0-2.3) and NIST (0.13-0.44) scores, and observed qualitative improvements in the translations of content words. | 1606.02006#20 | 1606.02006#22 | 1606.02006 | [
"1606.02006"
] |
1606.02006#22 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | For future work, we are interested in conducting the experiments on larger-scale translation tasks. We also plan to do subjective evaluation, as we expect that improvements in content word translation are critical to subjective impressions of translation results. Finally, we are also interested in improvements to the linear method where λ is calculated based on the context, instead of using a fixed value. # Acknowledgment We thank Makoto Morishita and Yusuke Oda for their help in this project. We also thank the faculty members of AHC lab for their supports and suggestions. This work was supported by grants from the Ministry of Education, Culture, Sport, Science, and Technology of Japan and in part by JSPS KAKENHI Grant Number 16H05873. | 1606.02006#21 | 1606.02006#23 | 1606.02006 | [
"1606.02006"
] |
1606.02006#23 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | # References Miltiadis Allamanis, Hao Peng, and Charles Sutton. 2016. A convolutional attention network for extreme summarization of source code. In Proceedings of the 33th International Conference on Machine Learning (ICML). Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 4th International Conference on Learning Representations (ICLR). Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, pages 1137-1155. Arianna Bisazza, Nick Ruiz, and Marcello Federico. 2011. | 1606.02006#22 | 1606.02006#24 | 1606.02006 | [
"1606.02006"
] |
1606.02006#24 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | Fill-up versus interpolation methods for phrase- based SMT adaptation. In Proceedings of the 2011 International Workshop on Spoken Language Transla- tion (IWSLT), pages 136â 143. Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estima- tion. Computational Linguistics, pages 263â 311. David Chiang. 2007. Hierarchical phrase-based transla- tion. Computational Linguistics, pages 201â 228. Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the properties of neural machine translation: | 1606.02006#23 | 1606.02006#25 | 1606.02006 | [
"1606.02006"
] |
1606.02006#25 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | Encoderâ decoder ap- proaches. In Proceedings of the Workshop on Syntax and Structure in Statistical Translation (SSST), pages 103â 111. Junyoung Chung, Kyunghyun Cho, and Yoshua Bengio. 2016. A character-level decoder without explicit seg- mentation for neural machine translation. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1693â 1703. Marta R. Costa-Juss`a and Jos´e A. R. Fonollosa. 2016. Character-based neural machine translation. In Pro- ceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 357â 361. George Doddington. 2002. | 1606.02006#24 | 1606.02006#26 | 1606.02006 | [
"1606.02006"
] |
1606.02006#26 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | Automatic evaluation of ma- chine translation quality using n-gram co-occurrence statistics. In Proceedings of the Second Interna- tional Conference on Human Language Technology Research, pages 138â 145. Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM model 2. In Proceedings of the 2013 Confer- ence of the North American Chapter of the Associa- tion for Computational Linguistics: Human Language Technologies, pages 644â 648. Felix A. Gers, J¨urgen A. Schmidhuber, and Fred A. Cum- mins. 2000. | 1606.02006#25 | 1606.02006#27 | 1606.02006 | [
"1606.02006"
] |
1606.02006#27 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | Learning to forget: Continual prediction with LSTM. Neural Computation, pages 2451â 2471. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. 2016. Incorporating copying mechanism in sequence- to-sequence learning. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (ACL), pages 1631â 1640. C¸ aglar G¨ulc¸ehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguis- tics (ACL), pages 140â 149. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long Neural Computation, pages short-term memory. 1735â 1780. S´ebastien Jean, KyungHyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large tar- get vocabulary for neural machine translation. In Pro- ceedings of the 53th Annual Meeting of the Associa- tion for Computational Linguistics (ACL) and the 7th Internationali Joint Conference on Natural Language Processing of the Asian Federation of Natural Lan- guage Processing, ACL 2015, July 26-31, 2015, Bei- jing, China, Volume 1: Long Papers, pages 1â 10. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1700â | 1606.02006#26 | 1606.02006#28 | 1606.02006 | [
"1606.02006"
] |
1606.02006#28 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | 1709. Gen-ichiro Kikui, Eiichiro Sumita, Toshiyuki Takezawa, and Seiichi Yamamoto. 2003. Creating corpora for speech-to-speech translation. In 8th European Confer- ence on Speech Communication and Technology, EU- ROSPEECH 2003 - INTERSPEECH 2003, Geneva, Switzerland, September 1-4, 2003, pages 381â 384. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A | 1606.02006#27 | 1606.02006#29 | 1606.02006 | [
"1606.02006"
] |
1606.02006#29 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | method for stochastic optimization. CoRR. Dietrich Klakow. 1998. Log-linear interpolation of lan- guage models. In Proceedings of the 5th International Conference on Speech and Language Processing (IC- SLP). Phillip Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL), pages 48â 54. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, OndË rej Bojar, Alexandra Con- stantin, and Evan Herbst. 2007. | 1606.02006#28 | 1606.02006#30 | 1606.02006 | [
"1606.02006"
] |
1606.02006#30 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | Moses: Open source toolkit for statistical machine translation. In Proceed- ings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL), pages 177â 180. Philipp Koehn. 2004. Statistical signiï¬ cance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP). Percy Liang, Ben Taskar, and Dan Klein. 2006. Align- ment by agreement. In Proceedings of the 2006 Hu- man Language Technology Conference of the North American Chapter of the Association for Computa- tional Linguistics (HLT-NAACL), pages 104â 111. Wang Ling, Isabel Trancoso, Chris Dyer, and Alan W. Black. 2015. Character-based neural machine transla- tion. CoRR. Minh-Thang Luong and Christopher D. Manning. 2016. | 1606.02006#29 | 1606.02006#31 | 1606.02006 | [
"1606.02006"
] |
1606.02006#31 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | Achieving open vocabulary neural machine translation with hybrid word-character models. In Proceedings of the 54th Annual Meeting of the Association for Com- putational Linguistics (ACL), pages 1054â 1063. Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015a. Effective approaches to attention- In Proceedings of based neural machine translation. the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1412â 1421. Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. 2015b. Addressing the rare word problem in neural machine translation. In Proceedings of the 53th Annual Meeting of the As- sociation for Computational Linguistics (ACL) and the 7th Internationali Joint Conference on Natural Lan- guage Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 11â 19. Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beat- rice Santorini. 1993. Building a large annotated cor- pus of English: The Penn treebank. Computational Linguistics, pages 313â 330. Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016. Vocabulary manipulation for neural machine transla- tion. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 124â 129. Toshiaki Nakazawa, Manabu Yaguchi, Kiyotaka Uchi- moto, Masao Utiyama, Eiichiro Sumita, Sadao Kuro- hashi, and Hitoshi Isahara. 2016. Aspec: Asian scien- tiï¬ c paper excerpt corpus. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC 2016), pages 2204â 2208. Graham Neubig, Yosuke Nakata, and Shinsuke Mori. 2011. | 1606.02006#30 | 1606.02006#32 | 1606.02006 | [
"1606.02006"
] |
1606.02006#32 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | Pointwise prediction for robust, adaptable In Proceedings of Japanese morphological analysis. the 49th Annual Meeting of the Association for Com- putational Linguistics (ACL), pages 529â 533. Graham Neubig. 2011. The Kyoto free translation task. http://www.phontron.com/kftt. Graham Neubig. 2013. Travatar: A forest-to-string ma- chine translation engine based on tree transducers. In Proceedings of the 51th Annual Meeting of the Associ- ation for Computational Linguistics (ACL), pages 91â 96. | 1606.02006#31 | 1606.02006#33 | 1606.02006 | [
"1606.02006"
] |
1606.02006#33 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | Franz Josef Och and Hermann Ney. 2003. A system- atic comparison of various statistical alignment mod- els. Computational Linguistics, pages 19â 51. Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: A method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computa- tional Linguistics (ACL), pages 311â 318. | 1606.02006#32 | 1606.02006#34 | 1606.02006 | [
"1606.02006"
] |
1606.02006#34 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | Rico Sennrich, Barry Haddow, and Alexandra Birch. Improving neural machine translation models 2016. In Proceedings of the 54th with monolingual data. Annual Meeting of the Association for Computational Linguistics (ACL), pages 86â 96. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, 2014. Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overï¬ tting. Journal of Machine Learning Re- search, pages 1929â 1958. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Se- quence to sequence learning with neural networks. In Proceedings of the 28th Annual Conference on Neural Information Processing Systems (NIPS), pages 3104â 3112. | 1606.02006#33 | 1606.02006 | [
"1606.02006"
] |
|
1606.01781#0 | Very Deep Convolutional Networks for Text Classification | arXiv:1606.01781v2 [cs.CL] 27 Jan 2017 # Very Deep Convolutional Networks for Text Classification Alexis Conneau Facebook AI Research [email protected] Holger Schwenk Facebook AI Research [email protected] Yann Le Cun Facebook AI Research [email protected] # Loïc Barrault LIUM, University of Le Mans, France [email protected] # Abstract | 1606.01781#1 | 1606.01781 | [
"1502.01710"
] |
|
1606.01781#1 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in par- ticular LSTMs, and convolutional neural networks. However, these architectures are rather shallow in comparison to the deep convolutional networks which have pushed the state-of-the-art in computer vi- sion. We present a new architecture (VD- CNN) for text processing which operates directly at the character level and uses only small convolutions and pooling oper- ations. We are able to show that the per- formance of this model increases with the depth: using up to 29 convolutional layers, we report improvements over the state-of- the-art on several public text classiï¬ | 1606.01781#0 | 1606.01781#2 | 1606.01781 | [
"1502.01710"
] |
1606.01781#2 | Very Deep Convolutional Networks for Text Classification | cation tasks. To the best of our knowledge, this is the ï¬ rst time that very deep convolutional nets have been applied to text processing. terest in the research community and they are sys- tematically applied to all NLP tasks. However, while the use of (deep) neural networks in NLP has shown very good results for many tasks, it seems that they have not yet reached the level to outperform the state-of-the-art by a large margin, as it was observed in computer vision and speech recognition. Convolutional neural networks, in short Con- vNets, are very successful in computer vision. In early approaches to computer vision, handcrafted features were used, for instance â scale-invariant feature transform (SIFT)â (Lowe, 2004), followed by some classiï¬ er. The fundamental idea of Con- vNets(LeCun et al., 1998) is to consider feature extraction and classiï¬ cation as one jointly trained task. This idea has been improved over the years, in particular by using many layers of convolutions and pooling to sequentially extract a hierarchical representation(Zeiler and Fergus, 2014) of the in- put. The best networks are using more than 150 layers as in (He et al., 2016a; He et al., 2016b). | 1606.01781#1 | 1606.01781#3 | 1606.01781 | [
"1502.01710"
] |
1606.01781#3 | Very Deep Convolutional Networks for Text Classification | # 1 Introduction The goal of natural language processing (NLP) is to process text with computers in order to analyze it, to extract information and eventually to rep- resent the same information differently. We may want to associate categories to parts of the text (e.g. POS tagging or sentiment analysis), struc- ture text differently (e.g. parsing), or convert it to some other form which preserves all or part of the content (e.g. machine translation, summariza- tion). The level of granularity of this processing can range from individual characters to subword units (Sennrich et al., 2016) or words up to whole sentences or even paragraphs. After a couple of pioneer works (Bengio et al. (2001), Collobert and Weston (2008), Collobert et al. (2011) among others), the use of neural net- works for NLP applications is attracting huge in- Many NLP approaches consider words as ba- sic units. An important step was the introduction of continuous representations of words(Bengio et al., 2003). These word embeddings are now the state-of-the-art in NLP. However, it is less clear how we should best represent a sequence of words, e.g. a whole sentence, which has complicated syn- In general, in the tactic and semantic relations. same sentence, we may be faced with local and long-range dependencies. Currently, the main- stream approach is to consider a sentence as a se- quence of tokens (characters or words) and to pro- cess them with a recurrent neural network (RNN). Tokens are usually processed in sequential order, from left to right, and the RNN is expected to â memorizeâ the whole sequence in its internal states. The most popular and successful RNN vari- ant are certainly LSTMs(Hochreiter and Schmid- | 1606.01781#2 | 1606.01781#4 | 1606.01781 | [
"1502.01710"
] |
1606.01781#4 | Very Deep Convolutional Networks for Text Classification | Dataset Label Yelp P. Sample Been going to Dr. Goldberg for over 10 years. I think I was one of his 1st patients when he started at MHMG. Hes been great over the years and is really all about the big picture. [...] I love this show, however, there are 14 episodes in the ï¬ rst season and this DVD only shows the ï¬ rst eight. [...]. I hope the BBC will release another DVD that contains all the episodes, but for now this one is still somewhat enjoyable. ju4 xi1n hua2 she4 5 yue4 3 ri4 , be3i ji1ng 2008 a4o yu4n hui4 huo3 ju4 jie1 li4 ji1ng guo4 shi4 jie4 wu3 da4 zho1u 21 ge4 che2ng shi4 â | 1606.01781#3 | 1606.01781#5 | 1606.01781 | [
"1502.01710"
] |
1606.01781#5 | Very Deep Convolutional Networks for Text Classification | What should I look for when buying a laptop? What is the best brand and whatâ s reliable?â ,â Weight and dimensions are important if youâ re planning to travel with the laptop. Get something with at least 512 mb of RAM. [..] is a good brand, and has an easy to use site where you can build a custom laptop.â +1 Amz P. 3(/5) Sogou â Sportsâ â Computer, Internetâ Yah. A. Table 1: Examples of text samples and their labels. huber, 1997) â there are many works which have shown the ability of LSTMs to model long-range dependencies in NLP applications, e.g. (Sunder- meyer et al., 2012; Sutskever et al., 2014) to name just a few. However, we argue that LSTMs are generic learning machines for sequence process- ing which are lacking task-speciï¬ | 1606.01781#4 | 1606.01781#6 | 1606.01781 | [
"1502.01710"
] |
1606.01781#6 | Very Deep Convolutional Networks for Text Classification | c structure. several sentence classiï¬ cation tasks, initially pro- posed by (Zhang et al., 2015). These tasks and our experimental results are detailed in section 4. The proposed deep convolutional network shows signiï¬ cantly better results than previous ConvNets approach. The paper concludes with a discus- sion of future research directions for very deep ap- proach in NLP. It is well known that a fully connected one hidden layer neural network can in principle learn any real- valued function, but much better results can be obtained with a deep problem-speciï¬ c architec- ture which develops hierarchical representations. By these means, the search space is heavily con- strained and efï¬ cient solutions can be learned with gradient descent. ConvNets are namely adapted for computer vision because of the compositional structure of an image. Texts have similar proper- ties : characters combine to form n-grams, stems, words, phrase, sentences etc. We believe that a challenge in NLP is to develop deep architectures which are able to learn hierar- chical representations of whole sentences, jointly with the task. In this paper, we propose to use deep architectures of many convolutional layers to ap- proach this goal, using up to 29 layers. The design of our architecture is inspired by recent progress in computer vision, in particular (Simonyan and Zisserman, 2015; He et al., 2016a). This paper is structured as follows. There have been previous attempts to use ConvNets for text processing. We summarize the previous works in the next section and discuss the relations and dif- ferences. Our architecture is described in detail in section 3. We have evaluated our approach on # 2 Related work There is a large body of research on sentiment analysis, or more generally on sentence classiï¬ ca- tion tasks. Initial approaches followed the clas- sical two stage scheme of extraction of (hand- crafted) features, followed by a classiï¬ cation stage. Typical features include bag-of-words or n- grams, and their TF-IDF. These techniques have been compared with ConvNets by (Zhang et al., 2015; Zhang and LeCun, 2015). We use the same corpora for our experiments. | 1606.01781#5 | 1606.01781#7 | 1606.01781 | [
"1502.01710"
] |
1606.01781#7 | Very Deep Convolutional Networks for Text Classification | More recently, words or characters, have been projected into a low-dimensional space, and these embeddings are combined to obtain a ï¬ xed size representation of the input sentence, which then serves as input for the classiï¬ er. The simplest combination is the element-wise mean. This usually performs badly since all notion of token order is disregarded. Another class of approaches are recursive neu- ral networks. The main idea is to use an ex- ternal tool, namely a parser, which speciï¬ es the order in which the word embeddings are com- bined. At each node, the left and right context are combined using weights which are shared for all nodes (Socher et al., 2011). The state of the top node is fed to the classiï¬ | 1606.01781#6 | 1606.01781#8 | 1606.01781 | [
"1502.01710"
] |
1606.01781#8 | Very Deep Convolutional Networks for Text Classification | er. A recurrent neural net- work (RNN) could be considered as a special case of a recursive NN: the combination is performed sequentially, usually from left to right. The last state of the RNN is used as ï¬ xed-sized representa- tion of the sentence, or eventually a combination of all the hidden states. First works using convolutional neural networks for NLP appeared in (Collobert and Weston, 2008; Collobert et al., 2011). They have been subse- quently applied to sentence classiï¬ cation (Kim, 2014; Kalchbrenner et al., 2014; Zhang et al., 2015). We will discuss these techniques in more detail below. If not otherwise stated, all ap- proaches operate on words which are projected into a high-dimensional space. A rather shallow neural net was proposed in (Kim, 2014): one convolutional layer (using multiple widths and ï¬ lters) followed by a max pooling layer over time. The ï¬ nal classiï¬ | 1606.01781#7 | 1606.01781#9 | 1606.01781 | [
"1502.01710"
] |
1606.01781#9 | Very Deep Convolutional Networks for Text Classification | er uses one fully connected layer with drop-out. Results are reported on six data sets, in particular Stanford Sentiment Treebank (SST). A similar system was proposed in (Kalchbrenner et al., 2014), but us- ing ï¬ ve convolutional layers. An important differ- ence is also the introduction of multiple temporal k-max pooling layers. This allows to detect the k most important features in a sentence, independent of their speciï¬ c position, preserving their relative order. The value of k depends on the length of the sentence and the position of this layer in the network. (Zhang et al., 2015) were the ï¬ rst to per- form sentiment analysis entirely at the character level. Their systems use up to six convolutional layers, followed by three fully connected classiï¬ - cation layers. Convolutional kernels of size 3 and 7 are used, as well as simple max-pooling layers. Another interesting aspect of this paper is the in- troduction of several large-scale data sets for text classiï¬ | 1606.01781#8 | 1606.01781#10 | 1606.01781 | [
"1502.01710"
] |
1606.01781#10 | Very Deep Convolutional Networks for Text Classification | cation. We use the same experimental set- ting (see section 4.1). The use of character level information was also proposed by (Dos Santos and Gatti, 2014): all the character embeddings of one word are combined by a max operation and they are then jointly used with the word embedding in- formation in a shallow architecture. In parallel to our work, (Yang et al., 2016) proposed a based hi- erarchical attention network for document classi- ï¬ cation that perform an attention ï¬ rst on the sen- tences in the document, and on the words in the sentence. Their architecture performs very well on datasets whose samples contain multiple sen- tences. In the computer vision community, the com- bination of recurrent and convolutional networks in one architecture has also been investigated, with the goal to â get the best of both worldsâ , e.g. (Pinheiro and Collobert, 2014). The same idea was recently applied to sentence classiï¬ ca- tion (Xiao and Cho, 2016). A convolutional net- work with up to ï¬ ve layers is used to learn high- level features which serve as input for an LSTM. The initial motivation of the authors was to ob- tain the same performance as (Zhang et al., 2015) with networks which have signiï¬ cantly fewer pa- rameters. They report results very close to those of (Zhang et al., 2015) or even outperform Con- vNets for some data sets. In summary, we are not aware of any work that uses VGG-like or ResNet-like architecture to go deeper than than six convolutional layers (Zhang et al., 2015) for sentence classiï¬ | 1606.01781#9 | 1606.01781#11 | 1606.01781 | [
"1502.01710"
] |
1606.01781#11 | Very Deep Convolutional Networks for Text Classification | cation. Deeper networks were not tried or they were re- ported to not improve performance. This is in sharp contrast to the current trend in computer vi- sion where signiï¬ cant improvements have been re- ported using much deeper networks(Krizhevsky et al., 2012), namely 19 layers (Simonyan and Zis- serman, 2015), or even up to 152 layers (He et al., 2016a). In the remainder of this paper, we describe our very deep convolutional architecture and re- port results on the same corpora than (Zhang et al., 2015). We were able to show that performance improves with increased depth, using up to 29 con- volutional layers. # 3 VDCNN Architecture The overall architecture of our network is shown in Figure 1. Our model begins with a look-up ta- ble that generates a 2D tensor of size (f0, s) that contain the embeddings of the s characters. s is ï¬ xed to 1024, and f0 can be seen as the â RGBâ dimension of the input text. We ï¬ rst apply one layer of 64 convolutions of size 3, followed by a stack of temporal â convolu- tional blocksâ . Inspired by the philosophy of VGG and ResNets we apply these two design rules: (i) for the same output temporal resolution, the layers have the same number of feature maps, (ii) when the temporal resolution is halved, the number of feature maps is doubled. This helps reduce the memory footprint of the network. The networks contains 3 pooling operations (halving the tempo- # fc(2048, nClasses) # I fc(2048, 2048), ReLU | 1606.01781#10 | 1606.01781#12 | 1606.01781 | [
"1502.01710"
] |
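As a concrete companion to the look-up table and first convolutional layer described above, the following sketch builds the input stage in PyTorch (an assumption; the authors report using Torch 7). The vocabulary size, embedding dimension f0 = 16, and sequence length s = 1024 are taken from the surrounding text; the variable names are illustrative.

```python
import torch
import torch.nn as nn

VOCAB_SIZE = 69    # total number of character tokens, including padding/space/unknown
EMBED_DIM = 16     # f0, the "RGB" dimension of the input text
SEQ_LEN = 1024     # s, the fixed number of characters per input

lookup = nn.Embedding(VOCAB_SIZE, EMBED_DIM, padding_idx=0)
first_conv = nn.Conv1d(EMBED_DIM, 64, kernel_size=3, padding=1)   # "3, Temp Conv, 64"

chars = torch.randint(0, VOCAB_SIZE, (8, SEQ_LEN))   # a batch of 8 character sequences
x = lookup(chars).transpose(1, 2)                    # (8, 16, 1024): (batch, f0, s)
x = first_conv(x)                                    # (8, 64, 1024): resolution preserved
```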
1606.01781#12 | Very Deep Convolutional Networks for Text Classification | fc(4096, 2048), ReLU output: 512 x k k-max pooling, k=8 Convolutional Block, 3, 512 optional shortcut Convolutional Block, 3, 512 output: 512 x s/8 pool/2 optional shortcut Convolutional Block, 3, 256 optional shortcut Convolutional Block, 3, 256 Convolutional Block, 3, 256 output: 256 x s/4 pool/2 optional shortcut Convolutional Block, 3, 128 optional shortcut Convolutional Block, 3, 128 output: 128 x s/2 pool/2 optional shortcut Convolutional Block, 3, 64 optional shortcut Convolutional Block, 3, 64 output: 64 x s 3, Temp Conv, 64 output: 16 x s Lookup table, 16 input : 1 x s # Text Figure 1: VDCNN architecture. ral resolution each time by 2), resulting in 3 levels of 128, 256 and 512 feature maps (see Figure 1). The output of these convolutional blocks is a ten- sor of size 512 à sd, where sd = s 2p with p = 3 the number of down-sampling operations. At this level of the convolutional network, the resulting tensor can be seen as a high-level representation of the input text. Since we deal with padded in- put text of ï¬ | 1606.01781#11 | 1606.01781#13 | 1606.01781 | [
"1502.01710"
] |
1606.01781#13 | Very Deep Convolutional Networks for Text Classification | xed size, sd is constant. However, in the case of variable size input, the convolu- tional encoder provides a representation of the in- put text that depends on its initial length s. Repre- sentations of a text as a set of vectors of variable size can be valuable namely for neural machine translation, in particular when combined with an In Figure 1, temporal convolu- attention model. tions with kernel size 3 and X feature maps are denoted â 3, Temp Conv, Xâ , fully connected layers which are linear projections (matrix of size I à O) are denoted â fc(I, O)â and â 3-max pooling, stride 2â means temporal max- pooling with kernel size 3 and stride 2. Most of the previous applications of ConvNets to NLP use an architecture which is rather shal- low (up to 6 convolutional layers) and combines convolutions of different sizes, e.g. spanning 3, 5 and 7 tokens. This was motivated by the fact that convolutions extract n-gram features over tokens and that different n-gram lengths are needed to model short- and long-span relations. In this work, we propose to create instead an architecture which uses many layers of small convolutions (size 3). Stacking 4 layers of such convolutions results in a span of 9 tokens, but the network can learn by it- self how to best combine these different â 3-gram featuresâ in a deep hierarchical manner. Our ar- chitecture can be in fact seen as a temporal adap- tation of the VGG network (Simonyan and Zisser- man, 2015). We have also investigated the same kind of â ResNet shortcutâ connections as in (He et al., 2016a), namely identity and 1 à 1 convolu- tions (see Figure 1). For the classiï¬ cation tasks in this work, the tem- poral resolution of the output of the convolution blocks is ï¬ rst down-sampled to a ï¬ xed dimension using k-max pooling. By these means, the net- work extracts the k most important features, inde- pendently of the position they appear in the sen- tence. The 512 à k resulting features are trans- formed into a single vector which is the input to a three layer fully connected classiï¬ | 1606.01781#12 | 1606.01781#14 | 1606.01781 | [
"1502.01710"
] |
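The k-max pooling and three-layer classifier described above can be sketched as follows, assuming PyTorch and using k = 8 and 2048 hidden units as specified in the next entry; the class count (4, as for AG's news) and the batch/time sizes are only examples.

```python
import torch
import torch.nn as nn

def kmax_pooling(x, k=8):
    """Keep the k largest activations of each feature map, preserving their order.

    x: (batch, channels, time), e.g. the (batch, 512, s/8) output of the conv trunk.
    """
    idx = x.topk(k, dim=-1).indices          # positions of the k largest values
    idx = idx.sort(dim=-1).values            # re-sort so relative temporal order is kept
    return x.gather(-1, idx)                 # (batch, channels, k)

n_classes = 4                                # depends on the classification task
classifier = nn.Sequential(                  # three fully connected layers
    nn.Linear(512 * 8, 2048), nn.ReLU(),
    nn.Linear(2048, 2048), nn.ReLU(),
    nn.Linear(2048, n_classes),              # softmax is folded into the loss function
)

features = kmax_pooling(torch.randn(2, 512, 128))      # (2, 512, 8)
logits = classifier(features.flatten(start_dim=1))     # (2, n_classes)
```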
1606.01781#14 | Very Deep Convolutional Networks for Text Classification | er with ReLU hidden units and softmax outputs. The number of ReLU Temporal Batch Norm 3, Temp Conv, 256 ReLU Temporal Batch Norm 3, Temp Conv, 256 Figure 2: Convolutional block. output neurons depends on the classiï¬ cation task, the number of hidden units is set to 2048, and k to 8 in all experiments. We do not use drop-out with the fully connected layers, but only temporal batch normalization after convolutional layers to regularize our network. # Convolutional Block Each convolutional block (see Figure 2) is a se- layers, each one quence of two convolutional followed by a temporal BatchNorm (Ioffe and Szegedy, 2015) layer and an ReLU activation. The kernel size of all the temporal convolutions is 3, with padding such that the temporal resolution is preserved (or halved in the case of the convolu- tional pooling with stride 2, see below). Steadily increasing the depth of the network by adding more convolutional layers is feasible thanks to the limited number of parameters of very small con- volutional ï¬ lters in all layers. Different depths of the overall architecture are obtained by vary- ing the number of convolutional blocks in between the pooling layers (see table 2). Temporal batch normalization applies the same kind of regulariza- tion as batch normalization except that the activa- tions in a mini-batch are jointly normalized over temporal (instead of spatial) locations. So, for a mini-batch of size m and feature maps of tempo- ral size s, the sum and the standard deviations re- lated to the BatchNorm algorithm are taken over |B| = m · s terms. We explore three types of down-sampling be- tween blocks Ki and Ki+1 (Figure 1) : (i) The ï¬ rst convolutional stride 2 (ResNet-like). layer of Ki+1 has (ii) Ki is followed by a k-max pooling layer where k is such that the resolution is halved (Kalchbrenner et al., 2014). (iii) Ki is followed by max-pooling with kernel size 3 and stride 2 (VGG-like). All these types of pooling reduce the temporal res- olution by a factor 2. | 1606.01781#13 | 1606.01781#15 | 1606.01781 | [
"1502.01710"
] |
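A minimal PyTorch rendering of the convolutional block of Figure 2, assuming kernel size 3 with padding 1 so the temporal resolution is preserved. Note that nn.BatchNorm1d applied to a (batch, channels, time) tensor computes statistics over both batch and temporal positions, which matches the temporal batch normalization described above.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Two temporal convolutions of kernel size 3, each followed by temporal
    batch normalization and a ReLU activation, as in Figure 2."""

    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv1d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(out_channels),   # statistics over batch and temporal positions
            nn.ReLU(),
            nn.Conv1d(out_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(out_channels),
            nn.ReLU(),
        )

    def forward(self, x):                   # x: (batch, channels, time)
        return self.layers(x)

out = ConvBlock(64, 64)(torch.randn(2, 64, 1024))   # (2, 64, 1024): resolution preserved
```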
1606.01781#15 | Very Deep Convolutional Networks for Text Classification | At the ï¬ nal convolutional layer, the resolution is thus sd. Depth: conv block 512 conv block 256 conv block 128 conv block 64 First conv. layer #params [in M] 9 2 2 2 2 1 2.2 17 4 4 4 4 1 4.3 29 4 4 10 10 1 4.6 49 6 10 16 16 1 7.8 Table 2: Number of conv. layers per depth. In this work, we have explored four depths for our networks: 9, 17, 29 and 49, which we de- ï¬ ne as being the number of convolutional lay- ers. The depth of a network is obtained by sum- ming the number of blocks with 64, 128, 256 and 512 ï¬ lters, with each block containing two con- In Figure 1, the network has volutional layers. 2 blocks of each type, resulting in a depth of 2 à (2 + 2 + 2 + 2) = 16. Adding the very ï¬ rst convolutional layer, this sums to a depth of 17 con- volutional layers. The depth can thus be increased or decreased by adding or removing convolutional blocks with a certain number of ï¬ lters. The best conï¬ gurations we observed for depths 9, 17, 29 and 49 are described in Table 2. We also give the number of parameters of all convolutional layers. # 4 Experimental evaluation # 4.1 Tasks and data In the computer vision community, the availabil- ity of large data sets for object detection and im- age classiï¬ cation has fueled the development of new architectures. In particular, this made it pos- sible to compare many different architectures and to show the beneï¬ t of very deep convolutional net- works. We present our results on eight freely avail- able large-scale data sets introduced by (Zhang et al., 2015) which cover several classiï¬ cation tasks such as sentiment analysis, topic classiï¬ cation or news categorization (see Table 3). | 1606.01781#14 | 1606.01781#16 | 1606.01781 | [
"1502.01710"
] |
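The depth bookkeeping of Table 2 can be illustrated with a small builder that stacks conv-BN-ReLU layers per feature-map size and inserts the three VGG-like max-pooling operations (option iii above); stride-2 convolutions or resolution-halving k-max pooling could be swapped in for options (i) and (ii). PyTorch, the condensed single-conv layering, and the default depth-9 configuration (2+2+2+2 layers plus the first 64-filter convolution) are assumptions made for illustration.

```python
import torch
import torch.nn as nn

def conv_bn_relu(c_in, c_out):
    return nn.Sequential(
        nn.Conv1d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm1d(c_out),
        nn.ReLU(),
    )

def build_trunk(layers_per_level=(2, 2, 2, 2), widths=(64, 128, 256, 512)):
    """Stack the convolutional levels of Table 2. The default corresponds to depth 9:
    2+2+2+2 convolutional layers plus the first 64-filter convolution, with the number
    of feature maps doubling each time the temporal resolution is halved."""
    modules = [conv_bn_relu(16, 64)]                  # first conv layer on the embeddings
    c_in = 64
    for n_layers, width in zip(layers_per_level, widths):
        for _ in range(n_layers):
            modules.append(conv_bn_relu(c_in, width))
            c_in = width
        if width != widths[-1]:                       # the 3 pooling operations of Figure 1
            modules.append(nn.MaxPool1d(kernel_size=3, stride=2, padding=1))
    return nn.Sequential(*modules)

trunk = build_trunk()                                 # depth 9
out = trunk(torch.randn(2, 16, 1024))                 # (2, 512, 128): s divided by 2^3
```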
1606.01781#16 | Very Deep Convolutional Networks for Text Classification | The number of training examples varies from 120k up to 3.6M, and the number of classes is comprised between 2 and 14. This is considerably lower than in com- puter vision (e.g. 1 000 classes for ImageNet). #Classes Classiï¬ cation Task #Test 7.6k 60k 70k 38k 50k 60k 650k 400k #Train 120k 450k 560k 560k 650k 1 400k 3 000k 3 600k Data set AGâ | 1606.01781#15 | 1606.01781#17 | 1606.01781 | [
"1502.01710"
] |
1606.01781#17 | Very Deep Convolutional Networks for Text Classification | s news Sogou news DBPedia Yelp Review Polarity Yelp Review Full Yahoo! Answers Amazon Review Full Amazon Review Polarity 4 English news categorization 5 Chinese news categorization 14 Ontology classiï¬ cation 2 Sentiment analysis 5 Sentiment analysis 10 Topic classiï¬ cation 5 Sentiment analysis 2 Sentiment analysis Table 3: Large-scale text classiï¬ cation data sets used in our experiments. See (Zhang et al., 2015) for a detailed description. This has the consequence that each example in- duces less gradient information which may make it harder to train large architectures. It should be also noted that some of the tasks are very ambigu- ous, in particular sentiment analysis for which it is difï¬ cult to clearly associate ï¬ ne grained labels. | 1606.01781#16 | 1606.01781#18 | 1606.01781 | [
"1502.01710"
] |
1606.01781#18 | Very Deep Convolutional Networks for Text Classification | There are equal numbers of examples in each class for both training and test sets. The reader is re- ferred to (Zhang et al., 2015) for more details on the construction of the data sets. Table 4 summa- rizes the best published results on these corpora we are aware of. We do not use â Thesaurus data augmentationâ or any other preprocessing, except lower-casing. Nevertheless, we still outperform the best convolutional neural networks of (Zhang et al., 2015) for all data sets. | 1606.01781#17 | 1606.01781#19 | 1606.01781 | [
"1502.01710"
] |
1606.01781#19 | Very Deep Convolutional Networks for Text Classification | The main goal of our work is to show that it is possible and beneï¬ cial to train very deep convolutional networks as text encoders. Data augmentation may improve our re- sults even further. We will investigate this in future research. # 4.2 Common model settings rate of 0.01 and momentum of 0.9. We follow the same training procedure as in (Zhang et al., layers 2015). We initialize our convolutional following (He et al., 2015). One epoch took from 24 minutes to 2h45 for depth 9, and from 50 minutes to 7h (on the largest datasets) for depth 29. It took between 10 to 15 epoches to converge. The implementation is done using Torch 7. All experiments are performed on a single NVidia K40 GPU. Unlike previous research on the use of ConvNets for text processing, we use temporal batch norm without dropout. # 4.3 Experimental results In this section, we evaluate several conï¬ gurations of our model, namely three different depths and three different pooling types (see Section 3). Our main contribution is a thorough evaluation of net- works of increasing depth using an architecture with small temporal convolution ï¬ lters with dif- ferent types of pooling, which shows that a signif- icant improvement on the state-of-the-art conï¬ gu- rations can be achieved on text classiï¬ cation tasks by pushing the depth to 29 convolutional layers. The following settings have been used in all our experiments. They were found to be best in initial experiments. Following (Zhang et al., 2015), all processing is done at the char- acter level which is the atomic representation of a sentence, same as pixels for images. The dictionary consists of the following characters â abcdefghijklmnopqrstuvwxyz0123456 789-,;.!?:â "/| #$%Ë &*Ë â +=<>()[]{}â plus a special padding, space and unknown token which add up to a total of 69 tokens. The input text is padded to a ï¬ xed size of 1014, larger text are truncated. The character embedding is of size 16. Training is performed with SGD, using a mini-batch of size 128, an initial learning Our deep architecture works well on big data sets in particular, even for small depths. | 1606.01781#18 | 1606.01781#20 | 1606.01781 | [
"1502.01710"
] |
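A sketch of the character quantization step described above: lower-case the text, map each character to an index from the 69-token dictionary, and pad or truncate to 1014 positions. Because the symbol list is partially garbled in the extraction above, the ALPHABET string below is an approximation; PyTorch is assumed, and the SGD settings are quoted from the text.

```python
import torch

# Approximation of the 69-token dictionary (exact symbol set garbled in the extraction).
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789-,;.!?:'\"/|_@#$%^&*~+=<>()[]{}"
PAD, SPACE, UNK = 0, 1, 2
char2idx = {c: i + 3 for i, c in enumerate(ALPHABET)}
char2idx[" "] = SPACE

def quantize(text, length=1014):
    """Lower-case the text, map characters to indices, then pad or truncate."""
    ids = [char2idx.get(c, UNK) for c in text.lower()[:length]]
    ids += [PAD] * (length - len(ids))
    return torch.tensor(ids, dtype=torch.long)

batch = torch.stack([quantize("Been going to Dr. Goldberg for over 10 years."),
                     quantize("I love this show!")])        # (2, 1014) LongTensor

# Training settings quoted from the text: SGD with mini-batch 128, initial learning
# rate 0.01 and momentum 0.9, e.g.
# optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
```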
1606.01781#20 | Very Deep Convolutional Networks for Text Classification | Table 5 shows the test errors for depths 9, 17 and 29 and for each type of pooling : convolution with stride 2, k-max pooling and temporal max-pooling. For the smallest depth we use (9 convolutional layers), we see that our model already performs better than Zhangâ s convolutional baselines (which includes 6 convolutional layers and has a different archi- tecture) on the biggest data sets : Yelp Full, Ya- hoo Answers and Amazon Full and Polarity. | 1606.01781#19 | 1606.01781#21 | 1606.01781 | [
"1502.01710"
] |
1606.01781#21 | Very Deep Convolutional Networks for Text Classification | The most important decrease in classiï¬ cation error can be observed on the largest data set Amazon Full which has more than 3 Million training samples. Yah. A. Conv+RNN [Xiao] 28.26 24.2 Yelp F. Conv [Zhang] 37.95â - Amz. F. Amz. P. Corpus: Method Author Error [Yang] Sogou n-TFIDF n-TFIDF n-TFIDF [Zhang] [Zhang] [Zhang] 1.31 2.81 7.64 - - - AG DBP. Conv [Zhang] 40.43â 36.4 Conv [Zhang] 4.93â - Yelp P. ngrams [Zhang] 4.36 - Table 4: Best published results from previous work. Zhang et al. (2015) best results use a Thesaurus data augmentation technique (marked with an â ). Yang et al. (2016)â s hierarchical methods is particularly adapted to datasets whose samples contain multiple sentences. AG Sogou DBP. Yelp P. Yelp F. Yah. A. Amz. F. Amz. P. 37.63 Convolution 10.17 38.04 KMaxPooling 9.83 36.73 9.17 MaxPooling 36.10 Convolution 9.29 37.41 KMaxPooling 9.39 36.07 8.88 MaxPooling 35.28 Convolution 9.36 37.00 KMaxPooling 8.67 35.74 8.73 MaxPooling Table 5: Testing error of our models on the 8 data sets. No data preprocessing or augmentation is used. We also observe that for a small depth, temporal max-pooling works best on all data sets. iments, it seems to hurt performance to perform this type of max operation at intermediate layers (with the exception of the smallest data sets). Depth improves performance. As we increase the network depth to 17 and 29, the test errors decrease on all data sets, for all types of pooling (with 2 exceptions for 48 comparisons). Going from depth 9 to 17 and 29 for Amazon Full re- duces the error rate by 1% absolute. | 1606.01781#20 | 1606.01781#22 | 1606.01781 | [
"1502.01710"
] |
1606.01781#22 | Very Deep Convolutional Networks for Text Classification | Since the test is composed of 650K samples, 6.5K more test samples have been classiï¬ ed correctly. These improvements, especially on large data sets, are signiï¬ cant and show that increasing the depth is useful for text processing. Overall, compared to previous state-of-the-art, our best architecture with depth 29 and max-pooling has a test error of 37.0 compared to 40.43%. This represents a gain of 3.43% absolute accuracy. | 1606.01781#21 | 1606.01781#23 | 1606.01781 | [
"1502.01710"
] |
1606.01781#23 | Very Deep Convolutional Networks for Text Classification | The signiï¬ cant im- provements which we obtain on all data sets com- pared to Zhangâ s convolutional models do not in- clude any data augmentation technique. Max-pooling performs better than other pool- ing types. In terms of pooling, we can also see that max-pooling performs best overall, very close to convolutions with stride 2, but both are signiï¬ - cantly superior to k-max pooling. Both pooling mechanisms perform a max oper- ation which is local and limited to three consec- utive tokens, while k-max polling considers the whole sentence at once. According to our exper- Our models outperform state-of-the-art Con- vNets. We obtain state-of-the-art results for all data sets, except AGâ s news and Sogou news which are the smallest ones. However, with our very deep architecture, we get closer to the state- of-the-art which are ngrams TF-IDF for these data sets and signiï¬ cantly surpass convolutional mod- els presented in (Zhang et al., 2015). As observed in previous work, differences in accuracy between shallow (TF-IDF) and deep (convolutional) mod- els are more signiï¬ cant on large data sets, but we still perform well on small data sets while getting closer to the non convolutional state-of-the-art re- sults on small data sets. The very deep models even perform as well as ngrams and ngrams-TF- IDF respectively on the sentiment analysis task of Yelp Review Polarity and the ontology classi- ï¬ cation task of the DBPedia data set. Results of Yang et al. (only on Yahoo Answers and Amazon Full) outperform our model on the Yahoo Answers dataset, which is probably linked to the fact that their model is task-speciï¬ c to datasets whose sam- ples that contain multiple sentences like (question, answer). They use a hierarchical attention mecha- nism that apply very well to documents (with mul- tiple sentences). Going even deeper degrades accuracy. Short- cut connections help reduce the degradation. As described in (He et al., 2016a), the gain in accu- racy due to the the increase of the depth is limited when using standard ConvNets. | 1606.01781#22 | 1606.01781#24 | 1606.01781 | [
"1502.01710"
] |
1606.01781#24 | Very Deep Convolutional Networks for Text Classification | When the depth increases too much, the accuracy of the model gets saturated and starts degrading rapidly. This degra- dation problem was attributed to the fact that very deep models are harder to optimize. The gradi- ents which are backpropagated through the very deep networks vanish and SGD with momentum is not able to converge to a correct minimum of the loss function. To overcome this degradation of the model, the ResNet model introduced short- cut connections between convolutional blocks that allow the gradients to ï¬ ow more easily in the net- work (He et al., 2016a). We evaluate the impact of shortcut connections by increasing the number of convolutions to 49 layers. We present an adaptation of the ResNet model to the case of temporal convolutions for text (see Figure 1). Table 6 shows the evolution of the test errors on the Yelp Review Full data set with or without shortcut connections. | 1606.01781#23 | 1606.01781#25 | 1606.01781 | [
"1502.01710"
] |
1606.01781#25 | Very Deep Convolutional Networks for Text Classification | When looking at the column â without shortcutâ , we observe the same degradation problem as in the original ResNet ar- ticle: when going from 29 to 49 layers, the test error rate increases from 35.28 to 37.41 (while the training error goes up from 29.57 to 35.54). When using shortcut connections, we observe improved results when the network has 49 layers: both the training and test errors go down and the network is less prone to underï¬ tting than it was without short- cut connections. While shortcut connections give better results when the network is very deep (49 layers), we were not able to reach state-of-the-art results with them. We plan to further explore adaptations of residual networks to temporal convolutions as we think this a milestone for going deeper in NLP. Residual units (He et al., 2016a) better adapted to the text processing task may help for training even deeper models for text processing, and is left for future research. Exploring these models on text classiï¬ cation tasks with more classes sounds promising. Note that one of the most important difference between the classiï¬ cation tasks discussed in this work and ImageNet is that the latter deals with 1000 classes and thus much more information is back-propagated to the network through the gra- | 1606.01781#24 | 1606.01781#26 | 1606.01781 | [
"1502.01710"
] |
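A minimal sketch of the shortcut connections discussed above, assuming PyTorch: the residual path is the identity when the input and output feature maps match and a 1x1 temporal convolution otherwise, and the wrapped block is assumed to preserve the temporal resolution (a strided shortcut would be needed for the ResNet-like down-sampling variant).

```python
import torch
import torch.nn as nn

class ShortcutBlock(nn.Module):
    """ResNet-style shortcut around a temporal block: identity when the number of
    feature maps is unchanged, a 1x1 convolution otherwise. The wrapped block is
    assumed to preserve the temporal resolution."""

    def __init__(self, block, in_channels, out_channels):
        super().__init__()
        self.block = block
        self.shortcut = (nn.Identity() if in_channels == out_channels
                         else nn.Conv1d(in_channels, out_channels, kernel_size=1))

    def forward(self, x):
        return self.block(x) + self.shortcut(x)

inner = nn.Sequential(nn.Conv1d(256, 512, 3, padding=1), nn.BatchNorm1d(512), nn.ReLU(),
                      nn.Conv1d(512, 512, 3, padding=1), nn.BatchNorm1d(512), nn.ReLU())
res = ShortcutBlock(inner, 256, 512)
out = res(torch.randn(2, 256, 128))     # (2, 512, 128)
```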
1606.01781#26 | Very Deep Convolutional Networks for Text Classification | depth without shortcut with shortcut 9 17 29 49 37.63 36.10 35.28 37.41 40.27 39.18 36.01 36.15 Table 6: Test error on the Yelp Full data set for all depths, with or without residual connections. dients. Exploring the impact of the depth of tem- poral convolutional models on categorization tasks with hundreds or thousands of classes would be an interesting challenge and is left for future research. # 5 Conclusion We have presented a new architecture for NLP which follows two design principles: 1) operate at the lowest atomic representation of text, i.e. char- acters, and 2) use a deep stack of local operations, i.e. convolutions and max-pooling of size 3, to learn a high-level hierarchical representation of a sentence. This architecture has been evaluated on eight freely available large-scale data sets and we were able to show that increasing the depth up to 29 convolutional layers steadily improves perfor- mance. Our models are much deeper than pre- viously published convolutional neural networks and they outperform those approaches on all data sets. | 1606.01781#25 | 1606.01781#27 | 1606.01781 | [
"1502.01710"
] |
1606.01781#27 | Very Deep Convolutional Networks for Text Classification | To the best of our knowledge, this is the ï¬ rst time that the â beneï¬ t of depthsâ was shown for convolutional neural networks in NLP. Eventhough text follows human-deï¬ ned rules and images can be seen as raw signals of our en- vironment, images and small texts have similar properties. Texts are also compositional for many languages. Characters combine to form n-grams, stems, words, phrase, sentences etc. These simi- lar properties make the comparison between com- puter vision and natural language processing very proï¬ table and we believe future research should invest into making text processing models deeper. Our work is a ï¬ rst attempt towards this goal. In this paper, we focus on the use of very deep convolutional neural networks for sentence classi- ï¬ cation tasks. Applying similar ideas to other se- quence processing tasks, in particular neural ma- chine translation is left for future research. It needs to be investigated whether these also beneï¬ t from having deeper convolutional encoders. | 1606.01781#26 | 1606.01781#28 | 1606.01781 | [
"1502.01710"
] |
1606.01781#28 | Very Deep Convolutional Networks for Text Classification | # References Yoshua Bengio, Rejean Ducharme, and Pascal Vin- cent. 2001. A neural probabilistic language model. In NIPS, volume 13, pages 932â 938, Vancouver, British Columbia, Canada. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic lan- guage model. Journal of machine learning research, 3(Feb):1137â 1155. Ronan Collobert and Jason Weston. 2008. A uniï¬ ed architecture for natural language processing: deep neural networks with multitask learning. In ICML, pages 160â | 1606.01781#27 | 1606.01781#29 | 1606.01781 | [
"1502.01710"
] |
1606.01781#29 | Very Deep Convolutional Networks for Text Classification | 167, Helsinki, Finland. Ronan Collobert, Jason Weston Lon Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. 2011. Natural language processing (almost) from scratch. JMLR, pages 2493â 2537. C´ıcero Nogueira Dos Santos and Maira Gatti. 2014. Deep convolutional neural networks for sentiment analysis of short texts. In COLING, pages 69â 78, Dublin, Ireland. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Delving deep into rectiï¬ ers: Surpass- ing human-level performance on imagenet classiï¬ - In Proceedings of the IEEE international cation. conference on computer vision, pages 1026â | 1606.01781#28 | 1606.01781#30 | 1606.01781 | [
"1502.01710"
] |
1606.01781#30 | Very Deep Convolutional Networks for Text Classification | 1034, Santiago, Chile. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016a. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770â 778, Las Vegas, Nevada, USA. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Identity mappings in deep residual Sun. 2016b. networks. In European Conference on Computer Vision, pages 630â 645, Amsterdam, Netherlands. Springer. 1997. Long short-term memory. Neural computation, 9(8):1735â | 1606.01781#29 | 1606.01781#31 | 1606.01781 | [
"1502.01710"
] |
1606.01781#31 | Very Deep Convolutional Networks for Text Classification | 1780. Sergey Ioffe and Christian Szegedy. 2015. Batch nor- malization: Accelerating deep network training by reducing internal covariate shift. In ICML, pages 448â 456, Lille, France. Nal Kalchbrenner, Edward Grefenstette, and Phil Blun- som. 2014. A convolutional neural network for modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computa- tional Linguistics, pages 655â | 1606.01781#30 | 1606.01781#32 | 1606.01781 | [
"1502.01710"
] |
1606.01781#32 | Very Deep Convolutional Networks for Text Classification | 665, Baltimore, Mary- land, USA. 2014. Convolutional neural networks for sentence classiï¬ cation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746â 1751, Doha, Qatar. Association for Computational Lin- guistics. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hin- ton. 2012. Imagenet classiï¬ cation with deep con- volutional neural networks. In Advances in neural information processing systems, pages 1097â 1105, Lake Tahoe, California, USA. Yann LeCun, L´eon Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278â 2324. David G Lowe. 2004. Distinctive image features from International journal of scale-invariant keypoints. computer vision, 60(2):91â 110. Pedro HO Pinheiro and Ronan Collobert. 2014. Re- current convolutional neural networks for scene la- beling. In ICML, pages 82â 90, Beijing, China. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. | 1606.01781#31 | 1606.01781#33 | 1606.01781 | [
"1502.01710"
] |
1606.01781#33 | Very Deep Convolutional Networks for Text Classification | Neural machine translation of rare words with subword units. pages 1715â 1725. Karen Simonyan and Andrew Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. In ICLR, San Diego, California, USA. Richard Socher, Jeffrey Pennington, Eric H Huang, Andrew Y Ng, and Christopher D Manning. 2011. Semi-supervised recursive autoencoders for predict- ing sentiment distributions. In Proceedings of the conference on empirical methods in natural lan- guage processing, pages 151â | 1606.01781#32 | 1606.01781#34 | 1606.01781 | [
"1502.01710"
] |
1606.01781#34 | Very Deep Convolutional Networks for Text Classification | 161, Edinburgh, UK. Association for Computational Linguistics. Martin Sundermeyer, Ralf Schl¨uter, and Hermann Ney. 2012. Lstm neural networks for language model- ing. In Interspeech, pages 194â 197, Portland, Ore- gon, USA. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural net- works. In NIPS, pages 3104â | 1606.01781#33 | 1606.01781#35 | 1606.01781 | [
"1502.01710"
] |
1606.01781#35 | Very Deep Convolutional Networks for Text Classification | 3112, Montreal, Canada. 2016. Efï¬ cient character-level document classiï¬ cation by combin- ing convolution and recurrent layers. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchi- cal attention networks for document classiï¬ cation. In Proceedings of NAACL-HLT, pages 1480â 1489, San Diego, California, USA. Matthew D Zeiler and Rob Fergus. 2014. | 1606.01781#34 | 1606.01781#36 | 1606.01781 | [
"1502.01710"
] |
1606.01781#36 | Very Deep Convolutional Networks for Text Classification | Visualizing and understanding convolutional networks. In Eu- ropean conference on computer vision, pages 818â 833, Zurich, Switzerland. Springer. Xiang Zhang and Yann LeCun. 2015. Text understand- ing from scratch. arXiv preprint arXiv:1502.01710. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- siï¬ cation. In NIPS, pages 649â 657, Montreal, Canada. | 1606.01781#35 | 1606.01781 | [
"1502.01710"
] |
|
1606.01885#0 | Learning to Optimize | arXiv:1606.01885v1 [cs.LG] 6 Jun 2016 # Learning to Optimize # Ke Li Jitendra Malik Department of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley, CA 94720 United States {ke.li,malik}@eecs.berkeley.edu # Abstract Algorithm design is a laborious process and often requires many iterations of ideation and validation. In this paper, we explore automating algorithm design and present a method to learn an optimization algorithm, which we believe to be the first method that can automatically discover a better algorithm. We approach this problem from a reinforcement learning perspective and represent any particular optimization algorithm as a policy. We learn an optimization algorithm using guided policy search and demonstrate that the resulting algorithm outperforms existing hand-engineered algorithms in terms of convergence speed and/or the fi | 1606.01885#1 | 1606.01885 | [
"1505.00521"
] |
|
1606.01885#1 | Learning to Optimize | nal objective value. # Introduction The current approach to designing algorithms is a laborious process. First, the designer must study the problem and devise an algorithm guided by a mixture of intuition, theoretical and/or empirical insight and general design paradigms. She then needs to analyze the algorithm's performance on prototypical examples and compare it to that of existing algorithms. If the algorithm falls short, she must uncover the underlying cause and find clever ways to overcome the discovered shortcomings. She iterates on this process until she arrives at an algorithm that is superior to existing algorithms. Given the often protracted nature of this process, a natural question to ask is: can we automate it? In this paper, we focus on automating the design of unconstrained continuous optimization algorithms, which are some of the most powerful and ubiquitous tools used in all areas of science and engineering. Extensive work over the past several decades has yielded many popular methods, like gradient descent, momentum, conjugate gradient and L-BFGS. These algorithms share one commonality: they are all hand-engineered, that is, the steps of these algorithms are carefully designed by human experts. Just as deep learning has achieved tremendous success by automating feature engineering, automating algorithm design could open the way to similar performance gains. We learn a better optimization algorithm by observing its execution. To this end, we formulate the problem as a reinforcement learning problem. Under this framework, any particular optimization algorithm simply corresponds to a policy. We reward optimization algorithms that converge quickly and penalize those that do not. Learning an optimization algorithm then reduces to finding an optimal policy, which can be solved using any reinforcement learning method. To differentiate the algorithm that performs learning from the algorithm that is learned, we will henceforth refer to the former as the "learning algorithm" or "learner" and the latter as the "autonomous algorithm" or "policy". | 1606.01885#0 | 1606.01885#2 | 1606.01885 | [
"1505.00521"
] |
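To illustrate the correspondence stated above (an optimization algorithm is a policy, and a good policy is one whose rollouts decrease the objective quickly), here is a schematic sketch. It is not the paper's formulation, which uses guided policy search with its own state, action and cost definitions; the state features, reward, and step size below are illustrative placeholders.

```python
import numpy as np

def rollout(policy, f, grad_f, x0, horizon=20):
    """Treat an optimization algorithm as a policy: at each step the policy maps the
    current state (here just the gradient) to an update, and the per-step reward
    penalizes the current objective value, so faster convergence earns more reward."""
    x, total_reward = x0, 0.0
    for _ in range(horizon):
        action = policy(grad_f(x))     # the policy chooses the parameter update
        x = x + action
        total_reward += -f(x)          # reward quick descent, penalize slow progress
    return total_reward, x

# A hand-engineered baseline expressed in this interface: plain gradient descent.
gd_policy = lambda g: -0.1 * g
f = lambda x: float(np.sum(x ** 2))    # a toy quadratic objective
grad_f = lambda x: 2.0 * x
reward, x_final = rollout(gd_policy, f, grad_f, x0=np.ones(5))
```

Under this interface, learning an optimizer amounts to searching for a policy whose accumulated reward exceeds that of hand-engineered baselines such as the gradient-descent policy above.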
1606.01885#2 | Learning to Optimize | We use an off-the-shelf reinforcement learning algorithm known as guided policy search [17], which has demonstrated success in a variety of robotic control settings [18, 10, 19, 12]. We show empirically that the autonomous optimization algorithm we learn converges faster and/or ï¬ nds better optima than existing hand-engineered optimization algorithms. # 2 Related Work Early work has explored the general theme of speeding up learning with accumulation of learning experience. This line of work, known as â learning to learnâ or â meta-learningâ [1, 27, 5, 26], considers the problem of devising methods that can take advantage of knowledge learned on other related tasks to train faster, a problem that is today better known as multi-task learning and transfer learning. In contrast, the proposed method can learn to accelerate the training procedure itself, without necessarily requiring any training on related auxiliary tasks. A different line of work, known as â programming by demonstrationâ [7], considers the problem of learning programs from examples of input and output. Several different approaches have been proposed: Liang et al. [20] represents programs explicitly using a formal language, constructs a hierarchical Bayesian prior over programs and performs inference using an MCMC sampling procedure and Graves et al. [11] represents programs implicitly as sequences of memory access operations and trains a recurrent neural net to learn the underlying patterns in the memory access operations. Subsequent work proposes variants of this model that use different primitive memory access operations [14], more expressive operations [16, 28] or other non-differentiable operations [30, 29]. Others consider building models that permit parallel execution [15] or training models with stronger supervision in the form of execution traces [23]. The aim of this line of work is to replicate the behaviour of simple existing algorithms from examples, rather than to learn a new algorithm that is better than existing algorithms. There is a rich body of work on hyperparameter optimization, which studies the optimization of hyperparameters used to train a model, such as the learning rate, the momentum decay factor and regularization parameters. Most methods [13, 4, 24, 25, 9] rely on sequential model-based Bayesian optimization [22, 6], while others adopt a random search approach [3] or use gradient- based optimization [2, 8, 21]. | 1606.01885#1 | 1606.01885#3 | 1606.01885 | [
"1505.00521"
] |