Dataset schema: doi (string, length 10); chunk-id (int64, 0-936); chunk (string, 401-2.02k chars); id (string, 12-14 chars); title (string, 8-162); summary (string, 228-1.92k); source (string, 31); authors (string, 7-6.97k); categories (string, 5-107); comment (string, 4-398); journal_ref (string, 8-194); primary_category (string, 5-17); published (string, 8); updated (string, 8); references (list).
1705.03122
31
Table 2. Accuracy of ensembles with eight models. We show both likelihood and Reinforce (RL) results for GNMT; Zhou et al. (2016) and ConvS2S use simple likelihood training.

Table 3. CPU and GPU generation speed in seconds on the development set of WMT'14 English-French. We show results for different beam sizes b. GNMT figures are taken from Wu et al. (2016). CPU speeds are not directly comparable because Wu et al. (2016) use an 88-core machine versus our 48-core setup.

| System | BLEU | Time (s) |
|---|---|---|
| GNMT GPU (K80) | 31.20 | 3,028 |
| GNMT CPU 88 cores | 31.20 | 1,322 |
| GNMT TPU | 31.21 | 384 |
| ConvS2S GPU (K40), b = 1 | 33.45 | 327 |
| ConvS2S GPU (M40), b = 1 | 33.45 | 221 |
| ConvS2S GPU (GTX-1080ti), b = 1 | 33.45 | 142 |
| ConvS2S CPU 48 cores, b = 1 | 33.45 | 142 |
| ConvS2S GPU (K40), b = 5 | 34.10 | 587 |
| ConvS2S CPU 48 cores, b = 5 | 34.10 | 482 |
| ConvS2S GPU (M40), b = 5 | 34.10 | 406 |
| ConvS2S GPU (GTX-1080ti), b = 5 | 34.10 | 256 |
1705.03122#31
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
32
The translations produced by our models often match the length of the references, particularly for the large WMT'14 English-French task, or are very close for small to medium data sets such as WMT'14 English-German or WMT'16 English-Romanian.

# 5.2. Ensemble Results

Next, we ensemble eight likelihood-trained models for both WMT'14 English-German and WMT'14 English-French and compare to previous work which also reported ensemble results. For the former, we also show the result when ensembling 10 models. Table 2 shows that we outperform the best current ensembles on both datasets.
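The text above reports ensemble accuracy but does not spell out how the eight models are combined at decoding time. A common choice, shown here only as a hedged sketch (the paper may combine scores differently), is to average the member models' per-step output distributions and feed the resulting log-probabilities to beam search; `ensemble_step` and the toy probabilities below are illustrative, not part of any released code.

```python
import numpy as np

def ensemble_step(per_model_probs):
    """Average the member models' softmax distributions for one decoding
    step and return log-probabilities for the beam-search accumulator."""
    avg = np.mean(np.stack(per_model_probs), axis=0)
    return np.log(avg)

# toy usage with two "models" over a 4-word vocabulary
p1 = np.array([0.7, 0.1, 0.1, 0.1])
p2 = np.array([0.5, 0.3, 0.1, 0.1])
print(ensemble_step([p1, p2]).argmax())  # -> 0
```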
1705.03122#32
1705.03122
33
# 5.3. Generation Speed

Next, we evaluate the inference speed of our architecture on the development set of the WMT'14 English-French task, which is the concatenation of newstest2012 and newstest2013; it comprises 6003 sentences. We measure generation speed both on GPU and CPU hardware. Specifically, we measure GPU speed on three generations of Nvidia cards: a GTX-1080ti, an M40, as well as an older K40 card. CPU timings are measured on one host with 48 hyper-threaded cores (Intel Xeon E5-2680 @ 2.50GHz) with 40 workers. In all settings, we batch up to 128 sentences, composing batches with sentences of equal length. Note that the majority of batches is smaller because of the small size of the development set. We experiment with beams of size 5 as well as greedy search, i.e. a beam of size 1. To make generation fast, we do not recompute convolution states that have not changed compared to the previous time step but rather copy (shift) these activations.
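A minimal sketch of the state-shifting trick described above: at inference time a causal decoder convolution only needs the last k inputs, so a rolling buffer is shifted by one position and the newest state is appended instead of recomputing over the whole prefix. The buffer layout, `weights` shape, and function name are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def incremental_conv_step(cache, new_input, weights):
    """One causal-convolution step at inference time.

    cache:     (k, d)        last k decoder inputs for this layer
    new_input: (d,)          input at the current time step
    weights:   (k, d, d_out) convolution filters
    """
    cache = np.roll(cache, -1, axis=0)   # shift the buffer by one step
    cache[-1] = new_input                # append the newest state
    out = np.einsum('kd,kde->e', cache, weights)  # convolve over the buffer
    return cache, out

k, d, d_out = 5, 8, 8
cache = np.zeros((k, d))
cache, out = incremental_conv_step(cache, np.random.randn(d),
                                   np.random.randn(k, d, d_out))
```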
1705.03122#33
1705.03122
34
Wu et al. (2016) use Nvidia K80 GPUs, which are essentially two K40s. We did not have such a GPU available and therefore run experiments on an older K40 card, which is inferior to a K80, in addition to the newer M40 and GTX-1080ti cards. The results (Table 3) show that our model can generate translations on a K40 GPU at 9.3 times the speed and 2.25 BLEU higher; on an M40 the speed-up is up to 13.7 times, and on a GTX-1080ti card the speed is 21.3 times faster. A larger beam of size 5 decreases speed but gives better BLEU. On CPU, our model is up to 9.3 times faster; however, the GNMT CPU results were obtained with an 88-core machine, whereas our results were obtained with just over half the number of cores. On a per-CPU-core basis, our model is 17 times faster at a better BLEU. Finally, our CPU speed is 2.7 times higher than GNMT on a custom TPU chip, which shows that high speed can be achieved on commodity hardware. We do not report TPU figures as we do not have access to this hardware.
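A quick check of the per-core figure quoted above, using the Table 3 timings and assuming that wall-clock time scales linearly with the number of cores (a simplification, not a measurement from the paper):

```python
# GNMT: 1,322 s on 88 CPU cores; ConvS2S (b = 1): 142 s on 48 cores (Table 3).
gnmt_time, gnmt_cores = 1322.0, 88
convs2s_time, convs2s_cores = 142.0, 48

wall_clock_speedup = gnmt_time / convs2s_time
per_core_speedup = (gnmt_time * gnmt_cores) / (convs2s_time * convs2s_cores)
print(round(wall_clock_speedup, 1), round(per_core_speedup, 1))  # 9.3 17.1
```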
1705.03122#34
1705.03122
35
# 5.4. Position Embeddings

In the following sections, we analyze the design choices in our architecture. The remaining results in this paper are based on the WMT'14 English-German task with 13 encoder layers at kernel size 3 and 5 decoder layers at kernel size 5. We use a target vocabulary of 160K words as well as vocabulary selection (Mi et al., 2016; L'Hostis et al., 2016) to decrease the size of the output layer, which speeds up training and testing. The average vocabulary size for each training batch is about 20K target words. All figures are averaged over three runs (§4) and BLEU is reported on newstest2014 before unknown word replacement. We compare to the results reported in Wu et al. (2016).

We start with an experiment that removes the position em-

| | PPL | BLEU |
|---|---|---|
| ConvS2S | 6.64 | 21.7 |
| -source position | 6.69 | 21.3 |
| -target position | 6.63 | 21.5 |
| -source & target position | 6.68 | 21.2 |

Table 4. Effect of removing position embeddings from our model in terms of validation perplexity (valid PPL) and BLEU.
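Vocabulary selection as used above restricts the output layer to a batch-specific candidate set instead of all 160K target words. The selection criteria are those of Mi et al. (2016) and L'Hostis et al. (2016); the sketch below assumes the candidate indices are already given and only illustrates how the restricted output layer is evaluated. Function and variable names are hypothetical.

```python
import numpy as np

def batch_output_layer(decoder_states, out_embed, candidates):
    """Score only a per-batch candidate vocabulary.

    decoder_states: (batch, time, d)
    out_embed:      (vocab, d)  full output embedding matrix
    candidates:     (n_cand,)   indices into the vocabulary, n_cand << vocab
    """
    sub_embed = out_embed[candidates]        # (n_cand, d)
    logits = decoder_states @ sub_embed.T    # (batch, time, n_cand)
    return logits  # softmax / loss run over n_cand classes only

states = np.zeros((2, 3, 8))
full_embed = np.zeros((100, 8))
cands = np.arange(10)
print(batch_output_layer(states, full_embed, cands).shape)  # (2, 3, 10)
```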
1705.03122#35
1705.03122
37
beddings from the encoder and decoder (§3.1). These embeddings allow our model to identify which portion of the source and target sequence it is dealing with, but they also impose a restriction on the maximum sentence length. Table 4 shows that position embeddings are helpful but that our model still performs well without them. Removing the source position embeddings results in a larger accuracy decrease than removing the target position embeddings. However, removing both source and target positions decreases accuracy only by 0.5 BLEU. We had assumed that the model would not be able to calibrate the length of the output sequences very well without explicit position information; however, the output lengths of models without position embeddings closely match those of models with position information. This indicates that the models can learn relative position information within the contexts visible to the encoder and decoder networks, which can observe up to 27 and 25 words respectively.

Table 5. Multi-step attention in all five decoder layers or fewer layers in terms of validation perplexity (PPL) and test BLEU.

[Figure 2 (plot): BLEU for encoders and decoders with different numbers of layers; x-axis: Layers (1-25), y-axis: BLEU (19-22).]
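For concreteness, a toy sketch of the input representation of §3.1, where a learned absolute position embedding is added to each word embedding; the sizes and initialization below are illustrative, not the paper's settings. The fixed `max_pos` table is what imposes the maximum-sentence-length restriction mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, max_pos, d = 1000, 128, 16            # toy sizes
word_embed = rng.normal(0, 0.1, (vocab, d))
pos_embed = rng.normal(0, 0.1, (max_pos, d))

def embed(tokens):
    """Word embedding plus an embedding of the token's absolute position."""
    positions = np.arange(len(tokens))
    return word_embed[tokens] + pos_embed[positions]

x = embed(np.array([5, 42, 7]))              # shape (3, d)
```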
1705.03122#37
1705.03122
38
Recurrent models typically do not use explicit position embeddings since they can learn where they are in the sequence through the recurrent hidden state computation. In our setting, the use of position embeddings requires only a simple addition to the input word embeddings, which is a negligible overhead.

# 5.5. Multi-step Attention

The multiple attention mechanism (§3.3) computes a separate source context vector for each decoder layer. The computation also takes into account contexts computed for preceding decoder layers of the current time step as well as previous time steps that are within the receptive field of the decoder. How does multiple attention compare to attention in fewer layers or even only in a single layer, as is usual? Table 5 shows that attention in all decoder layers achieves the best validation perplexity (PPL). Furthermore, removing more and more attention layers decreases accuracy, both in terms of BLEU as well as PPL.

The computational overhead for attention is very small compared to the rest of the network. Training with attention in all five decoder layers processes 3624 target words per second on average on a single GPU, compared to 3772 words per second for attention in a single layer. This is only
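A simplified sketch of one decoder layer's attention step (§3.3): the layer state is combined with the embedding of the previous target element, dot-product scores against the encoder outputs are normalized, and the context sums encoder outputs plus source input embeddings. It is batched over all target positions, as discussed above. Shapes are illustrative, and details such as scaling and padding masks are omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def decoder_layer_attention(h, g, z, e, W, b):
    """Attention of a single decoder layer over all target positions.

    h: (T, dim) decoder layer states     g: (T, dim) target input embeddings
    z: (S, dim) encoder outputs          e: (S, dim) source input embeddings
    W: (dim, dim), b: (dim,) projection of the decoder state
    """
    d = h @ W + b + g            # combine state with target embedding (d_i)
    scores = d @ z.T             # (T, S) dot-product scores
    attn = softmax(scores, axis=-1)
    context = attn @ (z + e)     # condition on encoder outputs + input embeddings
    return context               # added back to the decoder layer state

T, S, dim = 4, 6, 8
rng = np.random.default_rng(1)
c = decoder_layer_attention(rng.normal(size=(T, dim)), rng.normal(size=(T, dim)),
                            rng.normal(size=(S, dim)), rng.normal(size=(S, dim)),
                            rng.normal(size=(dim, dim)), np.zeros(dim))
```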
1705.03122#38
1705.03122
39
a 4% slowdown when adding 4 attention modules. Most neural machine translation systems use only a single module. This demonstrates that attention is not the bottleneck in neural machine translation, even though it is quadratic in the sequence length (cf. Kalchbrenner et al., 2016). Part of the reason for the low impact on speed is that we batch the computation of an attention module over all target words, similar to Kalchbrenner et al. (2016). However, for RNNs, batching of the attention may be less effective because of the dependence on the previous time step.

Figure 2. Encoder and decoder with different number of layers.

# 5.6. Kernel size and Depth

Figure 2 shows accuracy when we change the number of layers in the encoder or decoder. The kernel width for layers in the encoder is 3 and for the decoder it is 5. Deeper architectures are particularly beneficial for the encoder but less so for the decoder. Decoder setups with two layers already perform well, whereas for the encoder accuracy keeps increasing steadily with more layers up to 9 layers, when it starts to plateau.
1705.03122#39
1705.03122
40
| Model | DUC-2004 RG-1(R) | DUC-2004 RG-2(R) | DUC-2004 RG-L(R) | Gigaword RG-1(F) | Gigaword RG-2(F) | Gigaword RG-L(F) |
|---|---|---|---|---|---|---|
| RNN MLE (Shen et al., 2016) | 24.92 | 8.60 | 22.25 | 32.67 | 15.23 | 30.56 |
| RNN MRT (Shen et al., 2016) | 30.41 | 10.87 | 26.79 | 36.54 | 16.59 | 33.44 |
| WFE (Suzuki & Nagata, 2017) | 32.28 | 10.54 | 27.80 | 36.30 | 17.31 | 33.88 |
| ConvS2S | 30.44 | 10.84 | 26.90 | 35.88 | 17.48 | 33.29 |

Table 6. Accuracy on two summarization tasks in terms of Rouge-1 (RG-1), Rouge-2 (RG-2), and Rouge-L (RG-L).

| Kernel width | Encoder layers: 5 | 9 | 13 |
|---|---|---|---|
| 3 | 20.61 | 21.17 | 21.63 |
| 5 | 20.80 | 21.02 | 21.42 |
| 7 | 20.81 | 21.30 | 21.09 |

Table 7. Encoder with different kernel width in terms of BLEU.
1705.03122#40
1705.03122
41
| Kernel width | Decoder layers: 3 | 5 | 7 |
|---|---|---|---|
| 3 | 21.10 | 21.71 | 21.62 |
| 5 | 21.09 | 21.63 | 21.24 |
| 7 | 21.40 | 21.31 | 21.33 |

Table 8. Decoder with different kernel width in terms of BLEU.

model structure. We expect our model to benefit from these improvements as well.

# 6. Conclusion and Future Work

We introduce the first fully convolutional model for sequence to sequence learning that outperforms strong recurrent models on very large benchmark datasets at an order of magnitude faster speed. Compared to recurrent networks, our convolutional approach allows us to discover compositional structure in the sequences more easily since representations are built hierarchically. Our model relies on gating and performs multiple attention steps.

Aside from increasing the depth of the networks, we can also change the kernel width. Table 7 shows that encoders with narrow kernels and many layers perform better than wider kernels. These networks can also be faster, since the amount of work to compute a kernel operating over 3 input elements is less than half that of kernels over 7 elements. We see a similar picture for decoder networks with large kernel sizes (Table 8). Dauphin et al. (2016) show that context sizes of 20 words are often sufficient to achieve very good accuracy on language modeling for English.
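A back-of-the-envelope check of the kernel-width cost claim in §5.6 above: per-position multiply-adds of a 1-D convolution scale linearly with the kernel width, so a width-3 kernel needs roughly 3/7 of the work of a width-7 kernel at equal channel sizes. The channel sizes below are illustrative only.

```python
def conv_cost(kernel_width, in_ch, out_ch):
    """Rough per-position multiply-add count of a 1-D convolution
    (ignoring the GLU gate and biases)."""
    return kernel_width * in_ch * out_ch

print(conv_cost(3, 512, 512) / conv_cost(7, 512, 512))  # ~0.43, less than half
```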
1705.03122#41
1705.03122
42
We achieve a new state of the art on several public translation benchmark data sets. On the WMT'16 English-Romanian task we outperform the previous best result by 1.9 BLEU, on WMT'14 English-French translation we improve over the LSTM model of Wu et al. (2016) by 1.6 BLEU in a comparable setting, and on WMT'14 English-German translation we outperform the same model by 0.5 BLEU. In future work, we would like to apply convolutional architectures to other sequence to sequence learning problems which may benefit from learning hierarchical representations as well.

# Acknowledgements

We thank Benjamin Graham for providing a fast 1-D convolution, and Ronan Collobert as well as Yann LeCun for helpful discussions related to this work.
1705.03122#42
1705.03122
43
# 5.7. Summarization

Finally, we evaluate our model on abstractive sentence summarization, which takes a long sentence as input and outputs a shortened version. The current best models on this task are recurrent neural networks which either optimize the evaluation metric (Shen et al., 2016) or address specific problems of summarization such as avoiding repeated generations (Suzuki & Nagata, 2017). We use standard likelihood training for our model and a simple model with six layers in the encoder and decoder each, hidden size 256, batch size 128, and we trained on a single GPU in one night. Table 6 shows that our likelihood-trained model outperforms the likelihood-trained model (RNN MLE) of Shen et al. (2016) and is not far behind the best models on this task, which benefit from task-specific optimization and

# References

Ba, Jimmy Lei, Kiros, Jamie Ryan, and Hinton, Geoffrey E. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
1705.03122#43
1705.03122
44
Bahdanau, Dzmitry, Cho, Kyunghyun, and Bengio, Yoshua. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Bojar, Ondřej, Chatterjee, Rajen, Federmann, Christian, Graham, Yvette, Haddow, Barry, Huck, Matthias, Jimeno-Yepes, Antonio, Koehn, Philipp, Logacheva, Varvara, Monz, Christof, Negri, Matteo, Névéol, Aurélie, Neves, Mariana L., Popel, Martin, Post, Matt, Rubino, Raphaël, Scarton, Carolina, Specia, Lucia, Turchi, Marco, Verspoor, Karin M., and Zampieri, Marcos. Findings of the 2016 conference on machine translation. In Proc. of WMT, 2016.

Bradbury, James, Merity, Stephen, Xiong, Caiming, and Socher, Richard. Quasi-Recurrent Neural Networks. arXiv preprint arXiv:1611.01576, 2016.
1705.03122#44
1705.03122
45
Cho, Kyunghyun, van Merriënboer, Bart, Gulcehre, Caglar, Bahdanau, Dzmitry, Bougares, Fethi, Schwenk, Holger, and Bengio, Yoshua. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In Proc. of EMNLP, 2014.

Chorowski, Jan K, Bahdanau, Dzmitry, Serdyuk, Dmitriy, Cho, Kyunghyun, and Bengio, Yoshua. Attention-based models for speech recognition. In Advances in Neural Information Processing Systems, pp. 577-585, 2015.

Collobert, Ronan, Kavukcuoglu, Koray, and Farabet, Clement. Torch7: A Matlab-like Environment for Machine Learning. In BigLearn, NIPS Workshop, 2011. URL http://torch.ch.

He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026-1034, 2015b.
1705.03122#45
1705.03122
46
Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of The 32nd International Conference on Machine Learning, pp. 448-456, 2015.

Jean, Sébastien, Firat, Orhan, Cho, Kyunghyun, Memisevic, Roland, and Bengio, Yoshua. Montreal Neural Machine Translation systems for WMT15. In Proc. of WMT, pp. 134-140, 2015.

Kalchbrenner, Nal, Espeholt, Lasse, Simonyan, Karen, van den Oord, Aaron, Graves, Alex, and Kavukcuoglu, Koray. Neural Machine Translation in Linear Time. arXiv, 2016.

LeCun, Yann and Bengio, Yoshua. Convolutional networks for images, speech, and time series. The Handbook of Brain Theory and Neural Networks, 3361(10):1995, 1995.
1705.03122#46
1705.03122
47
Dauphin, Yann N., Fan, Angela, Auli, Michael, and Grangier, David. Language modeling with gated linear units. arXiv preprint arXiv:1612.08083, 2016.

L'Hostis, Gurvan, Grangier, David, and Auli, Michael. Vocabulary Selection Strategies for Neural Machine Translation. arXiv preprint arXiv:1610.00072, 2016.

Dyer, Chris, Chahuneau, Victor, and Smith, Noah A. A Simple, Fast, and Effective Reparameterization of IBM Model 2. In Proc. of ACL, 2013.

Lin, Chin-Yew. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pp. 74-81, 2004.

Elman, Jeffrey L. Finding Structure in Time. Cognitive Science, 14:179-211, 1990.

Gehring, Jonas, Auli, Michael, Grangier, David, and Dauphin, Yann N. A Convolutional Encoder Model for Neural Machine Translation. arXiv preprint arXiv:1611.02344, 2016.
1705.03122#47
1705.03122
48
Glorot, Xavier and Bengio, Yoshua. Understanding the difficulty of training deep feedforward neural networks. In Proc. of AISTATS, 2010.

Luong, Minh-Thang, Pham, Hieu, and Manning, Christopher D. Effective approaches to attention-based neural machine translation. In Proc. of EMNLP, 2015.

Meng, Fandong, Lu, Zhengdong, Wang, Mingxuan, Li, Hang, Jiang, Wenbin, and Liu, Qun. Encoding Source Language with Convolutional Neural Network for Machine Translation. In Proc. of ACL, 2015.

Mi, Haitao, Wang, Zhiguo, and Ittycheriah, Abe. Vocabulary Manipulation for Neural Machine Translation. In Proc. of ACL, 2016.

Graff, David, Kong, Junbo, Chen, Ke, and Maeda, Kazuaki. English Gigaword. Linguistic Data Consortium, Philadelphia, 2003.

Ha, David, Dai, Andrew, and Le, Quoc V. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016.

He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep Residual Learning for Image Recognition. In Proc. of CVPR, 2015a.
1705.03122#48
1705.03122
49
Miller, Alexander H., Fisch, Adam, Dodge, Jesse, Karimi, Amir-Hossein, Bordes, Antoine, and Weston, Jason. Key-value memory networks for directly reading documents. In Proc. of EMNLP, 2016.

Nallapati, Ramesh, Zhou, Bowen, Gulcehre, Caglar, Xiang, Bing, et al. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proc. of EMNLP, 2016.

Oord, Aaron van den, Kalchbrenner, Nal, and Kavukcuoglu, Koray. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016a.

Sutskever, Ilya, Martens, James, Dahl, George E., and Hinton, Geoffrey E. On the importance of initialization and momentum in deep learning. In ICML, 2013.

Oord, Aaron van den, Kalchbrenner, Nal, Vinyals, Oriol, Espeholt, Lasse, Graves, Alex, and Kavukcuoglu, Koray. Conditional image generation with PixelCNN decoders. arXiv preprint arXiv:1606.05328, 2016b.
1705.03122#49
1705.03122
50
Sutskever, Ilya, Vinyals, Oriol, and Le, Quoc V. Sequence to Sequence Learning with Neural Networks. In Proc. of NIPS, pp. 3104-3112, 2014.

Over, Paul, Dang, Hoa, and Harman, Donna. DUC in context. Information Processing & Management, 43(6):1506-1520, 2007.

Suzuki, Jun and Nagata, Masaaki. Cutting-off redundant repeating generations for neural abstractive summarization. arXiv preprint arXiv:1701.00138, 2017.

Pascanu, Razvan, Mikolov, Tomas, and Bengio, Yoshua. On the difficulty of training recurrent neural networks. In Proceedings of The 30th International Conference on Machine Learning, pp. 1310-1318, 2013.

Waibel, Alex, Hanazawa, Toshiyuki, Hinton, Geoffrey, Shikano, Kiyohiro, and Lang, Kevin J. Phoneme Recognition using Time-delay Neural Networks. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(3):328-339, 1989.

Rush, Alexander M, Chopra, Sumit, and Weston, Jason. A neural attention model for abstractive sentence summarization. In Proc. of EMNLP, 2015.
1705.03122#50
1705.03122
51
Salimans, Tim and Kingma, Diederik P. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. arXiv preprint arXiv:1602.07868, 2016.

Schuster, Mike and Nakajima, Kaisuke. Japanese and Korean voice search. In Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on, pp. 5149-5152. IEEE, 2012.

Sennrich, Rico, Haddow, Barry, and Birch, Alexandra. Neural Machine Translation of Rare Words with Subword Units. In Proc. of ACL, 2016a.

Wu, Yonghui, Schuster, Mike, Chen, Zhifeng, Le, Quoc V, Norouzi, Mohammad, Macherey, Wolfgang, Krikun, Maxim, Cao, Yuan, Gao, Qin, Macherey, Klaus, et al. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. arXiv preprint arXiv:1609.08144, 2016.
1705.03122#51
1705.03122
52
Yang, Zichao, Hu, Zhiting, Deng, Yuntian, Dyer, Chris, and Smola, Alex. Neural Machine Translation with Recurrent Attention Modeling. arXiv preprint arXiv:1607.05108, 2016.

Zhou, Jie, Cao, Ying, Wang, Xuguang, Li, Peng, and Xu, Wei. Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation. arXiv preprint arXiv:1606.04199, 2016.

Sennrich, Rico, Haddow, Barry, and Birch, Alexandra. Edinburgh Neural Machine Translation Systems for WMT 16. In Proc. of WMT, 2016b.

Shazeer, Noam, Mirhoseini, Azalia, Maziarz, Krzysztof, Davis, Andy, Le, Quoc, Hinton, Geoffrey, and Dean, Jeff. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv e-prints, January 2016.

Shen, Shiqi, Zhao, Yu, Liu, Zhiyuan, Sun, Maosong, et al. Neural headline generation with sentence-wise optimization. arXiv preprint arXiv:1604.01904, 2016.
1705.03122#52
1705.03122
53
Srivastava, Nitish, Hinton, Geoffrey E., Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Dropout: a simple way to prevent Neural Networks from overfitting. JMLR, 15:1929-1958, 2014.

Sukhbaatar, Sainbayar, Weston, Jason, Fergus, Rob, and Szlam, Arthur. End-to-end Memory Networks. In Proc. of NIPS, pp. 2440-2448, 2015.

# A. Weight Initialization

We derive a weight initialization scheme tailored to the GLU activation function similar to Glorot & Bengio (2010); He et al. (2015b) by focusing on the variance of activations within the network for both forward and backward passes. We also detail how we modify the weight initialization for dropout.

With x ~ N(0, std(x)), this yields

E[\sigma(x)^2] \le \frac{1}{16} E[x^2] - \frac{1}{4} + \frac{1}{2}   (13)
= \frac{1}{16} Var[x] + \frac{1}{4}   (14)

With (7) and Var[y^a_{l-1}] = Var[y^b_{l-1}] = Var[y_{l-1}], this results in

# A.1. Forward Pass
1705.03122#53
1705.03122
54
Assuming that the inputs x_l of a convolutional layer l and its weights W_l are independent and identically distributed (i.i.d.), the variance of its output, computed as y_l = W_l x_l + b_l, is

Var[y_l] = n_l Var[w_l x_l]   (3)

where n_l is the number of inputs to the layer. For one-dimensional convolutional layers with kernel width k and input dimension c, this is kc. We adopt the notation in He et al. (2015b), i.e. y_l, w_l and x_l represent the random variables in y_l, W_l and x_l. With w_l and x_l independent from each other and normally distributed with zero mean, this amounts to

Var[y_l] = n_l Var[w_l] Var[x_l].   (4)

Var[x_l] \le \frac{1}{16} Var[y_{l-1}]^2 + \frac{1}{4} Var[y_{l-1}].   (15)

We initialize the embedding matrices in our network with small variances (around 0.01), which allows us to dismiss the quadratic term and approximate the GLU output variance with

Var[x_l] \approx \frac{1}{4} Var[y_{l-1}].   (16)
1705.03122#54
1705.03122
55
x_l is the result of the GLU activation function y^a_{l-1} \sigma(y^b_{l-1}) with y_{l-1} = (y^a_{l-1}, y^b_{l-1}), and y^a_{l-1}, y^b_{l-1} i.i.d. Next, we formulate upper and lower bounds in order to approximate Var[x_l]. If y_{l-1} follows a symmetric distribution with mean 0, then

Var[x_l] = Var[y^a_{l-1} \sigma(y^b_{l-1})]   (5)
= E[(y^a_{l-1} \sigma(y^b_{l-1}))^2] - E^2[y^a_{l-1} \sigma(y^b_{l-1})]   (6)

If L network layers of equal size and with GLU activations are combined, the variance of the final output y_L is given by

Var[y_L] \approx Var[y_1] \prod_{l=2}^{L} \frac{1}{4} n_l Var[w_l].   (17)

Following He et al. (2015b), we aim to satisfy the condition

\frac{1}{4} n_l Var[w_l] = 1, \forall l   (18)

so that the activations in a network are neither exponentially magnified nor reduced. This is achieved by initializing W_l from N(0, \sqrt{4/n_l}).
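A quick Monte Carlo check of approximation (16), assuming both GLU inputs are independent zero-mean Gaussians of equal variance (a sanity check of the derivation, not a result from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
for var in (0.01, 0.1, 1.0):
    a = rng.normal(0, np.sqrt(var), 1_000_000)
    b = rng.normal(0, np.sqrt(var), 1_000_000)
    glu = a / (1.0 + np.exp(-b))          # a * sigmoid(b)
    # ratio is ~0.25 for small variances and grows slightly for larger ones
    print(var, round(glu.var() / var, 3))
```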
1705.03122#55
1705.03122
56
= Var[y^a_{l-1}] E[\sigma(y^b_{l-1})^2].   (7)

A lower bound is given by \frac{1}{4} Var[y^a_{l-1}] when expanding (6) with E^2[\sigma(y^b_{l-1})] = 1/4:

Var[x_l] = Var[y^a_{l-1} \sigma(y^b_{l-1})]   (8)
= Var[y^a_{l-1}] E^2[\sigma(y^b_{l-1})] + Var[y^a_{l-1}] Var[\sigma(y^b_{l-1})]   (9)
= \frac{1}{4} Var[y^a_{l-1}] + Var[y^a_{l-1}] Var[\sigma(y^b_{l-1})]   (10)

# A.2. Backward Pass

The gradient of a convolutional layer is computed via back-propagation as \Delta x_l = \hat{W}_l^\top \Delta y_l. Considering separate gradients \Delta y^a_l and \Delta y^b_l for GLU, the gradient of x_l is given by

\Delta x_l = \hat{W}^a_l \Delta y^a_l + \hat{W}^b_l \Delta y^b_l.   (19)

\hat{W} corresponds to W with re-arranged weights to enable back-propagation. Analogously to the forward pass, \Delta x_l, \hat{w}_l and \Delta y_l represent the random variables for the values in \Delta x_l, \hat{W}_l and \Delta y_l, respectively. Note that W and \hat{W} contain the same values, i.e. \hat{w}_l = w_l. Similar to (3), the variance of \Delta x_l is
1705.03122#56
1705.03122
57
and Var[y^a_{l-1}] Var[\sigma(y^b_{l-1})] \ge 0. We utilize the relation \sigma(x)^2 \le \frac{1}{16} x^2 - \frac{1}{4} + \sigma(x) (Appendix B) to provide an upper bound on E[\sigma(x)^2]:

E[\sigma(x)^2] \le E[\frac{1}{16} x^2 - \frac{1}{4} + \sigma(x)]   (11)
= \frac{1}{16} E[x^2] - \frac{1}{4} + E[\sigma(x)]   (12)

Var[\Delta x_l] = \hat{n}_l (Var[\hat{w}^a_l] Var[\Delta y^a_l] + Var[\hat{w}^b_l] Var[\Delta y^b_l])   (20)

Here, \hat{n}_l is the number of inputs to layer l+1. The gradients for the GLU inputs are:

\Delta y^a_l = \Delta x_{l+1} \sigma(y^b_l)   and   (21)
\Delta y^b_l = \Delta x_{l+1} y^a_l \sigma'(y^b_l).   (22)

The approximation for the forward pass can be used for Var[\Delta y^a_l], and for estimating Var[\Delta y^b_l] we assume an upper bound on E[\sigma'(y^b_l)^2] of 1/16 since \sigma'(y^b_l) \in [0, 1/4]. Hence,
1705.03122#57
1705.03122
58
Var[\Delta y^a_l] - \frac{1}{4} Var[\Delta x_{l+1}] \le \frac{1}{16} Var[\Delta x_{l+1}] Var[y^b_l]   (23)
Var[\Delta y^b_l] \le \frac{1}{16} Var[\Delta x_{l+1}] Var[y^a_l]   (24)

We observe relatively small gradients in our network, typically around 0.001 at the start of training. Therefore, we approximate by discarding the quadratic terms above, i.e.

Var[\Delta y^a_l] \approx \frac{1}{4} Var[\Delta x_{l+1}]   (25)
Var[\Delta y^b_l] \approx 0   (26)
Var[\Delta x_l] \approx \frac{1}{4} \hat{n}_l Var[\hat{w}^a_l] Var[\Delta x_{l+1}]   (27)

of r and E[x] = 0, the variance after dropout is

Var[xr] = E[r]^2 Var[x] + Var[r] Var[x]   (29)
= \left(1 + \frac{1-p}{p}\right) Var[x]   (30)
= \frac{1}{p} Var[x]   (31)

Assuming that the input of a convolutional layer has been subject to dropout with a retain probability p, the variances of the forward and backward activations from §A.1 and §A.2 can now be approximated with

Var[x_{l+1}] \approx \frac{1}{4p} n_l Var[w_l] Var[x_l]   and   (32)
1705.03122#58
1705.03122
59
As for the forward pass, the above result can be generalized to backpropagation through many successive layers, resulting in

Var[\Delta x_2] \approx Var[\Delta x_{L+1}] \prod_{l=2}^{L} \frac{1}{4} \hat{n}_l Var[\hat{w}^a_l]   (28)

and a similar condition, i.e. \frac{1}{4} \hat{n}_l Var[\hat{w}^a_l] = 1. In the networks we consider, successions of convolutional layers usually operate on the same number of inputs, so that in most cases n_l = \hat{n}_l. Note that \hat{W}^b_l is discarded in the approximation; however, for the sake of consistency we use the same initialization for \hat{W}^a_l and \hat{W}^b_l.

Var[\Delta x_l] \approx \frac{1}{4p} \hat{n}_l Var[\hat{w}^a_l] Var[\Delta x_{l+1}]   (33)

This amounts to a modified initialization of W_l from a normal distribution with zero mean and a standard deviation of \sqrt{4p/n_l}. For layers without a succeeding GLU activation function, we initialize weights from N(0, \sqrt{p/n_l}) to calibrate for any immediately preceding dropout application.
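A small sketch of the initialization rule just stated, for a 1-D convolution with fan-in n = k * in_ch and dropout retain probability p applied to the layer's input; the function signature and tensor layout are illustrative, not the paper's code.

```python
import numpy as np

def init_conv_weight(k, in_ch, out_ch, p_retain=1.0, followed_by_glu=True, seed=0):
    """Draw weights from N(0, sqrt(4p/n)) before a GLU, else N(0, sqrt(p/n))."""
    n = k * in_ch                          # fan-in of the convolution
    scale = 4.0 if followed_by_glu else 1.0
    std = np.sqrt(scale * p_retain / n)
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, std, size=(out_ch, in_ch, k))

W = init_conv_weight(k=3, in_ch=512, out_ch=2 * 512, p_retain=0.9)  # GLU doubles channels
```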
1705.03122#59
1705.03122
60
For arbitrarily large variances of network inputs and activations, our approximations are invalid; in that case, the initial values for W_l^a and W_l^b would have to be balanced for the input distribution to be retained. Alternatively, methods that explicitly control the variance in the network, e.g. batch normalization (Ioffe & Szegedy, 2015) or layer normalization (Ba et al., 2016), could be employed.

The sigmoid function σ(x) can be expressed as a hyperbolic tangent by using the identity tanh(x) = 2σ(2x) − 1. The derivative of tanh is tanh'(x) = 1 − tanh^2(x), and with tanh(x) ∈ [0, 1], x ≥ 0 it holds that

tanh'(x) ≤ 1,  x ≥ 0                 (34)
∫_0^x tanh'(y) dy ≤ ∫_0^x 1 dy       (35)
tanh(x) ≤ x,  x ≥ 0                  (36)

We can express this relation with σ(x) as follows:

2σ(x) − 1 ≤ (1/2) x,  x ≥ 0          (37)

Both terms of this inequality have rotational symmetry w.r.t. 0, and thus

# A.3. Dropout
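A quick numeric check of the bounds in equations 36 and 37 (illustrative only, not from the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(0.0, 20.0, 10001)        # x >= 0

# tanh(x) <= x (eq. 36) and 2*sigmoid(x) - 1 = tanh(x/2) <= x/2 (eq. 37)
assert np.all(np.tanh(x) <= x + 1e-12)
assert np.all(2.0 * sigmoid(x) - 1.0 <= 0.5 * x + 1e-12)
print("bounds hold on the sampled grid")
```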
1705.03122#60
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
61
2σ(x) − 1 ≤ (1/2) x,  x ≥ 0          (37)

Both terms of this inequality have rotational symmetry w.r.t. 0, and thus

(2σ(x) − 1)^2 ≤ ((1/2) x)^2          (38)

σ(x)^2 ≤ (1/16) x^2 − 1/4 + σ(x).    (39)

# A.3. Dropout

Dropout retains activations in a neural network with a probability p and sets them to zero otherwise (Srivastava et al., 2014). It is common practice to scale the retained activations by 1/p during training so that the weights of the network do not have to be modified at test time when p is set to 1. In this case, dropout amounts to multiplying activations x by a Bernoulli random variable r where Pr[r = 1/p] = p and Pr[r = 0] = 1 − p (Srivastava et al., 2014). It holds that E[r] = 1 and Var[r] = (1 − p)/p. If x is independent

# C. Attention Visualization
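The squared-sigmoid bound (39) and the stated moments of the inverted-dropout variable r can likewise be checked numerically; a minimal sketch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Bound (39): sigmoid(x)^2 <= x^2/16 - 1/4 + sigmoid(x), for all x.
x = np.linspace(-20.0, 20.0, 20001)
assert np.all(sigmoid(x) ** 2 <= x ** 2 / 16.0 - 0.25 + sigmoid(x) + 1e-12)

# Moments of the inverted-dropout variable r in {0, 1/p}: E[r] = 1, Var[r] = (1-p)/p.
p = 0.8
rng = np.random.default_rng(0)
r = rng.binomial(1, p, 1_000_000) / p
print(r.mean(), r.var(), (1 - p) / p)
```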
1705.03122#61
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
63
and 6 exhibit a linear alignment. The first layer shows the clearest alignment, although it is slightly off and frequently attends to the corresponding source word of the previously generated target word. Layers 2 and 8 lack a clear structure and are presumably collecting information about the whole source sentence. The fourth layer shows high alignment scores on nouns such as "festival", "way" and "work" for both the generated target nouns as well as their preceding words. Note that in German, those preceding words depend on gender and object relationship of the respective noun. Finally, the attention scores in layer 5 and 7 focus on "built", which is reordered in the German translation and is moved from the beginning to the very end of the sentence. One interpretation for this is that as generation progresses, the model repeatedly tries to perform the re-ordering. "aufgebaut" can be generated after a noun or pronoun only, which is reflected in the higher scores at positions 2, 5, 8, 11 and 13.
1705.03122#63
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
64
[Figure 3, panels for decoder layers 1–3: attention heatmaps for the example sentence pair; one axis lists the words "We built this festival as a way of continuing to work with them . </s>" at positions 1–15, the other axis the positions of the corresponding translation.]
1705.03122#64
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
65
[Figure 3, panels for decoder layers 4–6: attention heatmaps for the same sentence pair, with the words "We built this festival as a way of continuing to work with them . </s>" at positions 1–15 on one axis and the translation positions on the other.]
1705.03122#65
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
66
[Figure 3, continuation: remaining rows of the layer 4–6 panels and the beginning of the layer 7–8 panels for the same sentence pair.]
1705.03122#66
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
67
[Figure 3, panels for decoder layers 7–8: attention heatmaps for the same sentence pair.] Figure 3. Attention scores for different decoder layers
1705.03122#67
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.00652
0
# Efficient Natural Language Response Suggestion for Smart Reply MATTHEW HENDERSON, RAMI AL-RFOU, BRIAN STROPE, YUN-HSUAN SUNG, LASZLO LUKACS, RUIQI GUO, SANJIV KUMAR, BALINT MIKLOS, and RAY KURZWEIL, Google This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency. Additional Key Words and Phrases: Natural Language Understanding; Deep Learning; Semantics; Email # 1 INTRODUCTION Applications of natural language understanding (NLU) are becoming increasingly interesting with scalable machine learning, web-scale training datasets, and applications that enable fast and nuanced quality evaluations with large numbers of user interactions.
1705.00652#0
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00648
1
William Yang Wang Department of Computer Science University of California, Santa Barbara Santa Barbara, CA 93106 USA [email protected] # Abstract Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news have been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present LIAR: a new, publicly available dataset for fake news detection. We collected a decade-long, 12.8K manually labeled short statements in various contexts from POLITIFACT.COM, which provides detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than the previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate meta-data with text. We show that this hybrid approach can improve a text-only deep learning model. # Introduction
1705.00648#1
"Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection
Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news has been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present liar: a new, publicly available dataset for fake news detection. We collected a decade-long, 12.8K manually labeled short statements in various contexts from PolitiFact.com, which provides detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate meta-data with text. We show that this hybrid approach can improve a text-only deep learning model.
http://arxiv.org/pdf/1705.00648
William Yang Wang
cs.CL, cs.CY
ACL 2017
null
cs.CL
20170501
20170501
[]
1705.00652
1
Applications of natural language understanding (NLU) are becoming increasingly interesting with scalable machine learning, web-scale training datasets, and applications that enable fast and nuanced quality evaluations with large numbers of user interactions. Early NLU systems parsed natural language with hand-crafted rules to explicit semantic representations, and used manually written state machines to generate specific responses from the output of parsing [18]. Such systems are generally limited to the situations imagined by the designer, and much of the development work involves writing more rules to improve the robustness of semantic parsing and the coverage of the state machines. These systems are brittle, and progress is slow [31]. Eventually adding more parsing rules and response strategies becomes too complicated for a single designer to manage, and dependencies between the rules become challenging to coordinate across larger teams. Often the best solution is to keep the domains decidedly narrow. Statistical systems can offer a more forgiving path by learning implicit trade-offs, generalizations, and robust behaviors from data. For example, neural network models have been used to learn more robust parsers [14, 24, 29]. In recent work, the components of task-oriented dialog systems have been implemented as neural networks, enabling joint learning of robust models [7, 26, 27]. However these methods all rely on either an explicit semantic representation or an explicit representation of the task, always hand-crafted.
1705.00652#1
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00648
2
# Introduction In this past election cycle for the 45th President of the United States, the world has witnessed a growing epidemic of fake news. The plague of fake news not only poses serious threats to the integrity of journalism, but has also created turmoil in the political world. The worst real-world impact is that fake news seems to create real-life fears: last year, a man carried an AR-15 rifle and walked in a Washington DC Pizzeria, because he recently read online that “this pizzeria was harboring young children as sex slaves as part of a child-abuse ring led by Hillary Clinton”1. The man was later arrested by police, and he was charged for firing an assault rifle in the restaurant (Kang and Goldman, 2016).
1705.00648#2
"Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection
Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news has been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present liar: a new, publicly available dataset for fake news detection. We collected a decade-long, 12.8K manually labeled short statements in various contexts from PolitiFact.com, which provides detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate meta-data with text. We show that this hybrid approach can improve a text-only deep learning model.
http://arxiv.org/pdf/1705.00648
William Yang Wang
cs.CL, cs.CY
ACL 2017
null
cs.CL
20170501
20170501
[]
1705.00652
2
End-to-end systems avoid using hand-crafted explicit representations, by learning to map to and from natural language via implicit internal vector representations [19, 25]. Such systems avoid the unnecessary constraints and bottlenecks inevitably imposed by the system designer. In that context, natural language understanding might be evaluated less in terms of an explicit semantic representation, and more by the utility of the system itself. The system shows evidence of understanding when it offers useful responses. Such end-to-end tasks are difficult: systems not only need to learn language but also must learn to do something useful with it. This paper addresses the task of suggesting responses in human-to-human conversations. There are further challenges that arise when building an end-to-end dialog system, i.e. a computer agent that interacts directly with a human user. Dialog systems must learn effective and robust interaction strategies, and goal-oriented systems may need to interact with discrete external databases. Dialog systems must also learn to be consistent throughout the course of a dialog, maintaining some kind of memory from turn to turn.
1705.00652#2
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00648
3
The broadly-related problem of deception detection (Mihalcea and Strapparava, 2009) is not new to the natural language processing community. A relatively early study by Ott et al. (2011) focuses on detecting deceptive review opinions in sentiment analysis, using a crowdsourcing approach to create training data for the positive class, and then combining them with truthful opinions from TripAdvisor. Recent studies have also proposed stylometric (Feng et al., 2012), semi-supervised learning (Hai et al., 2016), and linguistic approaches (Pérez-Rosas and Mihalcea, 2015) to detect deceptive text on crowdsourced datasets. Even though crowdsourcing is an important approach to create labeled training data, there is a mismatch between training and testing. When testing on real-world review datasets, the results could be suboptimal since the positive training data was created in a completely different, simulated platform.
1705.00648#3
"Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection
Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news has been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present liar: a new, publicly available dataset for fake news detection. We collected a decade-long, 12.8K manually labeled short statements in various contexts from PolitiFact.com, which provides detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate meta-data with text. We show that this hybrid approach can improve a text-only deep learning model.
http://arxiv.org/pdf/1705.00648
William Yang Wang
cs.CL, cs.CY
ACL 2017
null
cs.CL
20170501
20170501
[]
1705.00652
3
Machine learning requires huge amounts of data, and lots of helpful users to guide development through live interactions, but we also need to make some architectural decisions, in particular how to represent natural language text. Neural natural language understanding models typically represent words, and possibly phrases, sentences, and documents as implicit vectors. Vector representations of words, or word embeddings, have been widely adopted, particularly since the introduction of efficient computational learning algorithms that can derive meaningful embeddings from unlabeled text [15, 17, 20]. Though a simple representation of a sequence of words can be obtained by summing the individual word embeddings, this discards information about the word ordering. The sequence-to-sequence (Seq2Seq) framework uses recurrent neural networks (RNNs), typically long short-term memory (LSTM) networks, to encode sequences of word embeddings into representations that depend on the order, and uses a decoder RNN to generate output sequences word by word. This framework provides a direct path for end-to-end learning [23]. With attention mechanisms and more layers, these systems are revolutionizing the field of machine translation [28]. A similar system was initially used to deploy Google’s Smart Reply system for Inbox by Gmail [11].
1705.00652#3
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00648
4
The problem of fake news detection is more challenging than detecting deceptive reviews, since the political language on TV interviews, posts on Facebook and Twitter are mostly short statements. However, the lack of manually labeled fake news datasets is still a bottleneck for advancing computationally intensive, broad-coverage models in this direction. Vlachos and Riedel (2014) are the first to release a public fake news detection and fact-checking dataset, but it only includes 221 statements, which does not permit machine learning based assessments. To address these issues, we introduce the LIAR 1http://www.nytimes.com/2016/12/05/business/media/comet-ping-pong-pizza-shooting-fake-news-consequences.html
1705.00648#4
"Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection
Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news has been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present liar: a new, publicly available dataset for fake news detection. We collected a decade-long, 12.8K manually labeled short statements in various contexts from PolitiFact.com, which provides detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate meta-data with text. We show that this hybrid approach can improve a text-only deep learning model.
http://arxiv.org/pdf/1705.00648
William Yang Wang
cs.CL, cs.CY
ACL 2017
null
cs.CL
20170501
20170501
[]
1705.00648
5
dataset, which includes 12,836 short statements labeled for truthfulness, subject, context/venue, speaker, state, party, and prior history. With such volume and a time span of a decade, LIAR is an order of magnitude larger than the currently available resources (Vlachos and Riedel, 2014; Ferreira and Vlachos, 2016) of similar type. Additionally, in contrast to crowdsourced datasets, the instances in LIAR are collected in a grounded, more natural context, such as political debate, TV ads, Facebook posts, tweets, interview, news release, etc. In each case, the labeler provides a lengthy analysis report to ground each judgment, and the links to all supporting documents are also provided. Empirically, we have evaluated several popular learning based methods on this dataset. The baselines include logistic regression, support vector machines, long short-term memory networks (Hochreiter and Schmidhuber, 1997), and a convolutional neural network model (Kim, 2014). We further introduce a neural network architecture to integrate text and meta-data. Our experiment suggests that this approach improves the performance of a strong text-only convolutional neural networks baseline.
1705.00648#5
"Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection
Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news has been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present liar: a new, publicly available dataset for fake news detection. We collected a decade-long, 12.8K manually labeled short statements in various contexts from PolitiFact.com, which provides detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate meta-data with text. We show that this hybrid approach can improve a text-only deep learning model.
http://arxiv.org/pdf/1705.00648
William Yang Wang
cs.CL, cs.CY
ACL 2017
null
cs.CL
20170501
20170501
[]
1705.00652
5
In a broader context, Kurzweil’s work outlines a path to create a simulation of the human neocortex (the outer layer of the brain where we do much of our thinking) by building a hierarchy of similarly structured components that encode increasingly abstract ideas as sequences [12]. Kurzweil provides evidence that the neocortex is a self-organizing hierarchy of modules, each of which can learn, remember, recognize and/or generate a sequence, in which each sequence consists of a sequential pattern from lower-level modules. Longer relationships (between elements that are far away in time or spatial distance) are modeled by the hierarchy itself. In this work we adopt such a hierarchical structure, representing each sequential model as a feed-forward vector computation (with underlying sequences implicitly represented using n-grams). Whereas a long short-term memory (LSTM) network could also model such sequences, we don’t need an LSTM’s ability to directly encode long-term relationships (since the hierarchy does that) and LSTMs are much slower than feed-forward networks for training and inference since the computation scales with the length of the sequence.
1705.00652#5
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00648
6
# 2 LIAR: a New Benchmark Dataset The major resources for deceptive detection of reviews are crowdsourced datasets (Ott et al., 2011; Pérez-Rosas and Mihalcea, 2015). They are very useful datasets to study deception detection, but the positive training data are collected from a simulated environment. More importantly, these datasets are not suitable for fake statements detection, since the fake news on TVs and social media are much shorter than customer reviews. Vlachos and Riedel (2014) are the first to construct fake news and fact-checking datasets. They obtained 221 statements from CHANNEL 42 and POLITIFACT.COM3, a Pulitzer Prize-winning website. In particular, PolitiFact covers a wide range of political topics, and they provide detailed judgments with fine-grained labels. Recently, Ferreira and Vlachos (2016) have released the Emergent dataset, which includes 300 labeled rumors from PolitiFact. However, with less than a thousand samples, it is impractical to use these datasets as benchmarks for developing and evaluating machine learning algorithms for fake news detection. # 2http://blogs.channel4.com/factcheck/ 3http://www.politifact.com/
1705.00648#6
"Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection
Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news has been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present liar: a new, publicly available dataset for fake news detection. We collected a decade-long, 12.8K manually labeled short statements in various contexts from PolitiFact.com, which provides detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate meta-data with text. We show that this hybrid approach can improve a text-only deep learning model.
http://arxiv.org/pdf/1705.00648
William Yang Wang
cs.CL, cs.CY
ACL 2017
null
cs.CL
20170501
20170501
[]
1705.00652
6
Similarly, the work on paragraph vectors shows that word embeddings can be back-propagated to arbitrary levels in a contextual hierarchy [13]. Machines can optimize sentence vectors, paragraph vectors, chapter vectors, book vectors, author vectors, and so on, with simple back-propagation and computationally efficient feed-forward networks. Putting a few of these ideas together, we wondered if we could predict a sentence using only the sum of its n-gram embeddings. Without the ordering of the words, can we use the limited sequence information from the n-grams, and the redundancy of language, to recreate the original word sequence? With a simple RNN as a decoder, our preliminary experiments showed perplexities of around 1.2 over a vocabulary of hundreds of thousands of words. A lot of the sequence information remains in this simple n-gram sentence representation. As a corollary, a hierarchy built on top of n-gram representations could indeed adequately represent the increasingly abstract sequences underlying natural language. Networks built on n-gram embeddings such as those presented in this
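A bag-of-n-grams sentence encoding of the kind described above can be sketched in a few lines. The hashing scheme, vocabulary size, and dimensionality here are illustrative assumptions, and the embedding table is random rather than learned:

```python
import numpy as np

DIM = 64
VOCAB_BUCKETS = 100_000
rng = np.random.default_rng(0)
# Hashed n-gram embedding table (would normally be learned, not random).
embedding_table = rng.normal(0.0, 0.1, (VOCAB_BUCKETS, DIM)).astype(np.float32)

def ngrams(tokens, max_n=2):
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            yield " ".join(tokens[i:i + n])

def encode(text):
    """Represent a message as the sum of its (hashed) n-gram embeddings."""
    tokens = text.lower().split()
    # Python's built-in hash is used only for illustration.
    ids = [hash(g) % VOCAB_BUCKETS for g in ngrams(tokens)]
    return embedding_table[ids].sum(axis=0)

vec = encode("Do you think the abstract looks okay ?")
print(vec.shape)   # (64,)
```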
1705.00652#6
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00648
7
# 2http://blogs.channel4.com/factcheck/ 3http://www.politifact.com/ Statement: “The last quarter, it was just announced, our gross domestic product was below zero. Who ever heard of this? It’s never below zero.” Speaker: Donald Trump Context: presidential announcement speech Label: Pants on Fire Justification: According to Bureau of Economic Analysis and National Bureau of Economic Research, the growth in the gross domestic product has been below zero 42 times over 68 years. That’s a lot more than “never.” We rate his claim Pants on Fire! Statement: “Newly Elected Republican Senators Sign Pledge to Eliminate Food Stamp Program in 2015.” Speaker: Facebook posts Context: social media posting Label: Pants on Fire Justification: More than 115,000 social media users passed along a story headlined, “Newly Elected Republican Senators Sign Pledge to Eliminate Food Stamp Program in 2015.” But they failed to do due diligence and were snookered, since the story came from a publication that bills itself (quietly) as a “satirical, parody website.” We rate the claim Pants on Fire.
1705.00648#7
"Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection
Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news has been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present liar: a new, publicly available dataset for fake news detection. We collected a decade-long, 12.8K manually labeled short statements in various contexts from PolitiFact.com, which provides detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate meta-data with text. We show that this hybrid approach can improve a text-only deep learning model.
http://arxiv.org/pdf/1705.00648
William Yang Wang
cs.CL, cs.CY
ACL 2017
null
cs.CL
20170501
20170501
[]
1705.00652
7
paper (see section 4) are computationally inexpensive relative to RNN and convolutional network [6, 30] encoders. To make sure there is enough data and the necessary live feedback from users, we train on the anonymized Gmail data that was used in Kannan et al. [11], and use our models to give Smart Reply response suggestions to users of Inbox by Gmail (see figure 1). Smart Reply provides a real world application in which we can measure the quality of our response suggestion models. Just as in Kannan et al. [11], we consider natural language response suggestion from a fixed set of candidates. For efficiency, we frame this as a search problem. Inputs are combined with potential responses using final dot products to enable precomputation of the “response side” of the system. Adding deep layers and delaying combination between input and responses encourages the network to derive implicit semantic representations of the input and responses— if we assume that the best way to predict their relationships is to understand them. We precompute a minimal hierarchy of deep feed-forward networks for all potential responses, and at runtime propagate only the input through the hierarchical network. We
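The precompute-then-dot-product idea can be illustrated with a toy search. Here `encode` stands in for the learned input-side network and the response vectors for the precomputed response-side outputs; both are random placeholders, not the production models:

```python
import numpy as np

def suggest(encode_message, response_vectors, responses, email_text, top_n=3):
    """Score precomputed response vectors against an encoded input email
    with a single matrix-vector product and return the top suggestions."""
    h_x = encode_message(email_text)          # input-side network, run online
    scores = response_vectors @ h_x           # dot products with cached response vectors
    best = np.argsort(-scores)[:top_n]
    return [(responses[i], float(scores[i])) for i in best]

# Toy example with random stand-ins; in practice the vectors come from trained networks.
rng = np.random.default_rng(0)
responses = ["Sounds good.", "I can't make it.", "Thanks!"]
response_vectors = rng.normal(size=(len(responses), 8))   # precomputed offline
encode = lambda text: rng.normal(size=8)                  # placeholder encoder
print(suggest(encode, response_vectors, responses, "Lunch tomorrow?"))
```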
1705.00652#7
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00648
8
Statement: “Under the health care law, everybody will have lower rates, better quality care and better access.” Speaker: Nancy Pelosi Context: on ’Meet the Press’ Label: False Justification: Even the study that Pelosi’s staff cited as the source of that statement suggested that some people would pay more for health insurance. Analysis at the state level found the same thing. The general understanding of the word “everybody” is every person. The predictions don’t back that up. We rule this statement False. Figure 1: Some random excerpts from the LIAR dataset. Table 1: The LIAR dataset statistics. Training set size: 10,269; Validation set size: 1,284; Testing set size: 1,283; Avg. statement length (tokens): 17.9; Top-3 speaker affiliations: Democrats 4,150, Republicans 5,687, None (e.g., FB posts) 2,185. Therefore, it is of crucial significance to introduce a larger dataset to facilitate the development of computational approaches to fake news detection and automatic fact-checking.
1705.00648#8
"Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection
Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news has been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present liar: a new, publicly available dataset for fake news detection. We collected a decade-long, 12.8K manually labeled short statements in various contexts from PolitiFact.com, which provides detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate meta-data with text. We show that this hybrid approach can improve a text-only deep learning model.
http://arxiv.org/pdf/1705.00648
William Yang Wang
cs.CL, cs.CY
ACL 2017
null
cs.CL
20170501
20170501
[]
1705.00652
8
precompute a minimal hierarchy of deep feed-forward networks for all potential responses, and at runtime propagate only the input through the hierarchical network. We use an efficient nearest-neighbor search of the hierarchical embeddings of the responses to find the best suggestions. # 2 PROBLEM DEFINITION The Smart Reply system gives short response suggestions to help users respond quickly to emails. Emails are processed by the system according to the pipeline detailed in figure 2. The decision of whether to give suggestions is made by a deep neural network classifier, called the triggering model. This model takes various features of the received email, including a word n-gram representation, and is trained to estimate the probability that the user would type a short reply to the input email, see Kannan et al. [11]. If the output of the triggering model is above a threshold, then Smart Reply will give m (typically 3) short response suggestions for the email. Otherwise no suggestions are given. As a result, suggestions are not shown for emails where a response is not likely (e.g. spam, newsletters, and promotional emails), reducing clutter in the user interface and saving unnecessary computation. The system is restricted to a fixed set of response
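The triggering-then-selection flow described above reduces to a simple gate; the function names and threshold below are placeholders, not the real system's API:

```python
def smart_reply(email, trigger_prob, select_responses, threshold=0.5, m=3):
    """Only run response selection when the triggering model predicts a short
    reply is likely; otherwise show no suggestions."""
    if trigger_prob(email) < threshold:
        return []                          # e.g. spam, newsletters, promotions
    ranked = select_responses(email)       # list of (response, score), best first
    return [response for response, _ in ranked[:m]]

# Toy usage with stand-in models.
demo = smart_reply(
    "Lunch tomorrow?",
    trigger_prob=lambda e: 0.9,
    select_responses=lambda e: [("Sounds good!", 0.8), ("Sorry, I can't.", 0.6),
                                ("What time?", 0.5), ("Maybe.", 0.1)],
)
print(demo)
```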
1705.00652#8
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00648
9
We show some random snippets from our dataset. The LIAR dataset4 includes 12.8K human labeled short statements from POLITIFACT.COM’s API5, and each statement is evaluated by a POLITIFACT.COM editor for its truthfulness. After initial analysis, we found duplicate labels, and merged the full-flop, half-flip, no-flip labels into false, half-true, true labels respectively. We consider six fine-grained labels for the truthfulness ratings: pants-fire, false, barely-true, half-true, mostly-true, and true. The distribution of labels in the LIAR dataset is relatively well-balanced: except for 1,050 pants-fire cases, the instances for all other labels range from 2,063 to 2,638. We randomly sampled 200 instances to examine the accompanied lengthy analysis reports and rulings. Note that fact-checking is not a classic labeling task in NLP. The verdict requires extensive training in journalism for finding relevant evidence. Therefore, for second-stage verifications, we went
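The label merging described above can be expressed as a small mapping; the raw label strings are assumptions about formatting, not taken from the released files:

```python
# Collapse the duplicate "flip" ratings into the six truthfulness labels.
LABEL_MERGE = {"full-flop": "false", "half-flip": "half-true", "no-flip": "true"}
SIX_LABELS = ["pants-fire", "false", "barely-true", "half-true", "mostly-true", "true"]

def normalize_label(raw_label: str) -> str:
    label = LABEL_MERGE.get(raw_label, raw_label)
    assert label in SIX_LABELS, f"unexpected label: {label}"
    return label

print(normalize_label("half-flip"))   # half-true
```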
1705.00648#9
"Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection
Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news has been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present liar: a new, publicly available dataset for fake news detection. We collected a decade-long, 12.8K manually labeled short statements in various contexts from PolitiFact.com, which provides detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate meta-data with text. We show that this hybrid approach can improve a text-only deep learning model.
http://arxiv.org/pdf/1705.00648
William Yang Wang
cs.CL, cs.CY
ACL 2017
null
cs.CL
20170501
20170501
[]
1705.00652
9
spam, newsletters, and promotional emails), reducing clutter in the user interface and saving unnecessary computation. The system is restricted to a fixed set of response suggestions, R, selected from millions of common messages. The response selection step involves searching for the top N (typically around 100) scoring responses in R according to a response selection model P(y | x). The output of response selection is a list of suggestions (y_1, y_2, ..., y_N) with y_i ∈ R ordered by their probability. Kannan et al. [11] used a sequence-to-sequence model for P(y | x) and used a beam search over the [Figure 1 screenshot: a received email "Do you think the abstract looks okay?" with the suggested replies "I think it's fine.", "Looks good to me.", and "It needs some work."] Fig. 1. Our natural language understanding models are trained on email data, and evaluated in the context of the Smart Reply feature of Inbox by Gmail, pictured here.
1705.00652#9
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00652
10
[Figure 2 diagram: a new email x passes the triggering model ("Suggestions?"); if triggered, response selection searches the response set R, producing candidates (y_1, ..., y_N); diversification and clustering pick the Smart Reply suggestions that are shown.] Fig. 2. The Smart Reply pipeline. A received email is run through the triggering model that decides whether suggestions should be given. Response selection searches the response set for good suggestions. Finally, diversification ensures diversity in the final set shown to the user. This paper focuses on the response selection step. prefixes in R (see section 3). This paper presents a feed-forward neural network model for P(y | x), including a factorized dot-product model where selection can be performed using a highly efficient and accurate approximate search over a precomputed set of vectors, see section 4. Finally the diversification stage ensures diversity in the final m response suggestions. A clustering algorithm is used to omit redundant suggestions, and a labeling of R is used to ensure a negative suggestion is given if the other two are affirmative and vice-versa. Full details are given in Kannan et al. [11]. # 3 BASELINE SEQUENCE-TO-SEQUENCE SCORING
1705.00652#10
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00648
11
The speakers in the LIAR dataset include a mix of democrats and republicans, as well as a significant amount of posts from online social media. We include a rich set of meta-data for each speaker—in addition to party affiliations, current 4https://www.cs.ucsb.edu/˜william/data/liar_dataset.zip 5http://static.politifact.com/api/v2apidoc.html [Figure 2 diagram: word embeddings of the statement feed a ConvNet layer with max-pooling; the meta-data tokens (e.g. “trump”, “republican”) feed a ConvNet followed by a Bi-LSTM; the two representations are concatenated and passed to a fully connected layer with a softmax predictor.] Figure 2: The proposed hybrid Convolutional Neural Networks framework for integrating text and meta-data.
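A minimal sketch of a hybrid text-plus-meta-data classifier in the spirit of the Figure 2 caption; layer sizes, pooling choices, and the way the two branches are fused are illustrative assumptions, not the paper's exact architecture or hyper-parameters:

```python
import torch
import torch.nn as nn

class HybridFakeNewsModel(nn.Module):
    """Text CNN branch plus a meta-data ConvNet/Bi-LSTM-style branch, concatenated
    and classified over six labels (sizes are assumptions, not the paper's)."""
    def __init__(self, vocab_size, meta_vocab_size, emb_dim=100, n_filters=128, n_classes=6):
        super().__init__()
        self.text_emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.meta_emb = nn.Embedding(meta_vocab_size, emb_dim)
        self.meta_rnn = nn.LSTM(emb_dim, 64, batch_first=True, bidirectional=True)
        self.out = nn.Linear(n_filters + 2 * 64, n_classes)

    def forward(self, text_ids, meta_ids):
        t = self.text_emb(text_ids).transpose(1, 2)        # (B, emb, T)
        t = torch.relu(self.conv(t)).max(dim=2).values     # max-pool over time
        m, _ = self.meta_rnn(self.meta_emb(meta_ids))      # (B, T_meta, 128)
        m = m.mean(dim=1)                                  # simple pooling of Bi-LSTM states
        return self.out(torch.cat([t, m], dim=1))          # logits over the six labels

model = HybridFakeNewsModel(vocab_size=20000, meta_vocab_size=500)
logits = model(torch.randint(0, 20000, (2, 30)), torch.randint(0, 500, (2, 5)))
print(logits.shape)   # torch.Size([2, 6])
```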
1705.00648#11
"Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection
Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news has been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present liar: a new, publicly available dataset for fake news detection. We collected a decade-long, 12.8K manually labeled short statements in various contexts from PolitiFact.com, which provides detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate meta-data with text. We show that this hybrid approach can improve a text-only deep learning model.
http://arxiv.org/pdf/1705.00648
William Yang Wang
cs.CL, cs.CY
ACL 2017
null
cs.CL
20170501
20170501
[]
1705.00652
11
# 3 BASELINE SEQUENCE-TO-SEQUENCE SCORING The response selection model presented in Kannan et al. [11] is a long short-term memory (LSTM) recurrent neural network [8] — an application of the sequence-to-sequence learning framework (Seq2Seq) [23]. The input email x is tokenized into a word sequence (x_1, ..., x_m) and the LSTM computes the conditional probability over a response sequence y = (y_1, ..., y_n) as:

P(y | x) = P(y_1, ..., y_n | x_1, ..., x_m) = ∏_{i=1}^{n} P_LSTM(y_i | x_1, ..., x_m, y_1, ..., y_{i-1})

where P_LSTM is the output of the word-level LSTM. The LSTM is trained to maximize the log-probability according to P(y | x) of the training data (a large collection of emails and responses, see section 5.1). At inference time, likely responses from the candidate set R are found using a beam search that is restricted to the prefix trie of R. The time complexity of this search is O(|x| + b|y|) where b is the beam width and should be scaled appropriately with |R|. This search dominates the computation of the original Smart Reply system. # 4 FEEDFORWARD APPROACH
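Ranking a fixed candidate set with such a conditional model amounts to summing per-token log-probabilities. The sketch below scores candidates exhaustively rather than with the prefix-trie beam search, and `log_prob_next_token` is a placeholder for the word-level LSTM, not part of the paper's code:

```python
def score_response(log_prob_next_token, x_tokens, y_tokens):
    """log P(y | x) = sum_i log P_LSTM(y_i | x, y_1..y_{i-1})."""
    total = 0.0
    for i, y_i in enumerate(y_tokens):
        total += log_prob_next_token(x_tokens, y_tokens[:i], y_i)
    return total

def rank_candidates(log_prob_next_token, x_tokens, candidate_set):
    scored = [(y, score_response(log_prob_next_token, x_tokens, y)) for y in candidate_set]
    return sorted(scored, key=lambda t: -t[1])

# Toy stand-in model: assigns -1 per token, so shorter responses score higher.
toy_lm = lambda x, prefix, token: -1.0
print(rank_candidates(toy_lm, ["hi"], [["ok"], ["sounds", "good"]]))
```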
1705.00652#11
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00648
12
Figure 2: The proposed hybrid Convolutional Neural Networks framework for integrating text and meta-data.

job, home state, and credit history are also provided. In particular, the credit history includes the historical counts of inaccurate statements for each speaker. For example, Mitt Romney has a credit history vector h = {19, 32, 34, 58, 33}, which corresponds to his counts of “pants on fire”, “false”, “barely true”, “half true”, “mostly true” for historical statements. Since this vector also includes the count for the current statement, it is important to subtract the current label from the credit history when using this meta-data vector in prediction experiments. These statements are sampled from a variety of contexts/venues, and the top categories include news releases, TV/radio interviews, campaign speeches, TV ads, tweets, debates, Facebook posts, etc. To ensure a broad coverage of the topics, there is also a diverse set of subjects discussed by the speakers. The top-10 most discussed subjects in the dataset are economy, healthcare, taxes, federal-budget, education, jobs, state-budget, candidates-biography, elections, and immigration.

# 3 Automatic Fake News Detection
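As an illustration of the credit-history correction described above, the snippet below subtracts the current statement's own label count before using the vector as a feature; the label ordering and the chosen index are assumptions made only for this example.

```python
import numpy as np

# Assumed label ordering: pants-fire, false, barely-true, half-true, mostly-true.
def history_feature(credit_history, current_label_index):
    """Remove the current statement's own label count from the speaker's credit history."""
    h = np.array(credit_history, dtype=np.float32)
    h[current_label_index] -= 1  # the raw vector includes the statement being classified
    return h

# Mitt Romney's vector from the text, assuming (hypothetically) the current label is "false".
print(history_feature([19, 32, 34, 58, 33], current_label_index=1))
```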
1705.00648#12
"Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection
Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news has been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present liar: a new, publicly available dataset for fake news detection. We collected a decade-long, 12.8K manually labeled short statements in various contexts from PolitiFact.com, which provides detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate meta-data with text. We show that this hybrid approach can improve a text-only deep learning model.
http://arxiv.org/pdf/1705.00648
William Yang Wang
cs.CL, cs.CY
ACL 2017
null
cs.CL
20170501
20170501
[]
1705.00652
12
# 4 FEEDFORWARD APPROACH

Rather than learning a generative model, we investigate learning a feedforward network to score potential responses. Recall the goal of response selection is to model P(y | x), which is used to rank possible responses y given an input email x. This probability distribution can be written as:

P(y | x) = P(x, y) / \sum_{k} P(x, y_k)    (1)

The joint probability P(x, y) is estimated using a learned neural network scoring function S such that:

P(x, y) ∝ e^{S(x, y)}    (2)

Note that the calculation of equation 1 requires summing over the neural network outputs for all possible responses y_k. (This is only an issue for training, and not inference, since the denominator is a constant for any given x and so does not affect the arg max over y.) This is prohibitively expensive to calculate, so we approximate P(y | x) by sampling K responses, including y, uniformly from our corpus during training:

P_approx(y | x) = P(x, y) / \sum_{k=1}^{K} P(x, y_k)    (3)

Combining equations 2 and 3 gives the approximate probability of the training data used to train the neural networks:

P_approx(y | x) = e^{S(x, y)} / \sum_{k=1}^{K} e^{S(x, y_k)}    (4)
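A minimal numerical sketch of the sampled softmax in equation 4, assuming the scores S(x, y) have already been computed for the true response and K − 1 sampled negatives (the numbers below are placeholders):

```python
import numpy as np

def p_approx(scores):
    """scores[0] is S(x, y) for the true response; scores[1:] are the sampled negatives."""
    e = np.exp(scores - np.max(scores))  # subtract the max for numerical stability
    return e[0] / e.sum()

print(p_approx(np.array([2.1, 0.3, -0.5, 1.0])))  # probability assigned to the true response
```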
1705.00652#12
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00648
13
# 3 Automatic Fake News Detection

One of the most obvious applications of our dataset is to facilitate the development of machine learning models for automatic fake news detection. In this task, we frame this as a 6-way multi-class text classification problem. The research questions are:

• Based on surface-level linguistic realizations only, how well can machine learning algorithms classify a short statement into a fine-grained category of fakeness?

• Can we design a deep neural network architecture to integrate speaker-related meta-data with text to enhance the performance of fake news detection?

Since convolutional neural network architectures (CNNs) (Collobert et al., 2011; Kim, 2014; Zhang et al., 2015) have obtained state-of-the-art results on many text classification datasets, we build our neural network model based on a recently proposed CNN model (Kim, 2014). Figure 2 shows the overview of our hybrid convolutional neural network for integrating text and meta-data.
1705.00648#13
"Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection
Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news has been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present liar: a new, publicly available dataset for fake news detection. We collected a decade-long, 12.8K manually labeled short statements in various contexts from PolitiFact.com, which provides detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate meta-data with text. We show that this hybrid approach can improve a text-only deep learning model.
http://arxiv.org/pdf/1705.00648
William Yang Wang
cs.CL, cs.CY
ACL 2017
null
cs.CL
20170501
20170501
[]
1705.00652
13
P_approx(y | x) = e^{S(x, y)} / \sum_{k=1}^{K} e^{S(x, y_k)}    (4)

The following subsections show several scoring models; how to extend the models to multiple features; how to overcome bias introduced by the sampling procedure; and an efficient search algorithm for response selection.

# 4.1 N-gram Representation

To represent input emails x and responses y as fixed-dimensional input features, we extract n-gram features from each. During training, we learn a d-dimensional embedding for each n-gram jointly with the other neural network parameters. To represent sequences of words, we combine n-gram embeddings by summing their values. We will denote this bag of n-grams representation as Ψ(x) ∈ R^d. This representation is quick to compute and captures basic semantic and word ordering information.

# 4.2 Joint Scoring Model

Figure 3a shows the joint scoring neural network model that takes the bag of n-gram representations of the input email x and the response y, and produces a scalar score S(x, y). This deep neural network can model complex joint interactions between input and responses in its computation of the score.

# 4.3 Dot-Product Scoring Model
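A hedged sketch of the bag-of-n-grams encoder Ψ: the paper learns embeddings for hundreds of thousands of frequent n-grams jointly with the network, whereas this illustration uses a small random embedding table and hashing as stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
d, vocab_size = 320, 10_000
embeddings = rng.normal(scale=0.1, size=(vocab_size, d))  # learned jointly with the network in the paper

def ngrams(tokens, n_max=2):
    return [" ".join(tokens[i:i + n]) for n in range(1, n_max + 1)
            for i in range(len(tokens) - n + 1)]

def psi(text):
    """Bag-of-n-grams representation: the sum of the embeddings of the extracted n-grams."""
    ids = [hash(g) % vocab_size for g in ngrams(text.lower().split())]
    return embeddings[ids].sum(axis=0)

print(psi("did you manage to print the document").shape)  # (320,)
```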
1705.00652#13
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00648
14
We randomly initialize a matrix of embedding vectors to encode the metadata embeddings. We use a convolutional layer to capture the dependency between the meta-data vector(s). Then, a standard max-pooling operation is performed on the latent space, followed by a bi-directional LSTM layer. We then concatenate the max-pooled text representations with the meta-data representation from the bi-directional LSTM, and feed them to a fully connected layer with a softmax activation function to generate the final prediction.

# 4 LIAR: Benchmark Evaluation

In this section, we first describe the experimental setup and the baselines. Then, we present the empirical results and compare various models.

# 4.1 Experimental Settings
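A rough PyTorch sketch of the hybrid architecture described above (text CNN branch plus meta-data CNN/Bi-LSTM branch, concatenated into a softmax classifier). The layer and vocabulary sizes are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class HybridCNN(nn.Module):
    def __init__(self, vocab=20000, meta_vocab=5000, emb=300, n_classes=6):
        super().__init__()
        self.text_emb = nn.Embedding(vocab, emb)
        self.meta_emb = nn.Embedding(meta_vocab, emb)
        # Text branch: 1-D convolutions over word embeddings, max-pooled over time.
        self.text_convs = nn.ModuleList([nn.Conv1d(emb, 128, k) for k in (2, 3, 4)])
        # Meta-data branch: convolution, max-pooling, then a Bi-LSTM.
        self.meta_conv = nn.Conv1d(emb, 10, 3)
        self.meta_pool = nn.MaxPool1d(2)
        self.meta_lstm = nn.LSTM(10, 32, batch_first=True, bidirectional=True)
        self.out = nn.Linear(3 * 128 + 2 * 32, n_classes)

    def forward(self, text_ids, meta_ids):
        t = self.text_emb(text_ids).transpose(1, 2)                    # (B, emb, T)
        t = torch.cat([torch.relu(c(t)).max(dim=2).values for c in self.text_convs], dim=1)
        m = self.meta_emb(meta_ids).transpose(1, 2)                    # (B, emb, M)
        m = self.meta_pool(torch.relu(self.meta_conv(m))).transpose(1, 2)
        _, (h, _) = self.meta_lstm(m)                                  # h: (2, B, 32)
        m = torch.cat([h[0], h[1]], dim=1)                             # (B, 64)
        return self.out(torch.cat([t, m], dim=1))                      # logits over the 6 classes

logits = HybridCNN()(torch.randint(0, 20000, (4, 30)), torch.randint(0, 5000, (4, 8)))
print(logits.shape)  # torch.Size([4, 6])
```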
1705.00648#14
"Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection
Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news has been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present liar: a new, publicly available dataset for fake news detection. We collected a decade-long, 12.8K manually labeled short statements in various contexts from PolitiFact.com, which provides detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate meta-data with text. We show that this hybrid approach can improve a text-only deep learning model.
http://arxiv.org/pdf/1705.00648
William Yang Wang
cs.CL, cs.CY
ACL 2017
null
cs.CL
20170501
20170501
[]
1705.00652
14
# 4.3 Dot-Product Scoring Model

Figure 3b shows the structure of the dot-product scoring model, where S(x, y) is factorized as a dot-product between a vector h_x that depends only on x and a vector h_y that depends only on y. This is similar to Deep Structured Semantic Models, which use feedforward networks to project queries and documents into a common space where the relevance of a document given a query is computed as the cosine distance between them [9]. While the interaction between features is not as direct as in the joint scoring model (see section 4.2), this factorization allows us to calculate the representation of the input x and possible responses y

(a) A neural network that calculates a score between emails and their responses. Rectified Linear Unit (ReLU) layers are used to reduce the (2d)-dimensional concatenation of the bag of n-gram representations to a scalar S(x, y).
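A hedged sketch of the joint scorer in Figure 3a: the two bag-of-n-gram vectors are concatenated and reduced to a scalar by ReLU layers. The hidden sizes follow the 500/300/100 configuration mentioned in section 5.1; everything else is illustrative.

```python
import torch
import torch.nn as nn

d = 320
joint_scorer = nn.Sequential(
    nn.Linear(2 * d, 500), nn.ReLU(),
    nn.Linear(500, 300), nn.ReLU(),
    nn.Linear(300, 100), nn.ReLU(),
    nn.Linear(100, 1),                # scalar score S(x, y)
)

psi_x, psi_y = torch.randn(1, d), torch.randn(1, d)   # stand-ins for Ψ(x) and Ψ(y)
score = joint_scorer(torch.cat([psi_x, psi_y], dim=1))
print(score.item())
```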
1705.00652#14
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00648
15
# 4.1 Experimental Settings

We used five baselines: a majority baseline, a regularized logistic regression classifier (LR), a support vector machine classifier (SVM) (Crammer and Singer, 2001), a bi-directional long short-term memory networks model (Bi-LSTMs) (Hochreiter and Schmidhuber, 1997; Graves and Schmidhuber, 2005), and a convolutional neural network model (CNNs) (Kim, 2014). For LR and SVM, we used the LIBSHORTTEXT toolkit6, which was shown to provide very strong performance on short text classification problems (Wang and Yang, 2015). For Bi-LSTMs and CNNs, we used TensorFlow for the implementation. We used pre-trained 300-dimensional word2vec embeddings from Google News (Mikolov et al., 2013) to warm-start the text embeddings. We strictly tuned all the hyperparameters on the validation dataset. The best filter sizes for the CNN model were (2,3,4). In all cases, each size has 128 filters. The dropout keep probability was optimized to 0.8,
1705.00648#15
"Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection
Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news has been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present liar: a new, publicly available dataset for fake news detection. We collected a decade-long, 12.8K manually labeled short statements in various contexts from PolitiFact.com, which provides detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate meta-data with text. We show that this hybrid approach can improve a text-only deep learning model.
http://arxiv.org/pdf/1705.00648
William Yang Wang
cs.CL, cs.CY
ACL 2017
null
cs.CL
20170501
20170501
[]
1705.00652
15
(b) Dot-product architecture, where a tower of tanh activation hidden layers encodes x to h_x and a separate tower encodes y to h_y, such that the score S(x, y) is the dot-product h_x^T h_y.

Fig. 3. Feedforward scoring models that take the n-gram representation of an email body and a response, and compute a score.

independently. In particular, the representations of the response set R can be precomputed. Then searching for response suggestions reduces to encoding a new email x in a simple feed-forward step to the vector h_x, and then searching for high dot-product scoring responses in the precomputed set (see section 4.7). It is also efficient to compute the scores S(x_i, y_j) for all pairs of inputs and responses in a training batch of n examples, as that requires only an additional matrix multiplication after computing the h_x and h_y vectors. This leads to vastly more efficient training with multiple negatives (see section 4.4) than is possible with the joint scoring model.
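A hedged sketch of the dot-product model and the batch scoring trick it enables: two separate tanh towers encode x and y, and a single matrix multiply scores every (x_i, y_j) pair in a batch. The 300/300/500 tower sizes follow section 5.1; the rest is illustrative.

```python
import torch
import torch.nn as nn

d = 320
def tower():
    return nn.Sequential(nn.Linear(d, 300), nn.Tanh(),
                         nn.Linear(300, 300), nn.Tanh(),
                         nn.Linear(300, 500), nn.Tanh())

encode_x, encode_y = tower(), tower()

K = 8                                                  # batch size
psi_x, psi_y = torch.randn(K, d), torch.randn(K, d)    # stand-ins for Ψ(x_i) and Ψ(y_i)
h_x, h_y = encode_x(psi_x), encode_y(psi_y)
scores = h_x @ h_y.T                                   # K x K matrix: scores[i, j] = S(x_i, y_j)
print(scores.shape)
```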
1705.00652#15
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00648
16
# 6https://www.csie.ntu.edu.tw/˜cjlin/libshorttext/

Models               Valid.  Test
Majority             0.204   0.208
SVMs                 0.258   0.255
Logistic Regression  0.257   0.247
Bi-LSTMs             0.223   0.233
CNNs                 0.260   0.270
Hybrid CNNs
  Text + Subject     0.263   0.235
  Text + Speaker     0.277   0.248
  Text + Job         0.270   0.258
  Text + State       0.246   0.256
  Text + Party       0.259   0.248
  Text + Context     0.251   0.243
  Text + History     0.246   0.241
  Text + All         0.247   0.274

Table 2: The evaluation results on the LIAR dataset. The top section: text-only models. The bottom: text + meta-data hybrid models.

while no L2 penalty was imposed. The batch size for stochastic gradient descent optimization was set to 64, and the learning process involves 10 passes over the training data for the text model. For the hybrid model, we use 3 and 8 as filter sizes, and the number of filters was set to 10. We considered 0.5 and 0.8 as dropout probabilities. The hybrid model requires 5 training epochs.
1705.00648#16
"Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection
Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news has been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present liar: a new, publicly available dataset for fake news detection. We collected a decade-long, 12.8K manually labeled short statements in various contexts from PolitiFact.com, which provides detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate meta-data with text. We show that this hybrid approach can improve a text-only deep learning model.
http://arxiv.org/pdf/1705.00648
William Yang Wang
cs.CL, cs.CY
ACL 2017
null
cs.CL
20170501
20170501
[]
1705.00652
16
# 4.4 Multiple Negatives

Recall from section 4 that a set of K possible responses is used to approximate P(y | x) — one correct response and K − 1 random negatives. For efficiency and simplicity we use the responses of other examples in a training batch of stochastic gradient descent as negative responses. For a batch of size K, there will be K input emails x = (x_1, ..., x_K) and their corresponding responses y = (y_1, ..., y_K). Every reply y_j is effectively treated as a negative candidate for x_i if i ≠ j. The K − 1 negative examples for each x are different at each pass through the data due to shuffling in stochastic gradient descent. The goal of training is to minimize the approximated mean negative log probability of the data. For a single batch this is:

J(x, y, θ) = − (1/K) \sum_{i=1}^{K} log P_approx(y_i | x_i)
           = − (1/K) \sum_{i=1}^{K} [ S(x_i, y_i) − log \sum_{j=1}^{K} e^{S(x_i, y_j)} ]    (5)

using equation 4, where θ represents the word embeddings and neural network parameters used to calculate S. Note that this loss function is invariant to adding any function f(x) to S(x, y), so
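Equation 5 amounts to a softmax cross-entropy over the K × K batch score matrix with the correct responses on the diagonal; a hedged PyTorch sketch, not the paper's code:

```python
import torch
import torch.nn.functional as F

def multiple_negatives_loss(h_x, h_y):
    """h_x, h_y: (K, dim) encodings of a batch's inputs and their true responses."""
    scores = h_x @ h_y.T                        # scores[i, j] = S(x_i, y_j)
    targets = torch.arange(scores.size(0))      # the matching response sits on the diagonal
    return F.cross_entropy(scores, targets)     # mean of -log P_approx(y_i | x_i)

K, dim = 8, 500
print(multiple_negatives_loss(torch.randn(K, dim), torch.randn(K, dim)))
```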
1705.00652#16
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00648
17
We used grid search to tune the hyperparameters for LR and SVM models. We chose accuracy as the evaluation metric, since we found that the accuracy results from various models were equivalent to f-measures on this balanced dataset.

# 4.2 Results

We outline our empirical results in Table 2. First, we compare various models using text features only. We see that the majority baseline on this dataset gives about 0.204 and 0.208 accuracy on the validation and test sets respectively. Standard text classifiers such as SVMs and LR models obtained significant improvements. Due to overfitting, the Bi-LSTMs did not perform well. The CNNs outperformed all models, resulting in an accuracy of 0.270 on the heldout test set. We compared the predictions from the CNN model with SVMs via a two-tailed paired t-test, and CNN was significantly better (p < .0001). When considering all meta-data and text, the model achieved the best result on the test data.

# 5 Conclusion
1705.00648#17
"Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection
Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news has been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present liar: a new, publicly available dataset for fake news detection. We collected a decade-long, 12.8K manually labeled short statements in various contexts from PolitiFact.com, which provides detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate meta-data with text. We show that this hybrid approach can improve a text-only deep learning model.
http://arxiv.org/pdf/1705.00648
William Yang Wang
cs.CL, cs.CY
ACL 2017
null
cs.CL
20170501
20170501
[]
1705.00652
17
(a) Joint scoring model using multiple features of the input email x^i. A subnetwork scores the response using each feature alone, before the top-level hidden representations h^i are concatenated and then used to compute the final score. This is an application of the multi-loss architecture from Al-Rfou et al. [2].

(b) Dot-product scoring model with multiple input features x^i. This is a novel setup of the multi-loss architecture, whereby the feature-level scores S(x^i, y) and the final score S(x, y) are computed as a dot-product between the parallel input and response sides.

Fig. 4. Scoring models that use multiple features of the input email.

S(x, y) is learned up to an additive term that does not affect the arg max over y performed in the inference-time search.
1705.00652#17
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00648
18
# 5 Conclusion

We introduced LIAR, a new dataset for automatic fake news detection. Compared to prior datasets, LIAR is an order of magnitude larger, which enables the development of statistical and computational approaches to fake news detection. LIAR’s authentic, real-world short statements from various contexts with diverse speakers also make research on developing broad-coverage fake news detectors possible. We show that when combining meta-data with text, significant improvements can be achieved for fine-grained fake news detection. Given the detailed analysis report and links to source documents in this dataset, it is also possible to explore the task of automatic fact-checking over knowledge bases in the future. Our corpus can also be used for stance classification, argument mining, topic modeling, rumor detection, and political NLP research.

# References

Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12(Aug):2493–2537.
1705.00648#18
"Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection
Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news has been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present liar: a new, publicly available dataset for fake news detection. We collected a decade-long, 12.8K manually labeled short statements in various contexts from PolitiFact.com, which provides detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate meta-data with text. We show that this hybrid approach can improve a text-only deep learning model.
http://arxiv.org/pdf/1705.00648
William Yang Wang
cs.CL, cs.CY
ACL 2017
null
cs.CL
20170501
20170501
[]
1705.00652
18
S(x, y) is learned up to an additive term that does not affect the arg max over y performed in the inference-time search.

# 4.5 Incorporating Multiple Features

There is structure in emails that can be used to improve the accuracy of scoring models. We follow the multi-loss architecture of Al-Rfou et al. [2] to incorporate additional features beyond the message body, for example the subject line. Figure 4 shows the multi-loss architecture applied to both the joint and dot-product scoring models. The multi-loss networks have a sub-network for each feature of the email; these sub-networks are trained to independently score candidate responses using that feature alone. The highest-level hidden layer of each sub-network is used in a final sub-network that is trained to combine the information from all the features and give a final score. This hierarchical structure results in models that learn how to use each feature faster than a network that sees all the features at once, and also allows for learning deeper networks than is otherwise possible [2].
1705.00652#18
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00648
19
Koby Crammer and Yoram Singer. 2001. On the algorithmic implementation of multiclass kernel-based vector machines. Journal of Machine Learning Research 2(Dec):265–292.

Song Feng, Ritwik Banerjee, and Yejin Choi. 2012. Syntactic stylometry for deception detection. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers-Volume 2. Association for Computational Linguistics, pages 171–175.

William Ferreira and Andreas Vlachos. 2016. Emergent: a novel data-set for stance classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. ACL.

Alex Graves and Jürgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks 18(5):602–610.

Zhen Hai, Peilin Zhao, Peng Cheng, Peng Yang, Xiao-Li Li, Guangxia Li, and Ant Financial. 2016. Deceptive review spam detection via exploiting task relatedness and unlabeled data. In EMNLP.
1705.00648#19
"Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection
Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news has been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present liar: a new, publicly available dataset for fake news detection. We collected a decade-long, 12.8K manually labeled short statements in various contexts from PolitiFact.com, which provides detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate meta-data with text. We show that this hybrid approach can improve a text-only deep learning model.
http://arxiv.org/pdf/1705.00648
William Yang Wang
cs.CL, cs.CY
ACL 2017
null
cs.CL
20170501
20170501
[]
1705.00652
19
Formally, denote the M features of an input email x as x^1, ..., x^M. Then for each i, a sub-network produces a hidden vector representation h^i, and a score of the response y using only x^i, S(x^i, y). Denoting (x^i_1, ..., x^i_K) as x^i, a loss function J(x^i, y, θ) encourages S(x^i, y) to be high

Message: Did you manage to print the document?
With response bias: — Yes, I did. — Yes, it’s done. — No, I didn’t.
Without response bias: — It’s printed. — I have printed it. — Yes, all done.

Table 1. Examples of Smart Reply suggestions with and without the response bias. Without biasing, the model prefers responses that are very closely related to the input email, but are less likely to be chosen than the more generic yes/no responses.

for corresponding pairs in the training batch, and low for the random pairs. The second stage of the network produces a final score S(x, y) that is a function of all of the h^i vectors. The network is trained end-to-end with a single loss:

J(x, y, θ) + \sum_{i=1}^{M} J(x^i, y, θ)
1705.00652#19
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00648
20
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8):1735–1780.

Cecilia Kang and Adam Goldman. 2016. In Washington pizzeria attack, fake news brought real guns. In the New York Times.

Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP).

Rada Mihalcea and Carlo Strapparava. 2009. The lie detector: Explorations in the automatic recognition of deceptive language. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.

Myle Ott, Yejin Choi, Claire Cardie, and Jeffrey T Hancock. 2011. Finding deceptive opinion spam by any stretch of the imagination. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1. Association for Computational Linguistics, pages 309–319.
1705.00648#20
"Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection
Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news has been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present liar: a new, publicly available dataset for fake news detection. We collected a decade-long, 12.8K manually labeled short statements in various contexts from PolitiFact.com, which provides detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate meta-data with text. We show that this hybrid approach can improve a text-only deep learning model.
http://arxiv.org/pdf/1705.00648
William Yang Wang
cs.CL, cs.CY
ACL 2017
null
cs.CL
20170501
20170501
[]
1705.00652
20
J(x, y, θ) + \sum_{i=1}^{M} J(x^i, y, θ)

Note that the final score produced by the multi-loss dot-product model (figure 4b) is a dot-product of a vector h_x that depends only on the input x, and a vector h_y that depends only on the response y, as in the single-feature case. As a result, it can still be used for the fast vector search algorithm described in section 4.7, and training with multiple negatives remains efficient. For the multi-loss joint scoring model, the input feature vector for the final sub-network is the concatenation of the h^i vectors and therefore scales with the number of features, leading to a computational bottleneck. For the dot-product scoring model, the hidden layer representations are learned such that they are meaningful vectors when compared using a dot product. This motivates combining the representations for the final sub-network using vector arithmetic. The features extracted from the input email, x^i, are averaged (1/M \sum_{i=1}^{M} h^i), as are the response representations learned from the different sub-networks (1/M \sum_{i=1}^{M} h^i_y), before being passed to the final neural network layers. While this choice may constrain the representations learned by each sub-network, and may limit the ability of the final sub-network to differentiate information from different features, it also encourages them to exist in the same semantic space.
1705.00652#20
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00648
21
Verónica Pérez-Rosas and Rada Mihalcea. 2015. Experiments in open domain deception detection. In EMNLP. pages 1120–1125.

Andreas Vlachos and Sebastian Riedel. 2014. Fact checking: Task definition and dataset construction. Proceedings of the ACL 2014 Workshop on Language Technology and Computational Social Science.

William Yang Wang and Diyi Yang. 2015. That’s so annoying!!!: A lexical and frame-semantic embedding based data augmentation approach to automatic categorization of annoying behaviors using #petpeeve tweets. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015). ACL, Lisbon, Portugal.

Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems. pages 649–657.
1705.00648#21
"Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection
Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news has been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present liar: a new, publicly available dataset for fake news detection. We collected a decade-long, 12.8K manually labeled short statements in various contexts from PolitiFact.com, which provides detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate meta-data with text. We show that this hybrid approach can improve a text-only deep learning model.
http://arxiv.org/pdf/1705.00648
William Yang Wang
cs.CL, cs.CY
ACL 2017
null
cs.CL
20170501
20170501
[]
1705.00652
21
# 4.6 Response Biasing

The discriminative objective function introduced in section 4.4 leads to a biased estimation of the denominator in equation 1. Since our negative examples are sampled from the training data distribution, common responses with high prior likelihood appear more often as negative examples. In practice, we observed that this bias leads to models that favor specific and long responses instead of short and generic ones. To encourage more generic responses, we bias the responses in R using a score derived from the log likelihood of the response as estimated using a language model. Our final score S(x, y) of any input email and response pair is calculated as:

S(x, y) = S_m(x, y) + α log P_LM(y)    (6)

where S_m is the score calculated by our trained scoring model, P_LM(y) is the probability of y according to the language model, and α is tuned with online experiments. Note that the additional term is dependent only on y, and so can be precomputed for every response in R prior to inference time. Table 1 demonstrates the effect of including the response bias using an example email.

# 4.7 Hierarchical Quantization for Efficient Search
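A minimal sketch of equation 6: the language-model log probability, precomputed per response, is added to the trained model's score with weight α. All numbers below are made-up placeholders.

```python
import numpy as np

alpha = 0.5                                    # tuned with online experiments in the paper
model_scores = np.array([1.2, 1.5, 0.9])       # S_m(x, y) for three candidate responses
lm_log_probs = np.array([-1.5, -9.0, -2.0])    # log P_LM(y), precomputed per response

final_scores = model_scores + alpha * lm_log_probs
# Without the bias, the very specific response at index 1 would win; with the bias,
# the more generic, higher-prior response at index 0 is selected instead.
print(final_scores.argmax())
```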
1705.00652#21
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00652
22
# 4.7 Hierarchical Quantization for Efficient Search

At inference time, given an input email x, we use the dot-product scoring model to find response suggestions y ∈ R with the highest scores S(x, y), where the scoring function is the dot-product: S(x, y) = h_x^T h_y.¹ The problem of finding datapoints with the largest dot-product values is sometimes called Maximum Inner Product Search (MIPS). This is a research topic of its own and is also useful for inference in neural networks with a large number of output classes. Maximum Inner Product Search is related to nearest neighbor search (NNS) in Euclidean space, but comes with its own challenges because the dot-product “distance” is non-metric and many classical approaches such as KD-trees cannot be applied directly. For more background, we refer readers to the relevant works of [3, 5, 21, 22]. In the Smart Reply system, we need to keep very high retrieval recall (for example > 99% in top-30 retrieval). However, many of the existing methods are not designed to work well in the high recall regime without slowing down the search considerably. To achieve such high recall, hashing methods often require a large number of hash bits and tree methods often need to search a large number of leaves.
1705.00652#22
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00652
23
In this work, we use a hierarchical quantization approach to solve the search problem. For our use case, the responses y come from a fixed set R and thus the h_y vectors are computed ahead of inference time. Unlike the previous work in [5], we propose a hierarchical combination of vector quantization, orthogonal transformation and product quantization of the transformed vector quantization residuals. Our hierarchical model is based on the intuition that data in DNN hidden layers often resemble a low dimensional signal with high dimensional residuals. Vector quantization is good at capturing low dimensional signals. Product quantization works by decomposing the high-dimensional vectors into low-dimensional subspaces and then quantizing them separately [4]. We use a learned rotation before product quantization as it has been shown to reduce quantization error [16]. Specifically, h_y is approximated by a hierarchical quantization HQ(h_y), which is the sum of the vector quantization component VQ(h_y) and the quantized residual. A learned orthogonal transformation R is applied to the residual, followed by product quantization:

h_y ≈ HQ(h_y) = VQ(h_y) + R^T PQ(r_y),  where r_y = R(h_y − VQ(h_y))
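A small numpy sketch of the hierarchical quantization idea (vector-quantize h_y, rotate the residual, product-quantize the rotated residual subspace by subspace). The codebooks and rotation here are random stand-ins; in the paper they are learned with SGD.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K_sub = 8, 4                                  # small sizes for illustration only
C_vq = rng.normal(size=(16, d))                  # vector-quantization codebook
R = np.linalg.qr(rng.normal(size=(d, d)))[0]     # orthogonal rotation (learned in the paper)
sub_d = d // K_sub
C_pq = rng.normal(size=(K_sub, 32, sub_d))       # one product-quantization codebook per subspace

def hierarchical_quantize(h_y):
    vq = C_vq[np.argmin(((C_vq - h_y) ** 2).sum(axis=1))]   # nearest VQ centroid
    r = R @ (h_y - vq)                                       # rotated residual
    pq = np.concatenate([
        C_pq[k][np.argmin(((C_pq[k] - r[k * sub_d:(k + 1) * sub_d]) ** 2).sum(axis=1))]
        for k in range(K_sub)])                              # per-subspace quantization
    return vq + R.T @ pq                                     # HQ(h_y)

h_y = rng.normal(size=d)
print(np.linalg.norm(h_y - hierarchical_quantize(h_y)))      # reconstruction error
```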
1705.00652#23
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00652
24
h_y ≈ HQ(h_y) = VQ(h_y) + R^T PQ(r_y),  where r_y = R(h_y − VQ(h_y))

Here, given a vector quantization codebook C_VQ, product quantization codebooks {C_PQ^{(k)}} for each of the subspaces k, and the learned orthogonal matrix R ∈ R^{d×d}, the vector quantization of h_y is VQ(h_y) = argmin_{c ∈ C_VQ} ||h_y − c||². The product quantization of the rotated residual r_y is computed by first dividing the rotated residual into K subvectors r_y^{(k)}, k = 1, 2, ..., K, and then quantizing the subvectors independently by vector quantizers C_PQ^{(k)}: PQ^{(k)}(r_y^{(k)}) = argmin_{s ∈ C_PQ^{(k)}} ||s − r_y^{(k)}||². Finally the full product quantization PQ(r_y) is given by the concatenation of the quantizations in each subspace:

PQ(r_y) = [PQ^{(1)}(r_y^{(1)}); PQ^{(2)}(r_y^{(2)}); ...; PQ^{(K)}(r_y^{(K)})],  r_y = [r_y^{(1)}; r_y^{(2)}; ...; r_y^{(K)}]
1705.00652#24
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00652
25
PQ(r_y) = [PQ^{(1)}(r_y^{(1)}); PQ^{(2)}(r_y^{(2)}); ...; PQ^{(K)}(r_y^{(K)})],  r_y = [r_y^{(1)}; r_y^{(2)}; ...; r_y^{(K)}]

¹ The bias term, α log P_LM(y), can be included in the dot product e.g. by extending the h_x vector with {α} and the h_y vector with {log P_LM(y)}.

At training time, the codebook for vector quantization, C_VQ, the codebooks for product quantization {C_PQ^{(k)}}, and the rotation matrix R are jointly learned by minimizing the reconstruction error of h_y − HQ(h_y) with stochastic gradient descent (SGD). At inference time, prediction is made by taking the candidates with the highest quantized dot product, i.e. h_x^T VQ(h_y) + (R h_x)^T PQ(r_y). The distance computation can be performed very efficiently without reconstructing HQ(h_y), instead utilizing a lookup table for asymmetric distance computation [10]. Furthermore, the lookup operation is carried out in register using SIMD (single instruction, multiple data) instructions in our implementation, providing a further speed improvement.
1705.00652#25
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00652
26
Fig. 5. Evaluation of retrieval speed vs. recall of top-30 neighbors with maximum dot product. The curve is produced by varying the number of approximate neighbors retrieved by our hierarchical quantization method and by Asymmetric LSH [22], and varying the number of leaves searched by the clustering algorithm of [3].

We summarize the speed-recall evaluation of using different approximate MIPS algorithms in figure 5. The y-axis shows the recall of the top-30 retrieved responses, where the ground truth is computed by exhaustive search. The x-axis shows the speed-up factor with respect to exhaustive search. Therefore, exhaustive search achieves a recall of 100% with a speed-up factor of 1. Our algorithm achieves 99.89% recall with a speed-up factor over 10, outperforming the baselines of [3, 22].

# 5 EVALUATION

# 5.1 Experimental Setup

Data. Pairs of emails and their responses are sampled from user data to create datasets for training and testing the feedforward response scoring models. In total around 300M pairs are collected. The data is split uniformly at random into two disjoint sets of 95% and 5%, which constitute the training and test sets respectively.
1705.00652#26
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00652
27
All email data (raw data, preprocessed data and training/evaluation data) is encrypted. Engineers can only inspect aggregated statistics on anonymized sentences that occurred across many users and do not identify any user. Language identification is run on the emails, and only English language emails are kept. The subject lines and message bodies are tokenized into word sequences, from which n-gram features are extracted. Infrequent words, URLs, email addresses, phone numbers etc. are replaced with special tokens. Quoted text arising from replying and forwarding is also removed. We used hundreds of thousands of the most frequent n-grams as features to represent the text.

Training. Each of our DNN sub-networks consists of 3 hidden layers of sizes 500, 300, 100 in the case of the joint scoring models and 300, 300, 500 for the dot-product models. The embedding dimensionality d of our n-grams is 320. We train each model for at least 10 epochs. We set the learning rate to 0.01 during the first 40 million batches, after which it is reduced to 0.001. The models are trained on CPUs across 50 machines using a distributed implementation of TensorFlow [1].

# 5.2 Offline Evaluation
1705.00652#27
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00652
28
# 5.2 Offline Evaluation

Our models are evaluated offline on their ability to identify the true response to an email in the test data against a set of randomly selected competing responses. In this paper, we score a set of 100 responses that includes the correct response and 99 randomly selected incorrect competitors. We rank responses according to their scores, and report precision at 1 (P@1) as a metric of evaluation. We found that P@1 correlates with the quality of our models as measured in online experiments with users (see section 5.3).

Batch Size  Scoring Model  P@1
25          Joint          49%
25          Dot-product    48%
50          Dot-product    52%

Table 2. P@1 results on the test set for the joint and dot-product multi-loss scoring models. The training objective discriminates against more random negative examples for larger batch sizes.
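A simple sketch of the P@1 metric as described: the true response is ranked against 99 random competitors and we count how often it scores highest (random scores stand in for model outputs).

```python
import numpy as np

def precision_at_1(true_scores, negative_scores):
    """true_scores: (N,); negative_scores: (N, 99) scores for each example's competitors."""
    return float(np.mean(true_scores > negative_scores.max(axis=1)))

rng = np.random.default_rng(0)
print(precision_at_1(rng.normal(1.0, 1.0, 1000), rng.normal(0.0, 1.0, (1000, 99))))
```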
1705.00652#28
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00652
29
Table 2. P@1 results on the test set for the joint and dot-product multi-loss scoring models. The training objective discriminates against more random negative examples for larger batch sizes. Table 2 presents the results of the offline evaluation for joint and dot-product scoring models. The joint scoring model outperforms the dot-product model trained on the same batch size. This model learns complex cross-features between the input email and the response, leading to better scoring. However, the joint scoring model does not scale well to larger batches, since each possible pairing of input email and response requires a full forward pass through the network. The number of forward passes through the joint scoring model grows quadratically with the batch size. Recall that the dot-product scoring model is much faster to train with multiple negatives than the joint scoring model, since it requires only a linear number of forward passes followed by a single K by K matrix multiply to score all possible pairings, where K is the batch size. As a result, the multi-loss dot-product models can be trained on larger batches to produce more accurate models.
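To make the scaling argument concrete, the sketch below scores every pairing in a batch with 2K tower passes plus one K by K matrix product. `encode_x` and `encode_y` are placeholder tower functions, and the softmax-over-rows form of the multiple-negatives loss is an assumption about the section 4.4 objective, not a quote of it.

```python
import numpy as np

def batch_scores_dot_product(emails, responses, encode_x, encode_y):
    # K forward passes per side, then one K x K matrix product of scores.
    H_x = np.stack([encode_x(x) for x in emails])      # (K, d)
    H_y = np.stack([encode_y(y) for y in responses])   # (K, d)
    return H_x @ H_y.T                                  # (K, K) scores S(x_i, y_j)

def multiple_negatives_loss(S):
    # Assumed loss form: softmax over each row, where the diagonal holds the
    # true pairs and the off-diagonal entries act as random negatives.
    log_probs = S - np.log(np.exp(S).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```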
1705.00652#29
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00652
30
Note that the models in table 2 are trained with the multiple negatives training loss of section 4.4. It is also possible to train the models as a classifier with a sigmoid loss. We find that multiple negatives training results in a consistent 20% reduction in P@1 error rate relative to training as a classifier, across all of our conversational datasets. For example, in a different version of the Smart Reply data it improved the P@1 of a dot-product model from 47% to 58%. # 5.3 Online Evaluation Though the offline ranking metric gives a useful signal during development, the ultimate proof of a response selection model is how it affects the quality of the suggestions that users see in the end-to-end Smart Reply system. Suggestion quality or usefulness is approximated here by the observed conversion rate, i.e. the percentage of times users click on one of the suggestions when they are shown. [Figure 6, panels (a) and (b): diagrams of the exhaustive search and two pass architectures; the panel captions appear in the next chunk.]
1705.00652#30
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00652
31
(a) The exhaustive search setup scores all the examples in the response set using a joint scoring model. The top N scoring responses are then found using a heap. (b) The two pass setup first uses a fast dot product scoring model to produce an M-best list of response suggestions from the response set. The M-best list is then exhaustively searched to find the top N scoring responses according to a more accurate joint scoring model. (c) The single pass setup uses a dot product scoring model and no joint scoring model. Fig. 6. Online system architectures. This section describes the evolution of our system and shows the effect of each iteration on latency and quality relative to the baseline Seq2Seq system. An outline of this series of online experiments is presented in table 3. 5.3.1 Exhaustive Search. Our initial system scored the input email against every response in the response set R using the joint scoring model and the Email body feature alone (see figure 6a). Given that the joint scoring model requires a forward pass for each response in R, this approach is too computationally expensive for an online experiment; see row 1 of table 3.
1705.00652#31
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00652
32
5.3.2 Two pass. The computational expense of the initial exhaustive search system motivated the design of a two-stage approach where the first stage is fast and the second is more accurate, as shown in figure 6b. The first stage consists of a dot-product scoring model utilizing the text of the email body alone. As a pre-processing step, all of the responses in the response set R = {y_1, ..., y_N} are encoded to their vector representations to give a matrix R = [h_{y_1}, ..., h_{y_N}] (see figure 3b). At inference time, a new input email x is encoded to its representation h_x, and the vector of all scores is calculated as the dot product with the precomputed matrix: R h_x. A heap is then used to find the M highest scoring responses. The second stage uses the joint scoring model to score the candidates from the first stage. Row 2 of table 3 shows the 50x speedup from using this two pass system. The system tended to suggest overly specific and often long responses because of the biased negative sampling procedure; see section 4.6. Therefore, we added an extra score to boost the scores of more likely responses using a language model. This change significantly improved the quality of
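The two pass flow just described maps naturally onto a few lines of code. This is a sketch only: `encode_x`, `joint_score`, and the precomputed matrix of response vectors are assumed to be provided by models defined elsewhere, and the values of M and N are illustrative defaults.

```python
import heapq
import numpy as np

# Sketch of the two pass search of figure 6b. R_matrix holds the precomputed
# response encodings h_y, one row per response in the response set; encode_x
# is the email tower and joint_score the slower, more accurate joint model.
def two_pass_suggest(email, responses, R_matrix, encode_x, joint_score, M=100, N=3):
    h_x = encode_x(email)
    first_pass = R_matrix @ h_x          # dot-product scores against all of R
    # First stage: heap-based selection of the M best dot-product candidates.
    m_best = heapq.nlargest(M, range(len(responses)), key=lambda i: first_pass[i])
    # Second stage: exhaustively rescore the M-best list with the joint model.
    rescored = [(joint_score(email, responses[i]), i) for i in m_best]
    return [responses[i] for _, i in heapq.nlargest(N, rescored)]
```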
1705.00652#32
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00652
33
Table 3. Details of several successive iterations of the Smart Reply system, showing the conversion rate and latency relative to the baseline Seq2Seq system of Kannan et al. [11]. Exhaustive search: (1) use a joint scoring model to score all responses in R (conversion rate not reported; latency 500%). Two pass: (2) two passes, dot-product then joint scoring (61%, 10%); (3) include response bias (88%, 10%); (4) improve sampling of dataset and use multi-loss structure (104%, 10%). Single pass: (5) remove second pass (104%, 2%); (6) use hierarchical quantization for search (104%, 1%). the suggestions, see row 3 of table 3, moving the system toward shorter and more generic responses that users were more likely to find appropriate and click. Improving our dataset sampling and using the multi-loss structure brought the conversion rate of the system above that of the Seq2Seq system; see row 4 of table 3.
1705.00652#33
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00652
34
Improving our dataset sampling and using the multi-loss structure brought the conversion rate of the system above that of the Seq2Seq system; see row 4 of table 3. 5.3.3 Single pass. To improve the latency, we removed the second pass step and relied solely on the responses found by the first pass dot-product step (see figure 6c). However, to maintain suggestion quality, we had to improve the dot-product model itself. Since the dot-product scoring model scales better with more negatives during training, we doubled the number of negatives for training the first pass system. We also applied the multi-loss architecture to the first pass dot-product model, using additional input features (see figure 4b). Together these changes made the dot-product model slightly more accurate than the joint model (see table 2). As a result, the system quality stayed the same while the speed increased 5 times, as shown in row 5 of table 3. So far, we have been computing the dot-product between the new email representation and all the precomputed representations of the responses in the response set, and searching the entire list to find high scoring responses. Switching from this exhaustive search to the hierarchical quantization search described in section 4.7 doubles the speed of the system without compromising quality (see row 6 of table 3).
1705.00652#34
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00652
35
As a result, our final system produces better quality suggestions than the baseline Seq2Seq system with a small percentage of the computation and latency. # 6 CONCLUSIONS This paper presents a feed-forward approach for scoring the consistency between input messages and potential responses. A hierarchy of deep networks on top of simple n-gram representations is shown to outperform competitive sequence-to-sequence models in this context. The deep networks use different components for reading inputs and precomputing the representation of possible responses. That architecture enables a highly efficient runtime search. We evaluate the models with the Smart Reply application. Live experiments with production traffic enabled a series of improvements that resulted in a system of higher quality than the original sequence-to-sequence system at a small fraction of the computation and latency. Without addressing the generation of novel responses, this paper suggests a minimal, efficient, and scalable implementation that enables many ranking-based applications. # ACKNOWLEDGMENTS Thanks to Fernando Pereira, Corinna Cortes, Anjuli Kannan, Dilek Hakkani-Tür and Larry Heck for their valuable input to this paper. We would also like to acknowledge the many engineers at Google whose work on the tools and infrastructure made these experiments possible. Thanks especially to the users of Smart Reply. # REFERENCES
1705.00652#35
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00652
36
# REFERENCES [1] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al. TensorFlow: A system for large-scale machine learning. In USENIX Symposium on Operating Systems Design and Implementation (OSDI), 2016. [2] R. Al-Rfou, M. Pickett, J. Snaider, Y. Sung, B. Strope, and R. Kurzweil. Conversational contextual cues: The case of personalization and history for response ranking. arXiv preprint arXiv:1606.00372, 2016. [3] A. Auvolat, S. Chandar, P. Vincent, H. Larochelle, and Y. Bengio. Clustering is efficient for approximate maximum inner product search. arXiv preprint arXiv:1507.05910, 2015. [4] R. M. Gray. Vector quantization. ASSP Magazine, IEEE, 1(2):4-29, 1984. [5] R. Guo, S. Kumar, K. Choromanski, and D. Simcha. Quantization based fast inner product search. In International Conference on Artificial Intelligence and Statistics, 2016.
1705.00652#36
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00652
37
[6] H. He, K. Gimpel, and J. J. Lin. Multi-perspective sentence similarity modeling with convolutional neural networks. In Empirical Methods on Natural Language Processing (EMNLP), 2015. [7] M. Henderson, B. Thomson, and S. Young. Word-based dialog state tracking with recurrent neural networks. In Special Interest Group on Discourse and Dialogue (SIGDIAL), 2014. [8] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8), Nov. 1997. [9] P.-S. Huang, X. He, J. Gao, L. Deng, A. Acero, and L. Heck. Learning deep structured semantic models for web search using clickthrough data. 2013. [10] H. Jegou, M. Douze, and C. Schmid. Product quantization for nearest neighbor search. Pattern Analysis and Machine Intelligence, 33(1), 2011.
1705.00652#37
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00652
38
[10] H. Jegou, M. Douze, and C. Schmid. Product quantization for nearest neighbor search. Pattern Analysis and Machine Intelligence, 33(1), 2011. [11] A. Kannan, K. Kurach, S. Ravi, T. Kaufman, B. Miklos, G. Corrado, A. Tomkins, L. Lukacs, M. Ganea, P. Young, and V. Ramavajjala. Smart Reply: Automated response suggestion for email. In Conference on Knowledge Discovery and Data Mining (KDD). ACM, 2016. [12] R. Kurzweil. How to Create a Mind: The Secret of Human Thought Revealed. Penguin Books, New York, NY, USA, 2013. [13] Q. V. Le and T. Mikolov. Distributed representations of sentences and documents. In International Conference on Machine Learning (ICML), 2014. [14] G. Mesnil, Y. Dauphin, K. Yao, Y. Bengio, L. Deng, X. He, L. Heck, G. Tur, D. Hakkani-Tür, D. Yu, and G. Zweig. Using recurrent neural networks for slot filling in spoken language understanding. 2015.
1705.00652#38
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00652
39
[15] T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013. [16] M. Norouzi and D. J. Fleet. Cartesian k-means. In Conference on Computer Vision and Pattern Recognition, pages 3017-3024. IEEE, 2013. [17] J. Pennington, R. Socher, and C. D. Manning. GloVe: Global vectors for word representation. In Empirical Methods on Natural Language Processing (EMNLP), 2014. [18] P. J. Price. Evaluation of spoken language systems: The ATIS domain. In Workshop on Speech and Natural Language, HLT ’90. Association for Computational Linguistics, 1990. [19] I. V. Serban, A. Sordoni, Y. Bengio, A. Courville, and J. Pineau. Building end-to-end dialogue systems using generative hierarchical neural network models. In Conference on Artificial Intelligence. AAAI, 2016.
1705.00652#39
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00652
40
[20] N. Shazeer, R. Doherty, C. Evans, and C. Waterson. Swivel: Improving embeddings by noticing what’s missing. arXiv preprint arXiv:1602.02215, 2016. [21] F. Shen, W. Liu, S. Zhang, Y. Yang, and H. Tao Shen. Learning binary codes for maximum inner product search. In International Conference on Computer Vision. IEEE, 2015. [22] A. Shrivastava and P. Li. Asymmetric LSH (ALSH) for sublinear time maximum inner product search (MIPS). In Advances in neural information processing systems (NIPS), 2014. [23] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems (NIPS), 2014. [24] O. Vinyals, L. Kaiser, T. Koo, S. Petrov, I. Sutskever, and G. Hinton. Grammar as a foreign language. In Advances in neural information processing systems (NIPS), 2015. [25] O. Vinyals and Q. V. Le. A neural conversational model. In International Conference on Machine Learning (ICML), 2015.
1705.00652#40
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00652
41
[25] O. Vinyals and Q. V. Le. A neural conversational model. In International Conference on Machine Learning (ICML), 2015. [26] T.-H. Wen, D. Vandyke, N. Mrksic, M. Gasic, L. M. Rojas-Barahona, P.-H. Su, S. Ultes, and S. Young. A network-based end-to-end trainable task-oriented dialogue system. arXiv preprint arXiv:1604.04562, 2016. [27] J. D. Williams and G. Zweig. End-to-end LSTM-based dialog control optimized with supervised and reinforcement learning. arXiv preprint arXiv:1606.01269, 2016.
1705.00652#41
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00652
42
[28] Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, J. Klingner, A. Shah, M. Johnson, X. Liu, L. Kaiser, S. Gouws, Y. Kato, T. Kudo, H. Kazawa, K. Stevens, G. Kurian, N. Patil, W. Wang, C. Young, J. Smith, J. Riesa, A. Rudnick, O. Vinyals, G. Corrado, M. Hughes, and J. Dean. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016. [29] K. Yao, G. Zweig, M.-Y. Hwang, Y. Shi, and D. Yu. Recurrent neural networks for language understanding. In Interspeech, 2013. [30] W. Yin, H. Schütze, B. Xiang, and B. Zhou. ABCNN: Attention-based convolutional neural network for modeling sentence pairs. Transactions of the Association for Computational Linguistics, 4, 2016.
1705.00652#42
Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
http://arxiv.org/pdf/1705.00652
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
cs.CL
null
null
cs.CL
20170501
20170501
[ { "id": "1606.01269" } ]
1705.00108
0
# Semi-supervised sequence tagging with bidirectional language models Matthew E. Peters, Waleed Ammar, Chandra Bhagavatula, Russell Power Allen Institute for Artificial Intelligence {matthewp,waleeda,chandrab,russellp}@allenai.org # Abstract Pre-trained word embeddings learned from unlabeled text have become a standard component of neural network architectures for NLP tasks. However, in most cases, the recurrent network that operates on word-level representations to produce context sensitive representations is trained on relatively little labeled data. In this paper, we demonstrate a general semi-supervised approach for adding pre-trained context embeddings from bidirectional language models to NLP systems and apply it to sequence labeling tasks. We evaluate our model on two standard datasets for named entity recognition (NER) and chunking, and in both cases achieve state of the art results, surpassing previous systems that use other forms of transfer or joint learning with additional labeled data and task specific gazetteers.
1705.00108#0
Semi-supervised sequence tagging with bidirectional language models
Pre-trained word embeddings learned from unlabeled text have become a standard component of neural network architectures for NLP tasks. However, in most cases, the recurrent network that operates on word-level representations to produce context sensitive representations is trained on relatively little labeled data. In this paper, we demonstrate a general semi-supervised approach for adding pre-trained context embeddings from bidirectional language models to NLP systems and apply it to sequence labeling tasks. We evaluate our model on two standard datasets for named entity recognition (NER) and chunking, and in both cases achieve state of the art results, surpassing previous systems that use other forms of transfer or joint learning with additional labeled data and task specific gazetteers.
http://arxiv.org/pdf/1705.00108
Matthew E. Peters, Waleed Ammar, Chandra Bhagavatula, Russell Power
cs.CL
To appear in ACL 2017
null
cs.CL
20170429
20170429
[]
1705.00108
1
current neural network (RNN) that encodes token sequences into a context sensitive representation before making token specific predictions (Yang et al., 2017; Ma and Hovy, 2016; Lample et al., 2016; Hashimoto et al., 2016). Although the token representation is initialized with pre-trained embeddings, the parameters of the bidirectional RNN are typically learned only on labeled data. Previous work has explored methods for jointly learning the bidirectional RNN with supplemental labeled data from other tasks (e.g., Søgaard and Goldberg, 2016; Yang et al., 2017). In this paper, we explore an alternate semi-supervised approach which does not require additional labeled data. We use a neural language model (LM), pre-trained on a large, unlabeled corpus, to compute an encoding of the context at each position in the sequence (hereafter an LM embedding) and use it in the supervised sequence tagging model. Since the LM embeddings are used to compute the probability of future words in a neural LM, they are likely to encode both the semantic and syntactic roles of words in context. # Introduction
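As a rough sketch of the idea in this chunk, the snippet below augments each token's representation with a context-sensitive LM embedding before the supervised tagger consumes it. The embedding functions are placeholders, and simple concatenation is an assumption made for illustration rather than the paper's exact architecture.

```python
import numpy as np

# word_embed(token) -> (d_word,) pre-trained word vector (placeholder)
# lm_embed(tokens)  -> (seq_len, d_lm) context embeddings from a frozen,
#                      pre-trained language model (placeholder)
def token_representations(tokens, word_embed, lm_embed):
    lm_context = lm_embed(tokens)          # context encoding at each position
    reps = [np.concatenate([word_embed(tok), lm_context[t]])
            for t, tok in enumerate(tokens)]
    return np.stack(reps)                  # input to the sequence tagger
```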
1705.00108#1
Semi-supervised sequence tagging with bidirectional language models
Pre-trained word embeddings learned from unlabeled text have become a standard component of neural network architectures for NLP tasks. However, in most cases, the recurrent network that operates on word-level representations to produce context sensitive representations is trained on relatively little labeled data. In this paper, we demonstrate a general semi-supervised approach for adding pre-trained context embeddings from bidirectional language models to NLP systems and apply it to sequence labeling tasks. We evaluate our model on two standard datasets for named entity recognition (NER) and chunking, and in both cases achieve state of the art results, surpassing previous systems that use other forms of transfer or joint learning with additional labeled data and task specific gazetteers.
http://arxiv.org/pdf/1705.00108
Matthew E. Peters, Waleed Ammar, Chandra Bhagavatula, Russell Power
cs.CL
To appear in ACL 2017
null
cs.CL
20170429
20170429
[]
1705.00108
2
# Introduction Due to their simplicity and efficacy, pre-trained word embeddings have become ubiquitous in NLP systems. Many prior studies have shown that they capture useful semantic and syntactic information (Mikolov et al., 2013; Pennington et al., 2014) and including them in NLP systems has been shown to be enormously helpful for a variety of downstream tasks (Collobert et al., 2011). However, in many NLP tasks it is essential to represent not just the meaning of a word, but also the word in context. For example, in the two phrases “A Central Bank spokesman” and “The Central African Republic”, the word ‘Central’ is used as part of both an Organization and a Location. Accordingly, current state of the art sequence tagging models typically include a bidirectional recurrent neural network. Our main contribution is to show that the context sensitive representation captured in the LM embeddings is useful in the supervised sequence tagging setting. When we include the LM embeddings in our system, overall performance increases from 90.87% to 91.93% F1 for the CoNLL 2003 NER task, a more than 1% absolute F1 increase, and a substantial improvement over the previous state of the art. We also establish a new state of the art result (96.37% F1) for the CoNLL 2000 Chunking task.
1705.00108#2
Semi-supervised sequence tagging with bidirectional language models
Pre-trained word embeddings learned from unlabeled text have become a standard component of neural network architectures for NLP tasks. However, in most cases, the recurrent network that operates on word-level representations to produce context sensitive representations is trained on relatively little labeled data. In this paper, we demonstrate a general semi-supervised approach for adding pre-trained context embeddings from bidirectional language models to NLP systems and apply it to sequence labeling tasks. We evaluate our model on two standard datasets for named entity recognition (NER) and chunking, and in both cases achieve state of the art results, surpassing previous systems that use other forms of transfer or joint learning with additional labeled data and task specific gazetteers.
http://arxiv.org/pdf/1705.00108
Matthew E. Peters, Waleed Ammar, Chandra Bhagavatula, Russell Power
cs.CL
To appear in ACL 2017
null
cs.CL
20170429
20170429
[]