doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1609.07843 | 36 | Finally, the pointer component can be seen pointing to words at the very end of the 100 word window (position 97), a far longer horizon than the 35 steps that most language models truncate their backpropagation training to. This illustrates why the gating function must be integrated into the pointer component. If the gating function could only use the RNN hidden state, it would need to be wary of words that were near the tail of the pointer, especially if it was not able to accurately track exactly how long it
Pointer Sentinel Mixture Models | 1609.07843#36 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
1609.08144 | 36 | c'^i_t, m^i_t = LSTM_i(c^i_{t-1}, m^i_{t-1}, x^{i-1}_t; W^i)
c^i_t = max(-δ, min(δ, c'^i_t))
x'^i_t = m^i_t + x^{i-1}_t
x^i_t = max(-δ, min(δ, x'^i_t))
c'^{i+1}_t, m^{i+1}_t = LSTM_{i+1}(c^{i+1}_{t-1}, m^{i+1}_{t-1}, x^i_t; W^{i+1})
c^{i+1}_t = max(-δ, min(δ, c'^{i+1}_t))    (10)
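For concreteness, a minimal NumPy sketch of this clipped residual stacking, treating lstm_step as an opaque per-layer cell (all helper names here are ours, not the paper's), could look like:

```python
import numpy as np

def clip(v, delta):
    # Saturate values into [-delta, delta], as applied to c and x in equation 10.
    return np.maximum(-delta, np.minimum(delta, v))

def stacked_residual_step(lstm_step, weights, c_prev, m_prev, x_in, delta=1.0):
    # One time step through a stack of residual LSTM layers.
    # lstm_step(c_prev, m_prev, x, W) -> (c_raw, m) stands in for LSTM_i;
    # weights is a list of per-layer parameters W^i.
    x = x_in
    c_out, m_out = [], []
    for W, c_p, m_p in zip(weights, c_prev, m_prev):
        c_raw, m = lstm_step(c_p, m_p, x, W)
        c = clip(c_raw, delta)   # clip the cell accumulator
        x = clip(m + x, delta)   # residual connection, then clip the layer-input accumulator
        c_out.append(c)
        m_out.append(m)
    return c_out, m_out, x
```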
Let us expand LSTMi in equation 10 to include the internal gating logic. For brevity, we drop all the superscripts i.
W = [W_1, W_2, W_3, W_4, W_5, W_6, W_7, W_8]
i_t = sigmoid(W_1 x_t + W_2 m_{t-1})
i'_t = tanh(W_3 x_t + W_4 m_{t-1})
f_t = sigmoid(W_5 x_t + W_6 m_{t-1})
o_t = sigmoid(W_7 x_t + W_8 m_{t-1})
c_t = c_{t-1} ⊙ f_t + i'_t ⊙ i_t
m_t = c_t ⊙ o_t    (11) | 1609.08144#36 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
1609.07843 | 37 | Model Parameters Validation Test Mikolov & Zweig (2012) - KN-5 Mikolov & Zweig (2012) - KN5 + cache Mikolov & Zweig (2012) - RNN Mikolov & Zweig (2012) - RNN-LDA Mikolov & Zweig (2012) - RNN-LDA + KN-5 + cache Pascanu et al. (2013a) - Deep RNN Cheng et al. (2014) - Sum-Prod Net Zaremba et al. (2014) - LSTM (medium) Zaremba et al. (2014) - LSTM (large) Gal (2015) - Variational LSTM (medium, untied) Gal (2015) - Variational LSTM (medium, untied, MC) Gal (2015) - Variational LSTM (large, untied) Gal (2015) - Variational LSTM (large, untied, MC) Kim et al. (2016) - CharCNN Zilly et al. (2016) - Variational RHN 2M‡ 2M‡ 6M‡ 7M‡ 9M‡ 6M 5M‡ 20M 66M 20M 20M 66M 66M 19M 32M – – – – – – – 86.2 82.2 | 1609.07843#37 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
1609.08144 | 37 | When doing quantized inference, we replace all the floating point operations in equations 10 and 11 with fixed-point integer operations with either 8-bit or 16-bit resolution. The weight matrix W above is represented using an 8-bit integer matrix WQ and a float vector s, as shown below:
s_i = max(abs(W[i, :]))
WQ[i, j] = round(W[i, j] / s_i × 127.0)    (12)
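Read literally, equation 12 is a per-row 8-bit quantization. A small NumPy sketch (the function names and the guard against all-zero rows are ours):

```python
import numpy as np

def quantize_weight_rows(W):
    # Per-row scale s_i = max |W[i, :]|; each row is mapped into the signed 8-bit range.
    s = np.max(np.abs(W), axis=1)
    s = np.maximum(s, 1e-8)  # guard against all-zero rows (an added assumption)
    WQ = np.round(W / s[:, None] * 127.0).astype(np.int8)
    return WQ, s

def dequantize_weight_rows(WQ, s):
    # Approximate reconstruction of W, useful for reasoning about quantization error.
    return WQ.astype(np.float32) * (s[:, None] / 127.0)
```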
All accumulator values (c^i_t and x^i_t) are represented using 16-bit integers representing the range [-6, 6]. All matrix multiplications (e.g., W_1 x_t, W_2 m_{t-1}, etc.) in equation 11 are done using 8-bit integer multiplication accumulated into larger accumulators. All other operations, including all the activations (sigmoid, tanh) and elementwise operations (⊙, +), are done using 16-bit integer operations.
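As an illustration of what such a product looks like with 8-bit operands and a wider accumulator (a sketch, not the actual TPU kernel; the input quantization scheme below is an assumption):

```python
import numpy as np

def quantized_matvec(WQ, s, x):
    # WQ: int8 weight matrix from quantize_weight_rows; s: its per-row float scales;
    # x: a float input vector assumed to lie in [-1, 1].
    xQ = np.round(np.clip(x, -1.0, 1.0) * 127.0).astype(np.int8)  # assumed input quantization
    acc = WQ.astype(np.int32) @ xQ.astype(np.int32)               # 8-bit products, wide accumulator
    return acc.astype(np.float32) * (s / 127.0) / 127.0           # undo both quantization scales
```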
We now turn our attention to the log-linear softmax layer. During training, given the decoder RNN network output yt, we compute the probability vector pt over all candidate output symbols as follows:
v_t = W_s ∗ y_t
v'_t = max(-γ, min(γ, v_t))
p_t = softmax(v'_t)    (13) | 1609.08144#37 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
1609.08144 | 38 | v_t = W_s ∗ y_t
v'_t = max(-γ, min(γ, v_t))
p_t = softmax(v'_t)    (13)
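Numerically, equation 13 is just a softmax over clipped logits; a minimal NumPy sketch (the max-subtraction is a standard stabilization we add, not part of the equation):

```python
import numpy as np

def clipped_softmax(Ws, y_t, gamma=25.0):
    v = Ws @ y_t                   # raw logits v_t, one entry per target symbol
    v = np.clip(v, -gamma, gamma)  # v'_t: clip logits into [-gamma, gamma]
    v = v - v.max()                # numerical stabilization
    e = np.exp(v)
    return e / e.sum()             # probability vector p_t
```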
In equation 13, W_s is the weight matrix for the linear layer, which has the same number of rows as the number of symbols in the target vocabulary, with each row corresponding to one unique target symbol. v represents the raw logits, which are first clipped to be between -γ and γ and then normalized into a probability vector p. Input y_t is guaranteed to be between -δ and δ due to the quantization scheme we applied to the decoder RNN. The clipping range γ for the logits v is determined empirically, and in our case, it is set to 25. In quantized inference, the weight matrix W_s is quantized into 8 bits as in equation 12, and the matrix multiplication is done using 8 bit arithmetic. The calculations within the softmax function and the attention model are not quantized during inference. | 1609.08144#38 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
1609.07843 | 39 | Table 2. Single model perplexity on validation and test sets for the Penn Treebank language modeling task. For our models and the models of Zaremba et al. (2014) and Gal (2015), medium and large refer to a 650 and 1500 units two layer LSTM respectively. The medium pointer sentinel-LSTM model achieves lower perplexity than the large LSTM model of Gal (2015) while using a third of the parameters and without using the computationally expensive Monte Carlo (MC) dropout averaging at test time. Parameter numbers with ‡ are estimates based upon our understanding of the model and with reference to Kim et al. (2016).
Model                                            Parameters   Validation   Test
Variational LSTM implementation from Gal (2015)  20M          101.7        96.3
Zoneout + Variational LSTM                       20M          108.7        100.9
Pointer Sentinel-LSTM                            21M          84.8         80.8
Table 3. Single model perplexity on validation and test sets for the WikiText-2 language modeling task. All compared models use a two layer LSTM with a hidden size of 650 and the same hyperparameters as the best performing Penn Treebank model.
was since seeing a word. By integrating the gating function into the pointer component, we avoid the RNN hidden state having to maintain this intensive bookkeeping.
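To make the mixture concrete, here is a minimal NumPy sketch of a pointer sentinel mixture step as described in the paper (variable names and shapes are ours, not the authors' code):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def pointer_sentinel_mixture(ptr_scores, sentinel_score, p_vocab, context_ids):
    # ptr_scores: unnormalized scores over the last L context positions;
    # sentinel_score: scalar score for the sentinel; p_vocab: softmax over the vocabulary;
    # context_ids: integer vocabulary ids of the L words in the window.
    joint = softmax(np.append(ptr_scores, sentinel_score))
    p_ptr, gate = joint[:-1], joint[-1]   # gate = 1 means the RNN softmax is used exclusively
    p = gate * p_vocab
    np.add.at(p, context_ids, p_ptr)      # add pointer mass to the words present in the window
    return p, gate
```

Because the gate is produced inside the same normalization as the pointer scores, probability mass can shift away from the pointer as soon as the needed word falls out of the window, without the RNN state having to track window positions itself.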
# 7. Conclusion
# References | 1609.07843#39 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
1609.08144 | 39 | It is worth emphasizing that during training of the model we use full-precision floating point numbers. The only constraints we add to the model during training are the clipping of the RNN accumulator values into [-δ, δ] and softmax logits into [-γ, γ]. γ is fixed to be at 25.0, while the value for δ is gradually annealed from a generous bound of δ = 8.0 at the beginning of training, to a rather stringent bound of δ = 1.0 towards the end of training. At inference time, δ is fixed at 1.0. Those additional constraints do not degrade model convergence nor the decoding quality of the model when it has converged. In Figure 4, we compare the loss vs. steps for an unconstrained model (the blue curve) and a constrained model (the red curve) on WMT'14 English-to-French. We can see that the loss for the constrained model is slightly better, possibly due to regularization roles those constraints play. | 1609.08144#39 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
1609.07843 | 40 | was since seeing a word. By integrating the gating function into the pointer component, we avoid the RNN hidden state having to maintain this intensive bookkeeping.
# 7. Conclusion
# References
Adi, Yossi, Kermany, Einat, Belinkov, Yonatan, Lavi, Ofer, and Goldberg, Yoav. Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks. arXiv preprint arXiv:1608.04207, 2016.
We introduced the pointer sentinel mixture model and the WikiText language modeling dataset. This model achieves state of the art results in language modeling over the Penn Treebank while using few additional parameters and little additional computational complexity at prediction time.
We have also motivated the need to move from Penn Treebank to a new language modeling dataset for long range dependencies, providing WikiText-2 and WikiText-103 as potential options. We hope this new dataset can serve as a platform to improve handling of rare words and the usage of long term dependencies in language modeling.
Ahn, Sungjin, Choi, Heeyoul, Pärnamaa, Tanel, and Bengio, Yoshua. A Neural Knowledge Language Model. CoRR, abs/1608.00318, 2016. | 1609.07843#40 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
1609.08144 | 40 | Our solution strikes a good balance between efficiency and accuracy. Since the computationally expensive operations (the matrix multiplications) are done using 8-bit integer operations, our quantized inference is quite efficient. Also, since error-sensitive accumulator values are stored using 16-bit integers, our solution is very accurate and is robust to quantization errors.
In Table 1 we compare the inference speed and quality when decoding the WMT'14 English-to-French development set (a concatenation of newstest2012 and newstest2013 test sets for a total of 6003 sentences) on
[Figure 4 plot: log perplexity (y-axis) versus training steps (x-axis) for normal training and quantized training; see the caption below.]
Figure 4: Log perplexity vs. steps for normal (non-quantized) training and quantization-aware training on WMT'14 English to French during maximum likelihood training. Notice the training losses are similar, with the quantization-aware loss being slightly better. Our conjecture for quantization-aware training being slightly better is that the clipping constraints act as additional regularization which improves the model quality. | 1609.08144#40 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
1609.07843 | 41 | Bahdanau, Dzmitry, Cho, Kyunghyun, and Bengio, Yoshua. Neural Machine Translation by Jointly Learning to Align and Translate. In ICLR, 2015.
Chelba, Ciprian, Mikolov, Tomas, Schuster, Mike, Ge, Qi, Brants, Thorsten, Koehn, Phillipp, and Robinson, Tony. One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling. arXiv preprint arXiv:1312.3005, 2013.
Cheng, Jianpeng, Dong, Li, and Lapata, Mirella. Long Short-Term Memory-Networks for Machine Reading. CoRR, abs/1601.06733, 2016.
Cheng, Wei-Chen, Kok, Stanley, Pham, Hoai Vu, Chieu, Hai Leong, and Chai, Kian Ming Adam. Language Modeling with Sum-Product Networks. In INTERSPEECH, 2014.
Marcus, Mitchell P., Santorini, Beatrice, and Marcinkiewicz, Mary Ann. Building a Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics, 19:313–330, 1993. | 1609.07843#41 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
1609.08144 | 41 | CPU, GPU and Google's Tensor Processing Unit (TPU), respectively.[1] The model used here for comparison is trained with quantization constraints on the ML objective only (i.e., without reinforcement learning based model refinement). When the model is decoded on CPU and GPU, it is not quantized and all operations are done using full-precision floats. When it is decoded on TPU, certain operations, such as embedding lookup and attention module, remain on the CPU, and all other quantized operations are off-loaded to the TPU. In all cases, decoding is done on a single machine with two Intel Haswell CPUs, which consists in total of 88 CPU cores (hyperthreads). The machine is equipped with an NVIDIA GPU (Tesla K80) for the experiment with GPU or a single Google TPU for the experiment with TPU.
Table 1 shows that decoding using reduced precision arithmetic on the TPU suffers a very minimal loss of 0.0072 on log perplexity, and no loss on BLEU at all. This result matches previous work reporting that quantizing convolutional neural network models can retain most of the model quality. | 1609.08144#41 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
1609.07843 | 42 | Mikolov, Tomas and Zweig, Geoffrey. Context dependent recurrent neural network language model. In SLT, 2012.
Gal, Yarin. A Theoretically Grounded Application of Dropout in Recurrent Neural Networks. arXiv preprint arXiv:1512.05287, 2015.
Mikolov, Tomas, Karafiát, Martin, Burget, Lukáš, Černocký, Jan, and Khudanpur, Sanjeev. Recurrent neural network based language model. In INTERSPEECH, 2010.
Gu, Jiatao, Lu, Zhengdong, Li, Hang, and Li, Victor O. K. Incorporating Copying Mechanism in Sequence-to-Sequence Learning. CoRR, abs/1603.06393, 2016.
Pascanu, Razvan, Gülçehre, Çağlar, Cho, Kyunghyun, and Bengio, Yoshua. How to Construct Deep Recurrent Neural Networks. CoRR, abs/1312.6026, 2013a. | 1609.07843#42 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
1609.08144 | 42 | Table 1 also shows that decoding our model on CPU is actually 2.3 times faster than on GPU. Firstly, our dual-CPU host machine offers a theoretical peak FLOP performance which is more than two thirds that of the GPU. Secondly, the beam search algorithm forces the decoder to incur a non-trivial amount of data transfer between the host and the GPU at every decoding step. Hence, our current decoder implementation
[1] https://cloudplatform.googleblog.com/2016/05/Google-supercharges-machine-learning-tasks-with-custom-chip.html
is not fully utilizing the computation capacities that a GPU can theoretically offer during inference. Finally, Table 1 shows that decoding on TPUs is 3.4 times faster than decoding on CPUs, demonstrating that quantized arithmetic is much faster on TPUs than on both CPUs and GPUs.
Table 1: Model inference on CPU, GPU and TPU. The model used here for comparison is trained with the ML objective only with quantization constraints. Results are obtained by decoding the WMT En→Fr development set on CPU, GPU and TPU respectively.
        BLEU    Log Perplexity    Decoding time (s)
CPU     31.20   1.4553            1322
GPU     31.20   1.4553            3028
TPU     31.21   1.4626            384 | 1609.08144#42 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
1609.07843 | 43 | Gülçehre, Çağlar, Ahn, Sungjin, Nallapati, Ramesh, Zhou, Bowen, and Bengio, Yoshua. Pointing the Unknown Words. arXiv preprint arXiv:1603.08148, 2016.
Pascanu, Razvan, Mikolov, Tomas, and Bengio, Yoshua. On the difficulty of training recurrent neural networks. In ICML, 2013b.
Hochreiter, Sepp and Schmidhuber, Jürgen. Long Short-Term Memory. Neural Computation, 9(8):1735–1780, Nov 1997. ISSN 0899-7667.
Rosenfeld, Roni. A Maximum Entropy Approach to Adaptive Statistical Language Modeling. 1996.
Kadlec, Rudolf, Schmid, Martin, Bajgar, Ondrej, and Kleindienst, Jan. Text Understanding with the Attention Sum Reader Network. arXiv preprint arXiv:1603.01547, 2016.
Kim, Yoon, Jernite, Yacine, Sontag, David, and Rush, Alexander M. Character-aware neural language models. CoRR, abs/1508.06615, 2016. | 1609.07843#43 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
1609.07843 | 44 | Koehn, Philipp, Hoang, Hieu, Birch, Alexandra, Callison-Burch, Chris, Federico, Marcello, Bertoldi, Nicola, Cowan, Brooke, Shen, Wade, Moran, Christine, Zens, Richard, Dyer, Chris, Bojar, Ondřej, Constantin, Alexandra, and Herbst, Evan. Moses: Open Source Toolkit for Statistical Machine Translation. In ACL, 2007.
Sukhbaatar, Sainbayar, Szlam, Arthur, Weston, Jason, and Fergus, Rob. End-To-End Memory Networks. In NIPS, 2015.
Vinyals, Oriol, Fortunato, Meire, and Jaitly, Navdeep. Pointer networks. In Advances in Neural Information Processing Systems, pp. 2692–2700, 2015.
Xiong, Caiming, Merity, Stephen, and Socher, Richard. Dynamic Memory Networks for Visual and Textual Question Answering. In ICML, 2016.
Zaremba, Wojciech, Sutskever, Ilya, and Vinyals, Oriol. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014. | 1609.07843#44 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
1609.08144 | 44 | # 7 Decoder
We use beam search during decoding to find the sequence Y that maximizes a score function s(Y, X) given a trained model. We introduce two important refinements to the pure max-probability based beam search algorithm: a coverage penalty [42] and length normalization. With length normalization, we aim to account for the fact that we have to compare hypotheses of different length. Without some form of length normalization, regular beam search will favor shorter results over longer ones on average, since a negative log-probability is added at each step, yielding lower (more negative) scores for longer sentences. We first tried to simply divide by the length to normalize. We then improved on that original heuristic by dividing by length^α, with 0 < α < 1, where α is optimized on a development set (α ∈ [0.6, 0.7] was usually found to be best). Eventually we designed the empirically-better scoring function below, which also includes a coverage penalty to favor translations that fully cover the source sentence according to the attention module.
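Since the exact scoring function (equation 14) falls outside this excerpt, the sketch below combines the length^α normalization described above with a β-weighted, clipped log-coverage term over the attention probabilities; treat the precise functional form as an assumption:

```python
import numpy as np

def hypothesis_score(log_prob, attention, alpha=0.6, beta=0.2):
    # log_prob:  total log P(Y|X) of a finished hypothesis.
    # attention: array of shape (target_len, source_len) holding the attention
    #            probabilities p_{i,j} accumulated while decoding.
    target_len = attention.shape[0]
    lp = target_len ** alpha                        # length penalty (simpler form from the text)
    coverage = attention.sum(axis=0)                # total attention received by each source word
    cp = beta * np.sum(np.log(np.minimum(coverage, 1.0)))
    return log_prob / lp + cp
```

With alpha = 0 and beta = 0 this reduces to scoring by raw log-probability, matching the fallback to pure beam search mentioned below.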
More concretely, the scoring function s(Y, X) that we employ to rank candidate translations is defined as follows: | 1609.08144#44 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
1609.07843 | 45 | Krueger, David, Maharaj, Tegan, Kramár, János, Pezeshki, Mohammad, Ballas, Nicolas, Ke, Nan Rosemary, Goyal, Anirudh, Bengio, Yoshua, Larochelle, Hugo, Courville, Aaron, et al. Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations. arXiv preprint arXiv:1606.01305, 2016.
Zilly, Julian Georg, Srivastava, Rupesh Kumar, Koutník, Jan, and Schmidhuber, Jürgen. Recurrent Highway Networks. arXiv preprint arXiv:1607.03474, 2016.
Kumar, Ankit, Irsoy, Ozan, Ondruska, Peter, Iyyer, Mohit, Bradbury, James, Gulrajani, Ishaan, Zhong, Victor, Paulus, Romain, and Socher, Richard. Ask me anything: Dynamic memory networks for natural language processing. In ICML, 2016. | 1609.07843#45 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
1609.07843 | 46 | Ling, Wang, Grefenstette, Edward, Hermann, Karl Moritz, Kočiský, Tomáš, Senior, Andrew, Wang, Fumin, and Blunsom, Phil. Latent Predictor Networks for Code Generation. CoRR, abs/1603.06744, 2016.
# Supplementary material
# Pointer usage on the Penn Treebank
For a qualitative analysis, we visualize how the pointer component is used within the pointer sentinel mixture model. The gate refers to the result of the gating function, with 1 indicating the RNN component is exclusively used whilst 0 indicates the pointer component is exclusively used. We begin with predictions that are using the RNN component primarily and move to ones that use the pointer component primarily.
Figure 5. In predicting the fall season has been a good one especially for those retailers, the pointer component suggests many words from the historical window that would fit - retailers, investments, chains, and institutions. The gate is still primarily weighted towards the RNN component however.
Figure 6. In predicting the national cancer institute also projected that overall u.s. mortality, the pointer component is focused on mortality and rates, both of which would fit. The gate is still primarily weighted towards the RNN component. | 1609.07843#46 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
1609.08144 | 46 | where p_{i,j} is the attention probability of the j-th target word y_j on the i-th source word x_i. By construction (equation 4), Σ_{i=0}^{|X|} p_{i,j} is equal to 1. Parameters α and β control the strength of the length normalization and the coverage penalty. When α = 0 and β = 0, our decoder falls back to pure beam search by probability. During beam search, we typically keep 8-12 hypotheses but we find that using fewer (4 or 2) has only slight negative effects on BLEU scores. Besides pruning the number of considered hypotheses, two other forms of pruning are used. Firstly, at each step, we only consider tokens that have local scores that are not more than beamsize below the best token for this step. Secondly, after a normalized best score has been found according to equation 14, we prune all hypotheses that are more than beamsize below the best normalized score so far. The latter type of pruning only applies to full hypotheses because it compares scores in the normalized space, which is only available when a hypothesis ends. This latter form of pruning also has the effect that very quickly no more hypotheses will be generated once a sufficiently good hypothesis has been found, so the search will end quickly. The pruning speeds up search by 30%–40% when run on CPUs
| 1609.08144#46 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
1609.07843 | 47 | Figure 7. In predicting people do n't seem to be unhappy with it he said, the pointer component correctly selects said and is almost equally weighted with the RNN component. This is surprising given how frequently the word said is used within the Penn Treebank.
Predicting billion using 100 words of history (gate = 0.44)
Figure 8. For predicting the federal government has had to pump in $ N billion, the pointer component focuses on the recent usage of billion with highly similar context. The pointer component is also relied upon more heavily than the RNN component - surprising given the frequency of billion within the Penn Treebank and that the usage was quite recent.
Predicting noriega using 100 words of history (gate = 0.12)
Figure 9. For predicting (unk) 's ghost sometimes runs through the e ring dressed like gen. noriega, the pointer component reaches 97 timesteps back to retrieve gen. douglas. Unfortunately this prediction is incorrect but without additional context a human would have guessed the same word. This additionally illustrates why the gating function must be integrated into the pointer component. The named entity gen. douglas would have fallen out of the window in only four more timesteps, a fact that the RNN hidden state would not be able to accurately retain for almost 100 timesteps.
1609.08144 | 47 |
compared to not pruning (where we simply stop decoding after a predetermined maximum output length of twice the source length). Typically we use beamsize = 3.0, unless otherwise noted.
To improve throughput during decoding we can put many sentences (typically up to 35) of similar length into a batch and decode all of those in parallel to make use of available hardware optimized for parallel computations. In this case the beam search only finishes if all hypotheses for all sentences in the batch are out of beam, which is slightly less efficient theoretically, but in practice is of negligible additional computational cost.
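To make the beamsize-based pruning concrete, here is a minimal Python sketch in which beamsize acts as a log-probability margin below the best hypothesis, following the beam search description in the paper; the hypothesis representation is our own and this is not the GNMT decoder itself:

```python
def prune_hypotheses(hypotheses, beamsize=3.0):
    """Keep only hypotheses whose log-probability score lies within
    `beamsize` of the best hypothesis seen so far (a sketch of the
    margin-based pruning referred to in the text)."""
    if not hypotheses:
        return []
    best = max(score for score, _ in hypotheses)
    return [(score, tokens) for score, tokens in hypotheses
            if score >= best - beamsize]

# Example: with beamsize = 3.0 the weakest hypothesis is dropped.
beam = [(-1.2, ["le", "chat"]), (-2.9, ["un", "chat"]), (-5.1, ["ce", "chat"])]
print(prune_hypotheses(beam))  # the -5.1 hypothesis is pruned
```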
BLEU      α=0.0   α=0.2   α=0.4   α=0.6   α=0.8   α=1.0
β=0.0      30.3    31.4    31.4    31.4    31.4    31.4
β=0.2      30.7    31.4    31.4    31.4    31.4    31.3
β=0.4      30.9    31.4    31.4    31.3    31.2    31.2
β=0.6      31.1    31.3    31.1    30.9    30.8    30.6
β=0.8      31.2    30.8    30.5    30.1    29.8    29.4
β=1.0      31.1    30.3    29.6    28.9    28.1    27.2
1609.07843 | 48 | Predicting iverson using 100 words of history (gate = 0.03)
Figure 10. For predicting mr. iverson, the pointer component has learned the ability to point to the last name of the most recent named entity. The named entity also occurs 45 timesteps ago, which is longer than the 35 steps that most language models truncate their backpropagation to.
Predicting rosenthal using 100 words of history (gate = 0.00)
Figure 11. For predicting mr. rosenthal, the pointer is almost exclusively used and reaches back 65 timesteps to identify bruce rosenthal as the person speaking, correctly only selecting the last name.
Predicting integrated using 100 words of history (gate = 0.00)
Figure 12. For predicting in composite trading on the new york stock exchange yesterday integrated, the company Integrated and the (unk) token are primarily attended to by the pointer component, with nearly the full prediction being determined by the pointer component.
# Zipfian plot over WikiText-103
[Figure: Zipf plot for WikiText, plotting absolute frequency of token against frequency rank of token on log-log axes; labeled tokens range from the most frequent (the) to rare words such as servitude, Schmerber, and Goddet.]
1609.08144 | 48 | Table 2: WMT'14 En→Fr BLEU score with respect to different values of α and β. The model in this experiment was trained using ML without RL refinement. A single WMT En→Fr model achieves a BLEU score of 30.3 on the development set when the beam search scoring function is purely based on the sequence probability (i.e., both α and β are 0). Slightly larger α and β values improve the BLEU score by up to +1.1 (α = 0.2, β = 0.2), with a wide range of α and β values giving results very close to the best BLEU scores.
Table 2 shows the impact of α and β on the BLEU score when decoding the WMT'14 English-to-French development set. The model used here is trained using the ML objective only (without RL refinement). As can be seen from the results, having some length normalization and coverage penalty improves the BLEU score considerably (from 30.3 to 31.4).
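For concreteness, the scoring function whose α and β are being swept in Table 2 can be sketched as below, assuming the length-normalization and coverage-penalty forms given in the paper's beam search section (lp(Y) = (5 + |Y|)^α / (5 + 1)^α and a β-weighted coverage term over the attention matrix); the toy attention values are illustrative only:

```python
import math

def length_penalty(output_len, alpha):
    # lp(Y) = (5 + |Y|)^alpha / (5 + 1)^alpha
    return ((5.0 + output_len) ** alpha) / ((5.0 + 1.0) ** alpha)

def coverage_penalty(attention, beta):
    # attention[i][j]: attention placed on source word i when emitting target word j
    # cp(X; Y) = beta * sum_i log(min(sum_j p_ij, 1.0))
    return beta * sum(math.log(min(sum(row), 1.0)) for row in attention)

def beam_score(log_prob, output_len, attention, alpha=0.2, beta=0.2):
    """Length-normalized, coverage-penalized score s(Y, X) used to rank hypotheses."""
    return log_prob / length_penalty(output_len, alpha) + coverage_penalty(attention, beta)

# Toy example: 3 source words, 4 target words.
attn = [[0.5, 0.3, 0.1, 0.1],
        [0.3, 0.4, 0.2, 0.1],
        [0.2, 0.3, 0.7, 0.8]]
print(beam_score(log_prob=-6.0, output_len=4, attention=attn))
```

Setting alpha = beta = 0 recovers the plain sequence log-probability, i.e., the 30.3 BLEU corner of Table 2.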
1609.08144 | 49 | We find that length normalization (α) and coverage penalty (β) are less effective for models with RL refinement. Table 3 summarizes our results. This is understandable, as during RL refinement, the models already learn to pay attention to the full source sentence to not under-translate or over-translate, which would result in a penalty on the BLEU (or GLEU) scores.
BLEU      α=0.0   α=0.2   α=0.4   α=0.6   α=0.8   α=1.0
β=0.0     0.320   0.322   0.322   0.322   0.322   0.322
β=0.2     0.321   0.322   0.322   0.322   0.322   0.321
β=0.4     0.322   0.322   0.322   0.321   0.321   0.321
β=0.6     0.322   0.322   0.321   0.321   0.321   0.320
β=0.8     0.322   0.321   0.321   0.319   0.316   0.313
β=1.0     0.322   0.321   0.316   0.309   0.302   0.295
1609.08144 | 50 | Table 3: WMT En→Fr BLEU score with respect to different values of α and β. The model used here is trained using ML, then refined with RL. Compared to the results in Table 2, coverage penalty and length normalization appear to be less effective for models after RL-based model refinements. Results are obtained on the development set.
We found that the optimal α and β vary slightly for different models. Based on tuning results using internal Google datasets, we use α = 0.2 and β = 0.2 in our experiments, unless noted otherwise.
# 8 Experiments and Results
In this section, we present our experimental results on two publicly available corpora used extensively as benchmarks for Neural Machine Translation systems: WMT'14 English-to-French (WMT En→Fr) and English-to-German (WMT En→De). On these two datasets, we benchmark GNMT models with word-based,
character-based, and wordpiece-based vocabularies. We also present the improved accuracy of our models after fine-tuning with RL and model ensembling. Our main objective with these datasets is to show the contributions of various components in our implementation, in particular the wordpiece model, RL model refinement, and model ensembling.
1609.08144 | 51 | In addition to testing on publicly available corpora, we also test GNMT on Google's translation production corpora, which are two to three decimal orders of magnitude bigger than the WMT corpora for a given language pair. We compare the accuracy of our model against human accuracy and the best Phrase-Based Machine Translation (PBMT) production system for Google Translate.
In all experiments, our models consist of 8 encoder layers and 8 decoder layers. (Since the bottom encoder layer is actually bi-directional, in total there are 9 logically distinct LSTM passes in the encoder.) The attention network is a simple feedforward network with one hidden layer of 1024 nodes. All of the models use 1024 LSTM nodes per encoder and decoder layer.
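Purely as a summary of the hyper-parameters listed above, a hypothetical configuration snippet (the field names are ours, not from the GNMT codebase):

```python
# Hypothetical summary of the architecture described in the text; names are ours.
GNMT_CONFIG = {
    "encoder_layers": 8,             # bottom layer is bi-directional, so the encoder
                                     # performs 9 logically distinct LSTM passes
    "decoder_layers": 8,
    "lstm_nodes_per_layer": 1024,    # per encoder and decoder layer
    "attention_hidden_layers": 1,    # simple feedforward attention network
    "attention_hidden_nodes": 1024,
}
```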
# 8.1 Datasets
We evaluate our model on the WMT En→Fr dataset, the WMT En→De dataset, as well as many Google-internal production datasets. On WMT En→Fr, the training set contains 36M sentence pairs. On WMT En→De, the training set contains 5M sentence pairs. In both cases, we use newstest2014 as the test sets to compare against previous work [31, 37, 45]. The combination of newstest2012 and newstest2013 is used as the development set.
1609.08144 | 52 | In addition to WMT, we also evaluate our model on some Google-internal datasets representing a wider spectrum of languages with distinct linguistic properties: English → French, English → Spanish and English → Chinese.
# 8.2 Evaluation Metrics
We evaluate our models using the standard BLEU score metric. To be comparable to previous work [41, 31, 45], we report tokenized BLEU score as computed by the multi-bleu.pl script, downloaded from the public implementation of Moses (on Github), which is also used in [31].
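As a usage sketch, the tokenized BLEU computation can be driven from Python by piping tokenized hypotheses into the Moses multi-bleu script with the tokenized reference file as an argument; the file names are hypothetical, and the script is distributed by Moses as multi-bleu.perl:

```python
import subprocess

# Hypothetical file names; the Moses script reads the hypothesis from stdin
# and takes the tokenized reference file as an argument.
with open("newstest2014.hyp.tok") as hyp:
    result = subprocess.run(
        ["perl", "multi-bleu.perl", "newstest2014.ref.tok"],
        stdin=hyp, capture_output=True, text=True)
print(result.stdout)  # prints a line beginning with "BLEU = ..."
```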
1609.08144 | 53 | As is well-known, BLEU score does not fully capture the quality of a translation. For that reason we also carry out side-by-side (SxS) evaluations where we have human raters evaluate and compare the quality of two translations presented side by side for a given source sentence. Side-by-side scores range from 0 to 6, with a score of 0 meaning "completely nonsense translation", and a score of 6 meaning "perfect translation: the meaning of the translation is completely consistent with the source, and the grammar is correct". A translation is given a score of 4 if "the sentence retains most of the meaning of the source sentence, but may have some grammar mistakes", and a translation is given a score of 2 if "the sentence preserves some of the meaning of the source sentence but misses significant parts". These scores are generated by human raters who are fluent in both languages and hence often capture translation quality better than BLEU scores.
# 8.3 Training Procedure
The models are trained by a system we implemented using TensorFlow[1]. The training setup follows the classic data parallelism paradigm. There are 12 replicas running concurrently on separate machines. Every replica updates the shared parameters asynchronously.
1609.08144 | 54 | We initialize all trainable parameters uniformly between [-0.04, 0.04]. As is common wisdom in training RNN models, we apply gradient clipping (similar to [41]): all gradients are uniformly scaled down such that the norm of the modified gradients is no larger than a fixed constant, which is 5.0 in our case. If the norm of the original gradients is already smaller than or equal to the given threshold, then gradients are not changed. For the first stage of maximum likelihood training (that is, to optimize for objective function 7), we use a combination of Adam [25] and simple SGD learning algorithms provided by the TensorFlow runtime system. We run Adam for the first 60k steps, after which we switch to simple SGD. Each step in training is a mini-batch of 128 examples.
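A minimal sketch of two of the training details described here, global-norm gradient clipping at 5.0 and the Adam-to-SGD switch after 60k steps; this is illustrative code, not the TensorFlow implementation used for GNMT:

```python
import math

def clip_by_global_norm(grads, max_norm=5.0):
    """Uniformly scale all gradients down so that their global norm is <= max_norm;
    gradients whose norm is already within the threshold are left unchanged."""
    global_norm = math.sqrt(sum(g * g for grad in grads for g in grad))
    if global_norm <= max_norm:
        return grads
    scale = max_norm / global_norm
    return [[g * scale for g in grad] for grad in grads]

def choose_optimizer(step, adam_steps=60_000):
    """Adam for the first 60k steps, plain SGD afterwards (as described above)."""
    return "adam" if step < adam_steps else "sgd"

grads = [[3.0, 4.0], [12.0]]            # global norm = 13.0
clipped = clip_by_global_norm(grads)    # rescaled so the global norm is 5.0
```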
We find that Adam accelerates training at the beginning, but Adam alone converges to a worse point than a combination of Adam first, followed by SGD (Figure 5). For the Adam part, we use a learning rate of
1609.08144 | 55 |
Figure 5: Log perplexity vs. steps for Adam, SGD and Adam-then-SGD on WMT En→Fr during maximum likelihood training. Adam converges much faster than SGD at the beginning. Towards the end, however, Adam-then-SGD is gradually better. Notice the bump in the red curve (Adam-then-SGD) at around 60k steps where we switch from Adam to SGD. We suspect that this bump occurs due to different optimization trajectories of Adam vs. SGD. When we switch from Adam to SGD, the model first suffers a little, but is able to quickly recover afterwards.
0.0002, and for the SGD part, we use a learning rate of 0.5. We find that it is important to also anneal the learning rate after a certain number of total steps. For the WMT En→Fr dataset, we begin to anneal the learning rate after 1.2M steps, after which we halve the learning rate every 200k steps for an additional 800k steps. On WMT En→Fr, it takes around 6 days to train a basic model using 96 NVIDIA K80 GPUs.
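The annealing schedule can be written out explicitly; the sketch below encodes one reasonable reading of it (the first halving at 1.2M steps, at most four halvings over the following 800k steps), with the helper function itself being ours:

```python
def sgd_learning_rate(step, base_lr=0.5, anneal_start=1_200_000,
                      halve_every=200_000, anneal_steps=800_000):
    """SGD learning rate for WMT En->Fr: constant at 0.5 until 1.2M steps,
    then halved every 200k steps for an additional 800k steps."""
    if step < anneal_start:
        return base_lr
    n_halvings = min((step - anneal_start) // halve_every + 1,
                     anneal_steps // halve_every)
    return base_lr / (2 ** n_halvings)

assert sgd_learning_rate(1_000_000) == 0.5
assert sgd_learning_rate(1_300_000) == 0.25
assert sgd_learning_rate(3_000_000) == 0.5 / 16
```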
1609.08144 | 56 | Once a model is fully converged using the ML objective, we switch to RL-based model refinement, i.e., we further optimize the objective function as in equation 9. We refine a model until the BLEU score does not change much on the development set. For this model refinement phase, we simply run the SGD optimization algorithm. The number of steps needed to refine a model varies from dataset to dataset. For WMT En→Fr, it takes around 3 days to complete 400k steps.
To prevent overfitting, we apply dropout during training with a scheme similar to [44]. For the WMT En→Fr and En→De datasets, we set the dropout probability to be 0.2 and 0.3 respectively. Due to various technical reasons, dropout is only applied during the ML training phase, not during the RL refinement phase. The exact hyper-parameters vary from dataset to dataset and from model to model. For the WMT En→De dataset, since it is significantly smaller than the WMT En→Fr dataset, we use a higher dropout
probability, and also train smaller models for fewer steps overall. On the production data sets, we typically do not use dropout, and we train the models for more steps.
1609.08144 | 57 |
# 8.4 Evaluation after Maximum Likelihood Training
The models in our experiments are word-based, character-based, mixed word-character-based or several wordpiece models with varying vocabulary sizes.
For the word model, we selected the most frequent 212K source words as the source vocabulary and the most popular 80k target words as the target vocabulary. Words not in the source vocabulary or the target vocabulary (unknown words) are converted into special <first_char>_UNK_<last_char> symbols. Note that in this case there is more than one UNK (e.g., our production word models have roughly 5000 different UNKs). We then use the attention mechanism to copy a corresponding word from the source to replace these unknown words during decoding [37].
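A small sketch of the special-UNK conversion described above; the concrete surface form chosen for the <first_char>_UNK_<last_char> symbol and the toy vocabulary are our assumptions:

```python
def to_unk_symbol(word):
    """One plausible rendering of the <first_char>_UNK_<last_char> symbol."""
    return f"{word[0]}_UNK_{word[-1]}"

def encode(words, vocab):
    """Replace out-of-vocabulary words with their per-word UNK symbol."""
    return [w if w in vocab else to_unk_symbol(w) for w in words]

vocab = {"the", "cat", "sat"}
print(encode(["the", "xenon", "cat"], vocab))  # ['the', 'x_UNK_n', 'cat']
```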
1609.08144 | 58 | The mixed word-character model is similar to the word model, except the out-of-vocabulary (OOV) words are converted into sequences of characters with special delimiters around them as described in section 4.2 in more detail. In our experiments, the vocabulary size for the mixed word-character model is 32K. For the pure character model, we simply split all words into constituent characters, resulting typically in a few hundred basic characters (including special symbols appearing in the data). For the wordpiece models, we train 3 different models with vocabulary sizes of 8K, 16K, and 32K.
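For illustration, a sketch of how an OOV word could be split into delimited characters for the mixed word-character model, assuming a <B>/<M>/<E> begin/middle/end prefix scheme of the kind described in section 4.2; the exact delimiter symbols should be treated as an assumption:

```python
def split_oov_to_chars(word):
    """Split an out-of-vocabulary word into prefixed characters (assumed scheme)."""
    if len(word) == 1:
        return ["<B>" + word]  # single-character treatment is our assumption
    return (["<B>" + word[0]]
            + ["<M>" + c for c in word[1:-1]]
            + ["<E>" + word[-1]])

print(split_oov_to_chars("Miki"))  # ['<B>M', '<M>i', '<M>k', '<E>i']
```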
Table 4 summarizes our results on the WMT En→Fr dataset. In this table, we also compare against other strong baselines without model ensembling. As can be seen from the table, "WPM-32K", a wordpiece model with a shared source and target vocabulary of 32K wordpieces, performs well on this dataset and achieves the best quality as well as the fastest inference speed.
The pure character model (char input, char output) works surprisingly well on this task, not much worse than the best wordpiece models in BLEU score. However, these models are rather slow to train and slow to use as the sequences are much longer.
1609.08144 | 59 | Our best model, WPM-32K, achieves a BLEU score of 38.95. Note that this BLEU score represents the averaged score of 8 models we trained. The maximum BLEU score of the 8 models is higher at 39.37. We point out that our models are completely self-contained, as opposed to previous models reported in [45], which depend on some external alignment models to achieve their best results. Also note that all our test set numbers were achieved by picking an optimal model on the development set which was then used to decode the test set.
Note that the timing numbers for this section are obtained on CPUs, not TPUs. We use here the same CPU machine as described above, and run the decoder with a batchsize of 16 sentences in parallel and a maximum of 4 concurrent hypotheses at any time per sentence. The time per sentence is the total decoding time divided by the number of respective sentences in the test set.
1609.08144 | 60 | Table 4: Single model results on WMT En→Fr (newstest2014)

Model                              BLEU    CPU decoding time per sentence (s)
Word                               37.90   0.2226
Character                          38.01   1.0530
WPM-8K                             38.27   0.1919
WPM-16K                            37.60   0.1874
WPM-32K                            38.95   0.2118
Mixed Word/Character               38.39   0.2774
PBMT [15]                          37.0
LSTM (6 layers) [31]               31.5
LSTM (6 layers + PosUnk) [31]      33.1
Deep-Att [45]                      37.7
Deep-Att + PosUnk [45]             39.2
Similarly, the results of WMT En→De are presented in Table 5. Again, we find that wordpiece models achieve the best BLEU scores.
Table 5: Single model results on WMT En→De (newstest2014)
1609.08144 | 62 | WMT En→De is considered a more difficult task than WMT En→Fr as it has much less training data, and German, as a more morphologically rich language, needs a huge vocabulary for word models. Thus it is more advantageous to use wordpiece or mixed word/character models, which provide a gain of more than 2 BLEU points on top of the word model and about 4 BLEU points on top of previously reported results in [6, 45]. Our best model, WPM-32K, achieves a BLEU score of 24.61, which is averaged over 8 runs. Consistently, on the production corpora, wordpiece models tend to be better than other models both in terms of speed and accuracy.
# 8.5 Evaluation of RL-refined Models
The models trained in the previous section are optimized for log-likelihood of the next-step prediction, which may not correlate well with translation quality, as discussed in section 5. We use RL training to fine-tune sentence BLEU scores after normal maximum-likelihood training.
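As a reference point for the reward being optimized, here is a sketch of a sentence-level GLEU score, assuming the definition given in section 5 of the paper (the minimum of n-gram precision and recall for n = 1..4); the implementation details are ours:

```python
from collections import Counter

def ngrams(tokens, max_n=4):
    counts = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            counts[tuple(tokens[i:i + n])] += 1
    return counts

def gleu(output, target, max_n=4):
    """Sentence-level GLEU: min of n-gram precision and recall (n = 1..4)."""
    out_counts, tgt_counts = ngrams(output, max_n), ngrams(target, max_n)
    overlap = sum((out_counts & tgt_counts).values())  # clipped matching n-grams
    precision = overlap / max(sum(out_counts.values()), 1)
    recall = overlap / max(sum(tgt_counts.values()), 1)
    return min(precision, recall)

print(gleu("the cat sat on the mat".split(), "the cat sat on a mat".split()))
```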
1609.08144 | 63 | The results of RL fine-tuning on the best En→Fr and En→De models are presented in Table 6, which show that fine-tuning the models with RL can improve BLEU scores. On WMT En→Fr, model refinement improves BLEU score by close to 1 point. On En→De, RL-refinement slightly hurts the test performance even though we observe about 0.4 BLEU points improvement on the development set. The results presented in Table 6 are the average of 8 independent models. We also note that there is an overlap between the wins from the RL refinement and the decoder fine-tuning (i.e., the introduction of length normalization and coverage penalty). On a less fine-tuned decoder (e.g., if the decoder does beam search by log-probability only), the win from RL would have been bigger (as is evident from comparing results in Table 2 and Table 3).
Table 6: Single model test BLEU scores, averaged over 8 runs, on WMT En→Fr and En→De

Dataset   Trained with log-likelihood   Refined with RL
En→Fr     38.95                         39.92
En→De     24.67                         24.60
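The decoder fine-tuning mentioned above refers to the length normalization and coverage penalty of Section 7. A minimal sketch of that rescoring is shown below; the alpha and beta values and the toy attention weights are placeholders rather than tuned settings.

```python
# Sketch of beam-candidate rescoring with length normalization and a
# coverage penalty (Section 7): s(Y, X) = log P(Y|X) / lp(Y) + cp(X; Y).
# The alpha/beta values and the toy attention matrix are illustrative only.

import math

def length_penalty(output_len, alpha=0.2):
    # lp(Y) = (5 + |Y|)^alpha / (5 + 1)^alpha
    return ((5.0 + output_len) ** alpha) / ((5.0 + 1.0) ** alpha)

def coverage_penalty(attention, beta=0.2):
    # attention[i][j]: attention paid to source word i when producing target word j
    total = 0.0
    for src_row in attention:
        total += math.log(min(sum(src_row), 1.0))
    return beta * total

def rescore(log_prob, output_len, attention, alpha=0.2, beta=0.2):
    return log_prob / length_penalty(output_len, alpha) + coverage_penalty(attention, beta)

if __name__ == "__main__":
    attn = [[0.7, 0.2, 0.1],   # source word 0 across 3 target steps
            [0.2, 0.6, 0.1],
            [0.1, 0.2, 0.8]]
    print(round(rescore(log_prob=-4.2, output_len=3, attention=attn), 3))
```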
# 8.6 Model Ensemble and Human Evaluation
We ensemble 8 RL-refined models to obtain a state-of-the-art result of 41.16 BLEU points on the WMT En→Fr dataset. Our results are reported in Table 7.
We ensemble 8 RL-refined models to obtain a state-of-the-art result of 26.30 BLEU points on the WMT En→De dataset. Our results are reported in Table 8.
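Ensembling here means combining the per-step predictions of the member models inside beam search. The sketch below shows one common way to do this, averaging the next-word distributions in probability space; the averaging choice, toy vocabulary, and scores are assumptions for illustration, not a description of the production decoder.

```python
# Illustrative sketch of one decoding step for a model ensemble: the
# next-word distributions of the member models are averaged (here in
# probability space) before beam search keeps its top candidates.

import math

def ensemble_step(per_model_logprobs):
    """per_model_logprobs: list (one entry per model) of {word: log-prob} dicts."""
    vocab = per_model_logprobs[0].keys()
    averaged = {}
    for w in vocab:
        p = sum(math.exp(m[w]) for m in per_model_logprobs) / len(per_model_logprobs)
        averaged[w] = math.log(p)
    return averaged

if __name__ == "__main__":
    model_a = {"chat": math.log(0.6), "chien": math.log(0.3), "</s>": math.log(0.1)}
    model_b = {"chat": math.log(0.5), "chien": math.log(0.4), "</s>": math.log(0.1)}
    avg = ensemble_step([model_a, model_b])
    best = max(avg, key=avg.get)
    print(best, round(avg[best], 3))
```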
Finally, to better understand the quality of our models and the effect of RL refinement, we carried out a four-way side-by-side human evaluation to compare our NMT translations against the reference translations and the best phrase-based statistical machine translations.
Table 7: Model ensemble results on WMT En→Fr (newstest2014)

Model                               BLEU
WPM-32K (8 models)                  40.35
RL-refined WPM-32K (8 models)       41.16
LSTM (6 layers) [31]                35.6
LSTM (6 layers + PosUnk) [31]       37.5
Deep-Att + PosUnk (8 models) [45]   40.4
Table 8: Model ensemble results on WMT En→De (newstest2014). See Table 5 for a comparison against non-ensemble models.
Model                           BLEU
WPM-32K (8 models)              26.20
RL-refined WPM-32K (8 models)   26.30
During the side-by-side comparison, humans are asked to rate four translations given a source sentence. The four translations are: 1) the best phrase-based translations as downloaded from http://matrix.statmt.org/systems/show/2065, 2) an ensemble of 8 ML-trained models, 3) an ensemble of 8 ML-trained and then RL-refined models, and 4) reference human translations taken directly from newstest2014. Our results are presented in Table 9.
Table 9: Human side-by-side evaluation scores of WMT En→Fr models.

Model           BLEU    Side-by-side averaged score
PBMT [15]       37.0    3.87
NMT before RL   40.35   4.46
NMT after RL    41.16   4.44
Human           -       4.82
The results show that even though RL refinement can achieve better BLEU scores, it barely improves the human impression of the translation quality. This could be due to a combination of factors, including: 1) the relatively small sample size for the experiment (only 500 examples for side-by-side), 2) the improvement in BLEU score by RL being relatively small after model ensembling (0.81), which may be at a scale that human side-by-side evaluations are insensitive to, and 3) the possible mismatch between BLEU as a metric and real translation quality as perceived by human raters. Table 11 contains some example translations from PBMT, "NMT before RL" and "Human", along with the side-by-side scores that human raters assigned to each translation (some of which we disagree with; see the table caption).
# 8.7 Results on Production Data
We have carried out extensive experiments on many Google-internal production data sets. As the experiments above cast doubt on whether RL improves the real translation quality or simply the BLEU metric, RL-based model refinement is not used during these experiments. Given the larger volume of training data available in the Google corpora, dropout is also not needed in these experiments.
In this section we describe our experiments with human perception of the translation quality. We asked human raters to rate translations in a three-way side-by-side comparison. The three sides are from: 1) translations from the production phrase-based statistical translation system used by Google, 2) translations from our GNMT system, and 3) translations by humans fluent in both languages. Reported here in Table 10 are averaged rated scores for English ↔ French, English ↔ Spanish and English ↔ Chinese. All the GNMT models are wordpiece models, without model ensembling, and use a shared source and target vocabulary with 32K wordpieces. On each pair of languages, the evaluation data consist of 500 randomly sampled sentences from Wikipedia and news websites, and the corresponding human translations to the target language.
Table 10: Mean of side-by-side scores on production data, with relative improvements of 87%, 64%, 58%, 63%, 83%, and 60% across the evaluated language pairs.
The results show that our model reduces translation errors by more than 60% compared to the PBMT model on these major pairs of languages. A typical distribution of side-by-side scores is shown in Figure 6.
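One way to read the relative-improvement numbers quoted above is as the fraction of the PBMT-to-human quality gap that GNMT closes, computed from the mean side-by-side scores. This reading, and the scores in the example below, are assumptions for illustration rather than values taken from Table 10.

```python
# Sketch of the relative-improvement computation implied by the text: treat
# the gap between the PBMT score and the human score as the remaining
# "translation errors", and report how much of that gap GNMT closes.

def relative_improvement(pbmt_score, gnmt_score, human_score):
    gap_before = human_score - pbmt_score      # errors of the old system
    gap_closed = gnmt_score - pbmt_score       # portion of those errors removed
    return gap_closed / gap_before

if __name__ == "__main__":
    # Placeholder scores on the 0-6 rating scale, not values from Table 10.
    print(f"{relative_improvement(3.9, 4.6, 5.0):.0%}")   # -> 64% with these toy scores
```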
Figure 6: Histogram of side-by-side scores on 500 sampled sentences from Wikipedia and news websites for a typical language pair, here English → Spanish (PBMT blue, GNMT red, Human orange). It can be seen that there is a wide distribution in scores, even for the human translation when rated by other humans, which shows how ambiguous the task is. It is clear that GNMT is much more accurate than PBMT.
As expected, on this metric the GNMT system improves also compared to the PBMT system. In some cases human and GNMT translations are nearly indistinguishable on the relatively simplistic and isolated sentences sampled from Wikipedia and news articles for this experiment. Note that we have observed that human raters, even though fluent in both languages, do not necessarily fully understand each randomly sampled sentence sufficiently and hence cannot necessarily generate the best possible translation or rate a given translation accurately. Also note that, although the scale for the scores goes from 0 (complete nonsense) to 6 (perfect translation), the human translations get an imperfect score of only around 5 in Table 10, which shows possible ambiguities in the translations and also possibly non-calibrated raters and translators with a varying level of proficiency.
Testing our GNMT system on particularly difficult translation cases and longer inputs than just single sentences is the subject of future work.
# 9 Conclusion
In this paper, we describe in detail the implementation of Google's Neural Machine Translation (GNMT) system, including all the techniques that are critical to its accuracy, speed, and robustness. On the public WMT'14 translation benchmark, our system's translation quality approaches or surpasses all currently published results. More importantly, we also show that our approach carries over to much larger production data sets, which have several orders of magnitude more data, to deliver high quality translations.
Our key findings are: 1) that wordpiece modeling effectively handles open vocabularies and the challenge of morphologically rich languages for translation quality and inference speed, 2) that a combination of model and data parallelism can be used to efficiently train state-of-the-art sequence-to-sequence NMT models in roughly a week, 3) that model quantization drastically accelerates translation inference, allowing the use of these large models in a deployed production environment, and 4) that many additional details like length-normalization, coverage penalties, and similar are essential to making NMT systems work well on real data.
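Finding 1) refers to the wordpiece model; the snippet below only illustrates how a word can be segmented by greedy longest-match lookup against an already-learned wordpiece vocabulary. The tiny vocabulary, the "_" boundary marker, and the greedy strategy are assumptions for illustration; the vocabulary itself is built as in [35].

```python
# Illustrative greedy longest-match segmentation against a fixed wordpiece
# vocabulary. The vocabulary below is a toy assumption, not a learned one.

def wordpiece_segment(word, vocab, marker="_"):
    pieces, token = [], marker + word       # mark the word boundary
    while token:
        for end in range(len(token), 0, -1):
            piece = token[:end]
            if piece in vocab:
                pieces.append(piece)
                token = token[end:]
                break
        else:
            return [marker + "<unk>"]       # no piece matched the remainder
    return pieces

if __name__ == "__main__":
    vocab = {"_J", "et", "_feud", "_Sch", "aff", "e"}
    for w in ["Jet", "feud", "Schaffe"]:
        print(w, "->", wordpiece_segment(w, vocab))
```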
Using human-rated side-by-side comparison as a metric, we show that our GNMT system approaches the accuracy achieved by average bilingual human translators on some of our test sets. In particular, compared to the previous phrase-based production system, this GNMT system delivers roughly a 60% reduction in translation errors on several popular language pairs.
# Acknowledgements
We would like to thank the entire Google Brain Team and Google Translate Team for their foundational contributions to this project.
# References
[1] Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M., Kudlur, M., Levenberg, J., Monga, R., Moore, S., Murray, D. G., Steiner, B., Tucker, P., Vasudevan, V., Warden, P., Wicke, M., Yu, Y., and Zheng, X. TensorFlow: A system for large-scale machine learning. Tech. rep., Google Brain, 2016. arXiv preprint.
[2] Bahdanau, D., Cho, K., and Bengio, Y. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (2015).
[3] Brown, P., Cocke, J., Pietra, S. D., Pietra, V. D., Jelinek, F., Mercer, R., and Roossin, P. A statistical approach to language translation. In Proceedings of the 12th Conference on Computational Linguistics - Volume 1 (Stroudsburg, PA, USA, 1988), COLING '88, Association for Computational Linguistics, pp. 71–76.
[4] Brown, P. F., Cocke, J., Pietra, S. A. D., Pietra, V. J. D., Jelinek, F., Lafferty, J. D., Mercer, R. L., and Roossin, P. S. A statistical approach to machine translation. Computational Linguistics 16, 2 (1990), 79–85.
[5] Brown, P. F., Pietra, V. J. D., Pietra, S. A. D., and Mercer, R. L. The mathematics of statistical machine translation: Parameter estimation. Comput. Linguist. 19, 2 (June 1993), 263–311.
[6] Buck, C., Heafield, K., and Van Ooyen, B. N-gram counts and language models from the common crawl. In LREC (2014), vol. 2, Citeseer, p. 4.
[7] Cho, K., van Merrienboer, B., Gülçehre, Ç., Bougares, F., Schwenk, H., and Bengio, Y. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Conference on Empirical Methods in Natural Language Processing (2014).
[8] Chrisman, L. Learning recursive distributed representations for holistic computation. Connection Science 3, 4 (1991), 345–366.
[9] Chung, J., Cho, K., and Bengio, Y. A character-level decoder without explicit segmentation for neural machine translation. arXiv preprint arXiv:1603.06147 (2016).
[10] Chung, J., Cho, K., and Bengio, Y. A character-level decoder without explicit segmentation for neural machine translation. CoRR abs/1603.06147 (2016).
[11] Costa-Jussà, M. R., and Fonollosa, J. A. R. Character-based neural machine translation. CoRR abs/1603.00810 (2016).
[12] Dean, J., Corrado, G. S., Monga, R., Chen, K., Devin, M., Le, Q. V., Mao, M. Z., Ranzato, M., Senior, A., Tucker, P., Yang, K., and Ng, A. Y. Large scale distributed deep networks. In NIPS (2012).
[13] Devlin, J., Zbib, R., Huang, Z., Lamar, T., Schwartz, R. M., and Makhoul, J. Fast and robust neural network joint models for statistical machine translation. In ACL (1) (2014), Citeseer, pp. 1370–1380.
[14] Dong, D., Wu, H., He, W., Yu, D., and Wang, H. Multi-task learning for multiple language translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (2015), pp. 1723–1732.
[15] Durrani, N., Haddow, B., Koehn, P., and Heafield, K. Edinburgh's phrase-based machine translation systems for WMT-14. In Proceedings of the Ninth Workshop on Statistical Machine Translation (2014), Association for Computational Linguistics, Baltimore, MD, USA, pp. 97–104.
[16] Fahlman, S. E., and Lebiere, C. The cascade-correlation learning architecture. In Advances in Neural Information Processing Systems 2 (1990), Morgan Kaufmann, pp. 524–532.
[17] Gers, F. A., Schmidhuber, J., and Cummins, F. Learning to forget: Continual prediction with LSTM. Neural computation 12, 10 (2000), 2451–2471.
[18] Gülçehre, Ç., Ahn, S., Nallapati, R., Zhou, B., and Bengio, Y. Pointing the unknown words. CoRR abs/1603.08148 (2016).
[19] Gupta, S., Agrawal, A., Gopalakrishnan, K., and Narayanan, P. Deep learning with limited numerical precision. CoRR abs/1502.02551 (2015).
[20] Han, S., Mao, H., and Dally, W. J. Deep compression: Compressing deep neural network with pruning, trained quantization and Huffman coding. CoRR abs/1510.00149 (2015).
[21] He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (2015).
[22] Hochreiter, S., Bengio, Y., Frasconi, P., and Schmidhuber, J. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies, 2001.
[23] Hochreiter, S., and Schmidhuber, J. Long short-term memory. Neural computation 9, 8 (1997), 1735–1780.
[24] Kalchbrenner, N., and Blunsom, P. Recurrent continuous translation models. In Conference on Empirical Methods in Natural Language Processing (2013).
[25] Kingma, D. P., and Ba, J. Adam: A method for stochastic optimization. CoRR abs/1412.6980 (2014).
[26] Koehn, P., Och, F. J., and Marcu, D. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics (2003).
[27] Li, F., and Liu, B. Ternary weight networks. CoRR abs/1605.04711 (2016).
[28] Luong, M., and Manning, C. D. Achieving open vocabulary neural machine translation with hybrid word-character models. CoRR abs/1604.00788 (2016).
[29] Luong, M.-T., Le, Q. V., Sutskever, I., Vinyals, O., and Kaiser, L. Multi-task sequence to sequence learning. In International Conference on Learning Representations (2015).
[30] Luong, M.-T., Pham, H., and Manning, C. D. Effective approaches to attention-based neural machine translation. In Conference on Empirical Methods in Natural Language Processing (2015).
[31] Luong, M.-T., Sutskever, I., Le, Q. V., Vinyals, O., and Zaremba, W. Addressing the rare word problem in neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (2015).
[32] Norouzi, M., Bengio, S., Chen, Z., Jaitly, N., Schuster, M., Wu, Y., and Schuurmans, D. Reward augmented maximum likelihood for neural structured prediction. In Neural Information Processing Systems (2016).
[33] Pascanu, R., Mikolov, T., and Bengio, Y. Understanding the exploding gradient problem. CoRR abs/1211.5063 (2012).
[34] Ranzato, M., Chopra, S., Auli, M., and Zaremba, W. Sequence level training with recurrent neural networks. In International Conference on Learning Representations (2015).
[35] Schuster, M., and Nakajima, K. Japanese and Korean voice search. 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (2012).
[36] Schuster, M., and Paliwal, K. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing 45, 11 (Nov. 1997), 2673–2681.
[37] Sébastien, J., Kyunghyun, C., Memisevic, R., and Bengio, Y. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (2015).
[38] Sennrich, R., Haddow, B., and Birch, A. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (2016).
[39] Shen, S., Cheng, Y., He, Z., He, W., Wu, H., Sun, M., and Liu, Y. Minimum risk training for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (2016).
[40] Srivastava, R. K., Greff, K., and Schmidhuber, J. Highway networks. CoRR abs/1505.00387 (2015).
[41] Sutskever, I., Vinyals, O., and Le, Q. V. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems (2014), pp. 3104–3112.
[42] Tu, Z., Lu, Z., Liu, Y., Liu, X., and Li, H. Coverage-based neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (2016).
[43] Wu, J., Leng, C., Wang, Y., Hu, Q., and Cheng, J. Quantized convolutional neural networks for mobile devices. CoRR abs/1512.06473 (2015).
[44] Zaremba, W., Sutskever, I., and Vinyals, O. Recurrent neural network regularization, 2014.
[45] Zhou, J., Cao, Y., Wang, X., Li, P., and Xu, W. Deep recurrent models with fast-forward connections for neural machine translation. CoRR abs/1606.04199 (2016).
Table 11: Some example translations from PBMT [15], our GNMT system (the "NMT before RL", Table 9), and Human. Source and target sentences (human translations) are from the public benchmark WMT En→Fr (newstest2014) data set. The right-hand column shows the human ratings on a scale of 0 (complete nonsense) to 6 (perfect translation). We disagree with some of the human ratings, e.g., the translation "Elle a été repéré trois jours plus tard par un promeneur de chien piégé dans la carrière" contains grammatical mistakes and changes semantics, and is still scored 6. We present it to illustrate the potential problems of the scoring process.

Source: "The reason Boeing are doing this is to cram more seats in to make their plane more competitive with our products," said Kevin Keniston, head of passenger comfort at Europe's Airbus.
PBMT: "La raison pour laquelle Boeing sont en train de faire, c'est de concentrer davantage de sièges pour prendre leur avion plus compétitive avec nos produits", a déclaré Kevin M. Keniston, chef du confort des passagers de l'Airbus de l'Europe.
GNMT: "La raison pour laquelle Boeing fait cela est de créer plus de sièges pour rendre son avion plus compétitif avec nos produits", a déclaré Kevin Keniston, chef du confort des passagers chez Airbus.
Human: "Boeing fait ça pour pouvoir caser plus de sièges et rendre ses avions plus compétitifs par rapports à nos produits", a déclaré Kevin Keniston, directeur de Confort Passager chez l'avionneur européen Airbus.

Source: When asked about this, an official of the American administration replied: "The United States is not conducting electronic surveillance aimed at offices of the World Bank and IMF in Washington."
PBMT: Interrogé à ce sujet, un responsable de l'administration américaine a répondu : "Les Etats-Unis n'est pas effectuer une surveillance électronique destiné aux bureaux de la Banque mondiale et du FMI à Washington".
GNMT: Interrogé à ce sujet, un fonctionnaire de l'administration américaine a répondu: "Les États-Unis n'effectuent pas de surveillance électronique à l'intention des bureaux de la Banque mondiale et du FMI à Washington".
Human: Interrogé sur le sujet, un responsable de l'administration américaine a répondu: "les Etats-Unis ne mènent pas de surveillance électronique visant les sièges de la Banque mondiale et du FMI à Washington".

Source: Martin told CNN that he asked Daley whether his then-boss knew about the potential shuffle.
PBMT: Martin a déclaré à CNN qu'il a demandé Daley si son patron de l'époque connaissaient le potentiel remaniement ministériel.
GNMT: Martin a dit à CNN qu'il avait demandé à Daley si son patron d'alors était au courant du remaniement potentiel.
Human: Martin a dit sur CNN qu'il avait demandé à Daley si son patron d'alors était au courant du remaniement éventuel.

Source: She was spotted three days later by a dog walker trapped in the quarry
PBMT: Elle a été repéré trois jours plus tard par un promeneur de chien piégé dans la carrière
GNMT: Elle a été repérée trois jours plus tard par un traîneau à chiens piégé dans la carrière.
Human: Elle a été repérée trois jours plus tard par une personne qui promenait son chien coincée dans la carrière

Source: Analysts believe the country is unlikely to slide back into full-blown conflict, but recent events have unnerved foreign investors and locals.
PBMT: Les analystes estiment que le pays a peu de chances de retomber dans un conflit total, mais les événements récents ont inquiété les investisseurs étrangers et locaux.
GNMT: Selon les analystes, il est peu probable que le pays retombe dans un conflit généralisé, mais les événements récents ont attiré des investisseurs étrangers et des habitants locaux.
Human: Les analystes pensent que le pays ne devrait pas retomber dans un conflit ouvert, mais les récents évènements ont ébranlé les investisseurs étrangers et la population locale.
# One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities

Michalis K. Titsias, Department of Informatics, Athens University of Economics and Business ([email protected])

# Abstract

The softmax representation of probabilities for categorical variables plays a prominent role in modern machine learning with numerous applications in areas such as large scale classification, neural language modeling and recommendation systems. However, softmax estimation is very expensive for large scale inference because of the high cost associated with computing the normalizing constant. Here, we introduce an efficient approximation to softmax probabilities which takes the form of a rigorous lower bound on the exact probability. This bound is expressed as a product over pairwise probabilities and it leads to scalable estimation based on stochastic optimization. It allows us to perform doubly stochastic estimation by subsampling both training instances and class labels. We show that the new bound has interesting theoretical properties and we demonstrate its use in classification problems.

# 1 Introduction

Based on the softmax representation, the probability of a variable y to take the value k ∈ {1, . . . , K}, where K is the number of categorical symbols or classes, is modeled by

p(y = k|x) = \frac{e^{f_k(x;w)}}{\sum_{m=1}^{K} e^{f_m(x;w)}},   (1)
where each f_k(x; w) is often referred to as the score function and it is a real-valued function indexed by an input vector x and parameterized by w. The score function measures the compatibility of input x with symbol y = k so that the higher the score is the more compatible x becomes with y = k. The most common application of softmax is multiclass classification where x is an observed input vector and f_k(x; w) is often chosen to be a linear function or more generally a non-linear function such as a neural network (Bishop, 2006; Goodfellow et al., 2016). Several other applications of softmax arise, for instance, in neural language modeling for learning word vector embeddings (Mnih and Teh, 2012; Mikolov et al., 2013; Pennington et al., 2014) and also in collaborative filtering for representing probabilities of (user, item) pairs (Paquet et al., 2012). In such applications the number of symbols K could often be very large, e.g. of the order of tens of thousands or millions, which makes the computation of softmax probabilities very expensive due to the large sum in the normalizing constant of Eq. (1).
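To make that cost concrete, here is a minimal NumPy sketch (illustrative, not from the paper) of Eq. (1) for a single input with linear scores; the sizes and random weights are hypothetical, and the point is that the normalizing sum is the O(K) term that dominates when K reaches the scales mentioned above.

```python
import numpy as np

def softmax_probability(W, x, k):
    """Exact p(y = k | x) from Eq. (1) with linear scores f_k(x; w) = W[k] @ x."""
    scores = W @ x                            # K score evaluations
    scores -= scores.max()                    # shift for numerical stability (cancels in the ratio)
    exp_scores = np.exp(scores)
    return exp_scores[k] / exp_scores.sum()   # the O(K) normalizing constant

K, D = 100_000, 50                            # hypothetical sizes: many classes, few features
W = np.random.default_rng(0).normal(size=(K, D))
x = np.random.default_rng(1).normal(size=D)
print(softmax_probability(W, x, k=123))
```

Every probability evaluation touches all K scores, which is exactly the computation the bound developed below avoids.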
Thus, exact training procedures based on maximum likelihood or Bayesian approaches are computationally prohibitive and approximations are needed. While some rigorous bound-based approximations to the softmax exist (Bouchard, 2007), they are not so accurate or scalable and therefore it would be highly desirable to develop accurate and computationally efficient approximations.
In this paper we introduce a new efficient approximation to softmax probabilities which takes the form of a lower bound on the probability of Eq. (1). This bound draws an interesting connection between the exact softmax probability and all its one-vs-each pairwise probabilities, and it has several desirable properties. Firstly, for the non-parametric estimation case it leads to an approximation of the likelihood that shares the same global optimum with exact maximum likelihood, and thus estimation based on the approximation is a perfect surrogate for the initial estimation problem. Secondly, the bound allows for scalable learning through stochastic optimization where data subsampling can be combined with subsampling categorical symbols. Thirdly, whenever the initial exact softmax cost function is convex the bound also remains convex.
Regarding related work, there exist several other methods that try to deal with the high cost of softmax such as methods that attempt to perform the exact computations (Gopal and Yang, 2013; Vijayanarasimhan et al., 2014), methods that change the model based on hierarchical or stick-breaking constructions (Morin and Bengio, 2005; Khan et al., 2012) and sampling-based methods (Bengio and Sénécal, 2003; Mikolov et al., 2013; Devlin et al., 2014; Ji et al., 2015). Our method is a lower bound based approach that follows the variational inference framework. Other rigorous variational lower bounds on the softmax have been used before (Bohning, 1992; Bouchard, 2007), however they are not easily scalable since they require optimizing data-specific variational parameters. In contrast, the bound we introduce in this paper does not contain any variational parameter, which greatly facilitates stochastic minibatch training. At the same time it can be much tighter than previous bounds (Bouchard, 2007) as we will demonstrate empirically in several classification datasets.

# 2 One-vs-each lower bound on the softmax
Here, we derive the new bound on the softmax (Section 2.1) and we prove its optimality property when performing approximate maximum likelihood estimation (Section 2.2). Such a property holds for the non-parametric case, where we estimate probabilities of the form p(y = k), without conditioning on some x, so that the score functions f_k(x; w) reduce to unrestricted parameters f_k; see Eq. (2) below. Finally, we also analyze the related bound derived by Bouchard (Bouchard, 2007) and we compare it with our approach (Section 2.3).

# 2.1 Derivation of the bound

Consider a discrete random variable y ∈ {1, . . . , K} that takes the value k with probability,

p(y = k) = \mathrm{Softmax}_k(f_1, \ldots, f_K) = \frac{e^{f_k}}{\sum_{m=1}^{K} e^{f_m}},   (2)
where each f_k is a free real-valued scalar parameter. We wish to express a lower bound on p(y = k) and the key step of our derivation is to re-write p(y = k) as

p(y = k) = \frac{1}{1 + \sum_{m \neq k} e^{-(f_k - f_m)}}.   (3)

Then, by exploiting the fact that for any non-negative numbers α_1 and α_2 it holds 1 + α_1 + α_2 ≤ 1 + α_1 + α_2 + α_1 α_2 = (1 + α_1)(1 + α_2), and more generally it holds (1 + \sum_i α_i) ≤ \prod_i (1 + α_i) where each α_i ≥ 0, we obtain the following lower bound on the above probability,

p(y = k) \geq \prod_{m \neq k} \frac{1}{1 + e^{-(f_k - f_m)}} = \prod_{m \neq k} \frac{e^{f_k}}{e^{f_k} + e^{f_m}} = \prod_{m \neq k} \sigma(f_k - f_m),   (4)
where σ(·) denotes the sigmoid function. Clearly, the terms in the product are pairwise probabilities each corresponding to the event y = k conditional on the union of pairs of events, i.e. y ∈ {k, m} where m is one of the remaining values. We will refer to this bound as one-vs-each bound on the softmax probability, since it involves K − 1 comparisons of a specific event y = k versus each of the K − 1 remaining events. Furthermore, the above result can be stated more generally to define bounds on arbitrary probabilities as the following statement shows.

Proposition 1. Assume a probability model with state space Ω and probability measure P(·). For any event A ⊂ Ω and an associated countable set of disjoint events {B_i} such that ∪_i B_i = Ω \ A, it holds

P(A) \geq \prod_i P(A \mid A \cup B_i).   (5)
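As a quick numerical sanity check of Eq. (4), which is the special case of Proposition 1 with singleton events B_m = {y = m}, the sketch below (illustrative, not part of the paper) confirms that the product of pairwise sigmoids never exceeds the exact softmax probability:

```python
import numpy as np

def exact_softmax(f, k):
    e = np.exp(f - f.max())
    return e[k] / e.sum()

def one_vs_each_bound(f, k):
    diffs = f[k] - np.delete(f, k)                  # f_k - f_m for all m != k
    return np.prod(1.0 / (1.0 + np.exp(-diffs)))    # product of sigmoids, Eq. (4)

rng = np.random.default_rng(0)
for _ in range(1000):
    f = rng.normal(scale=3.0, size=10)
    k = int(rng.integers(10))
    assert one_vs_each_bound(f, k) <= exact_softmax(f, k) + 1e-12
```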
Remark. If the set {B_i} consists of a single event B then by definition B = Ω \ A and the bound is exact since in such case P(A | A ∪ B) = P(A).

Furthermore, based on the above construction we can express a full class of hierarchically ordered bounds. For instance, if we merge two events B_i and B_j into a single one, then the term P(A | A ∪ B_i) P(A | A ∪ B_j) in the initial bound is replaced with P(A | A ∪ B_i ∪ B_j) and the associated new bound, obtained after this merge, can only become tighter. To see a more specific example in the softmax probabilistic model, assume a small subset of categorical symbols C_k, that does not include k, and denote the remaining symbols excluding k as \bar{C}_k so that k ∪ C_k ∪ \bar{C}_k = {1, . . . , K}. Then, a tighter bound, that exists higher in the hierarchy than the one-vs-each bound (see Eq. 4), takes the form, p(y = k) ≥ Softmax_k(f_k, f_{C_k}) × Softmax_k(f_k, f_{\bar{C}_k}) ≥ Softmax_k(f_k, f_{C_k}) ×
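To illustrate this hierarchy numerically, the following sketch (an illustration with an arbitrary two-group split of the remaining classes playing the role of C_k and its complement; not from the paper) checks that the grouped bound sits between the one-vs-each bound and the exact probability:

```python
import numpy as np

def partial_softmax(fk, f_group):
    """Softmax probability of class k restricted to {k} union the given group of classes."""
    m = max(fk, f_group.max())
    return np.exp(fk - m) / (np.exp(fk - m) + np.exp(f_group - m).sum())

def product_bound(f, k, groups):
    rest = np.delete(f, k)
    return np.prod([partial_softmax(f[k], rest[g]) for g in groups])

rng = np.random.default_rng(0)
f, k = rng.normal(scale=2.0, size=8), 2
idx = np.arange(7)                                            # indices into the remaining classes
ove = product_bound(f, k, [idx[i:i + 1] for i in range(7)])   # singleton groups = Eq. (4)
grouped = product_bound(f, k, [idx[:3], idx[3:]])             # merge into two coarser groups
exact = np.exp(f[k]) / np.exp(f).sum()
assert ove <= grouped + 1e-12 and grouped <= exact + 1e-12
```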
The computationally useful aspect of the bound in Eq. (4) is that it factorizes into a product, where each factor depends only on a pair of parameters (f_k, f_m). Crucially, this avoids the evaluation of the normalizing constant associated with the global probability in Eq. (2) and, as discussed in Section 3, it leads to scalable training using stochastic optimization that can deal with very large K. Furthermore, approximate maximum likelihood estimation based on the bound can be very accurate and, as shown in the next section, it is exact for the non-parametric estimation case.

The fact that the one-vs-each bound in (4) is a product of pairwise probabilities suggests that there is a connection with Bradley-Terry (BT) models (Bradley and Terry, 1952; Huang et al., 2006) for learning individual skills from paired comparisons and the associated multiclass classification systems obtained by combining binary classifiers, such as one-vs-rest and one-vs-one approaches (Huang et al., 2006). Our method differs from BT models, since we do not combine binary probabilistic models to a posteriori form a multiclass model. Instead, we wish to develop scalable approximate algorithms that can surrogate the training of multiclass softmax-based models by maximizing lower bounds on the exact likelihoods of these models.
# 2.2 Optimality of the bound for maximum likelihood estimation

Assume a set of observations (y_1, . . . , y_N) where each y_i ∈ {1, . . . , K}. The log likelihood of the data takes the form,

L(f) = \log \prod_{i=1}^{N} p(y_i) = \log \prod_{k=1}^{K} p(y = k)^{N_k},   (7)

where f = (f_1, . . . , f_K) and N_k denotes the number of data points with value k. By substituting p(y = k) from Eq. (2) and then taking derivatives with respect to f we arrive at the standard stationary conditions of the maximum likelihood solution,

\frac{e^{f_k}}{\sum_{m=1}^{K} e^{f_m}} = \frac{N_k}{N}, \quad k = 1, \ldots, K.   (8)

These stationary conditions are satisfied for f_k = \log N_k + c where c ∈ R is an arbitrary constant. What is rather surprising is that the same solutions f_k = \log N_k + c also satisfy the stationary conditions when maximizing a lower bound on the exact log likelihood obtained from the product of one-vs-each probabilities.
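A small numerical illustration of these conditions (a sketch with simulated counts, not from the paper): setting f_k = log N_k makes the softmax reproduce the empirical frequencies N_k/N exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 10, 200
y = rng.integers(K, size=N)
counts = np.bincount(y, minlength=K).astype(float)

f = np.log(counts + 1e-300)           # f_k = log N_k (the additive constant c is arbitrary)
p = np.exp(f) / np.exp(f).sum()

assert np.allclose(p, counts / N)     # stationary point of the exact log likelihood, Eq. (8)
```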
More precisely, by replacing p(y = k) with the bound from Eq. (4) we obtain a lower bound on the exact log likelihood,

F(f) = \log \prod_{k=1}^{K} \prod_{m \neq k} \left( \frac{e^{f_k}}{e^{f_k} + e^{f_m}} \right)^{N_k} = \sum_{k > m} \log P(f_k, f_m),   (9)

where P(f_k, f_m) = \left( \frac{e^{f_k}}{e^{f_k} + e^{f_m}} \right)^{N_k} \left( \frac{e^{f_m}}{e^{f_k} + e^{f_m}} \right)^{N_m} is a likelihood involving only the data of the pair of states (k, m), while there exist K(K − 1)/2 possible such pairs. If instead of maximizing the exact log likelihood from Eq. (7) we maximize the lower bound we obtain the same parameter estimates.

Proposition 2. The maximum likelihood parameter estimates f_k = \log N_k + c, k = 1, . . . , K for the exact log likelihood from Eq. (7) also globally maximize the lower bound from Eq. (9).

Proof. By computing the derivatives of F(f) we obtain the following stationary conditions

K - 1 = \sum_{m \neq k} \frac{N_k + N_m}{N_k} \, \frac{e^{f_k}}{e^{f_k} + e^{f_m}}, \quad k = 1, \ldots, K,   (10)
which form a system of K non-linear equations over the unknowns (f_1, . . . , f_K). By substituting the values f_k = \log N_k + c we can observe that all K equations are simultaneously satisfied, which means that these values are solutions. Furthermore, since F(f) is a concave function of f we can conclude that the solutions f_k = \log N_k + c globally maximize F(f).

Remark. Not only is F(f) globally maximized by setting f_k = \log N_k + c, but also each pairwise likelihood P(f_k, f_m) in Eq. (9) is separately maximized by the same setting of parameters.
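The stationary conditions are easy to verify numerically; the sketch below (illustrative, with hypothetical counts) plugs f_k = log N_k into the right-hand side of Eq. (10) and recovers K − 1 for every k.

```python
import numpy as np

counts = np.array([30.0, 5.0, 12.0, 50.0, 3.0])   # hypothetical class counts N_k
K = len(counts)
f = np.log(counts)                                # candidate optimum f_k = log N_k + c, with c = 0

for k in range(K):
    m = np.delete(np.arange(K), k)
    rhs = np.sum((counts[k] + counts[m]) / counts[k]
                 * np.exp(f[k]) / (np.exp(f[k]) + np.exp(f[m])))
    assert np.isclose(rhs, K - 1)                 # stationary conditions of Eq. (10) hold
```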
# 2.3 Comparison with Bouchard's bound

Bouchard (Bouchard, 2007) proposed a related bound that next we analyze in terms of its ability to approximate the exact maximum likelihood training in the non-parametric case, and then we compare it against our method. Bouchard (Bouchard, 2007) was motivated by the problem of applying variational Bayesian inference to multiclass classification and he derived the following upper bound on the log-sum-exp function,

\log \sum_{m=1}^{K} e^{f_m} \leq \alpha + \sum_{m=1}^{K} \log\left(1 + e^{f_m - \alpha}\right),   (11)
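For intuition, here is a minimal sketch (not from the paper; SciPy's scalar minimizer is used only for convenience) that verifies the inequality in Eq. (11) and optimizes the variational parameter α, the extra per-quantity optimization step that the one-vs-each bound does not require:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def log_sum_exp(f):
    return f.max() + np.log(np.exp(f - f.max()).sum())

def bouchard_upper_bound(f, alpha):
    return alpha + np.log1p(np.exp(f - alpha)).sum()   # right-hand side of Eq. (11)

f = np.random.default_rng(0).normal(scale=2.0, size=10)
res = minimize_scalar(lambda a: bouchard_upper_bound(f, a))   # fit alpha by convex 1-D optimization
assert log_sum_exp(f) <= bouchard_upper_bound(f, res.x) + 1e-9
print("gap at optimal alpha:", bouchard_upper_bound(f, res.x) - log_sum_exp(f))
```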
This is not the same as Eq. (4), since there is not a value for α for which the above bound will reduce to our proposed one. For instance, if we set α = f_k, then Bouchard's bound becomes half the one in Eq. (4) due to the extra term 1 + e^{f_k - f_k} = 2 in the product in the denominator [1]. Furthermore, such a value for α may not be the optimal one and in practice α must be chosen by minimizing the upper bound in Eq. (11). While such an optimization is a convex problem, it requires iterative optimization since there is not in general an analytical solution for α. However, for the simple case where K = 2 we can analytically find the optimal α and the optimal f parameters. The following proposition carries out this analysis and provides a clear understanding of how Bouchard's bound behaves when applied for approximate maximum likelihood estimation.

Proposition 3. Assume that K = 2 and we approximate the probabilities p(y = 1) and p(y = 2) from (2) with the corresponding Bouchard's bounds given by e^{f_1 - α} / ((1 + e^{f_1 - α})(1 + e^{f_2 - α})) and

[1] Notice that the product in Eq. (4) excludes the value k, while Bouchard's bound includes it.
\alpha = \frac{f_1 + f_2}{2}, \quad f_k = 2 \log N_k + c, \quad k = 1, 2.   (13)

The proof of the above is given in the Appendix. Notice that the above estimates are biased so that the probability of the most populated class (say the y = 1 for which N_1 > N_2) is overestimated
while the other probability is underestimated. This is due to the factor 2 that multiplies log N1 and log N2 in (13). Also notice that the solution α = f1+f2 is not a general trend, i.e. for K > 2 the optimal α is not the mean of fks. In such cases approximate maximum likelihood estimation based on Bouchardâs bound requires iterative optimization. Figure 1a shows some estimated softmax probabilities, using a dataset of 200 points each taking one out of ten values, where f is found by exact maximum likelihood, the proposed one-vs-each bound and Bouchardâs method. As expected estimation based on the bound in Eq. (4) gives the exact probabilities, while Bouchardâs bound tends to overestimate large probabilities and underestimate small ones. | 1609.07410#19 | One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities | The softmax representation of probabilities for categorical variables plays a
prominent role in modern machine learning with numerous applications in areas
such as large scale classification, neural language modeling and recommendation
systems. However, softmax estimation is very expensive for large scale
inference because of the high cost associated with computing the normalizing
constant. Here, we introduce an efficient approximation to softmax
probabilities which takes the form of a rigorous lower bound on the exact
probability. This bound is expressed as a product over pairwise probabilities
and it leads to scalable estimation based on stochastic optimization. It allows
us to perform doubly stochastic estimation by subsampling both training
instances and class labels. We show that the new bound has interesting
theoretical properties and we demonstrate its use in classification problems. | http://arxiv.org/pdf/1609.07410 | Michalis K. Titsias | stat.ML | To appear in NIPS 2016 | null | stat.ML | 20160923 | 20161029 | [
{
"id": "1609.07410"
}
] |
1609.07410 | 20 | â50
y t i l i 0.25 2 â100 b a b o r P d e t a m i t s E 0.2 0.15 0.1 0.05 1 0 â1 d n u o b r e w o L â150 â200 â250 â2 0 1 2 3 4 5 6 Values (a) 7 8 9 10 â3 â2 â1 (b) 0 1 2 â300 0 2000 4000 6000 Iterations (c) 8000 10000
Figure 1: (a) shows the probabilities estimated by exact softmax (blue bar), one-vs-each approxima- tion (red bar) and Bouchardâs method (green bar). (b) shows the 5-class artiï¬cial data together with the decision boundaries found by exact softmax (blue line), one-vs-each (red line) and Bouchardâs bound (green line). (c) shows the maximized (approximate) log likelihoods for the different ap- proaches when applied to the data of panel (b) (see Section 3). Notice that the blue line in (c) is the exact maximized log likelihood while the remaining lines correspond to lower bounds. | 1609.07410#20 | One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities | The softmax representation of probabilities for categorical variables plays a
# 3 Stochastic optimization for extreme classification

Here, we return to the general form of the softmax probabilities as defined by Eq. (1) where the score functions are indexed by input x and parameterized by w. We consider a classification task where, given a training set {x_n, y_n}_{n=1}^N with y_n ∈ {1, . . . , K}, we wish to fit the parameters w by maximizing the log likelihood,

L = \log \prod_{n=1}^{N} \frac{e^{f_{y_n}(x_n; w)}}{\sum_{m=1}^{K} e^{f_m(x_n; w)}}.   (14)
When the number of training instances is very large, the above maximization can be carried out by applying stochastic gradient descent (by minimizing −L) where we cycle over minibatches. However, this stochastic optimization procedure cannot deal with large values of K because the normalizing constant in the softmax couples all score functions so that the log likelihood cannot be expressed as a sum across class labels. To overcome this, we can use the one-vs-each lower bound on the softmax probability from Eq. (4) and obtain the following lower bound on the previous log likelihood,

F = \log \prod_{n=1}^{N} \prod_{m \neq y_n} \frac{1}{1 + e^{-[f_{y_n}(x_n; w) - f_m(x_n; w)]}} = - \sum_{n=1}^{N} \sum_{m \neq y_n} \log\left(1 + e^{-[f_{y_n}(x_n; w) - f_m(x_n; w)]}\right),   (15)
which now consists of a sum over both data points and labels. Interestingly, the sum over the labels, \sum_{m \neq y_n}, runs over all remaining classes that are different from the label y_n assigned to x_n. Each term in the sum is a logistic regression cost, that depends on the pairwise score difference f_{y_n}(x_n; w) − f_m(x_n; w), and encourages the n-th data point to get separated from the m-th remaining class. The above lower bound can be optimized by stochastic gradient descent by subsampling terms in the double sum in Eq. (15), thus resulting in a doubly stochastic approximation scheme. Next we further discuss the stochasticity associated with subsampling remaining classes. The gradient for the cost associated with a single training instance (x_n, y_n) is

\nabla F_n = \sum_{m \neq y_n} \sigma\left(f_m(x_n; w) - f_{y_n}(x_n; w)\right) \left[\nabla_w f_{y_n}(x_n; w) - \nabla_w f_m(x_n; w)\right].   (16)
1609.07410 | 24 |
This gradient consists of a weighted sum where the sigmoidal weights σ(f_m(x_n; w) - f_{y_n}(x_n; w)) quantify the contribution of the remaining classes to the whole gradient; the more a remaining class overlaps with y_n (given x_n) the higher its contribution is. A simple way to get an unbiased stochastic estimate of (16) is to randomly subsample a small subset of remaining classes from the set {m | m ≠ y_n}. More advanced schemes could be based on importance sampling where we introduce a proposal distribution p_n(m) defined on the set {m | m ≠ y_n} that could favor selecting classes with large sigmoidal weights. While such more advanced schemes could reduce variance, they require prior knowledge (or on-the-fly learning) about how classes overlap with one another. Thus, in Section 4 we shall experiment only with the simple random subsampling approach and leave the above advanced schemes for future work. | 1609.07410#24 | One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities | The softmax representation of probabilities for categorical variables plays a
prominent role in modern machine learning with numerous applications in areas
such as large scale classification, neural language modeling and recommendation
systems. However, softmax estimation is very expensive for large scale
inference because of the high cost associated with computing the normalizing
constant. Here, we introduce an efficient approximation to softmax
probabilities which takes the form of a rigorous lower bound on the exact
probability. This bound is expressed as a product over pairwise probabilities
and it leads to scalable estimation based on stochastic optimization. It allows
us to perform doubly stochastic estimation by subsampling both training
instances and class labels. We show that the new bound has interesting
theoretical properties and we demonstrate its use in classification problems. | http://arxiv.org/pdf/1609.07410 | Michalis K. Titsias | stat.ML | To appear in NIPS 2016 | null | stat.ML | 20160923 | 20161029 | [
{
"id": "1609.07410"
}
] |
1609.07410 | 25 | To illustrate the above stochastic gradient descent algorithm we simulated a two-dimensional data set of 200 instances, shown in Figure 1b, that belong to five classes. We consider a linear classification model where the score functions take the form f_k(x_n; w) = w_k^T x_n and where the full set of parameters is w = (w_1, . . . , w_K). We consider minibatches of size ten to approximate the sum over data points and subsets of remaining classes of size one to approximate the sum over labels. Figure 1c shows the stochastic evolution of the approximate log likelihood (dashed red line), i.e. the unbiased subsampling based approximation of (15), together with the maximized exact softmax log likelihood (blue line), the non-stochastically maximized approximate lower bound from (15) (red solid line) and Bouchard's method (green line). To apply Bouchard's method we construct a lower bound on the log likelihood by replacing each softmax probability with the bound from (12), where we also need to optimize a separate variational parameter α_n for each data point. As shown in Figure 1c our method provides a tighter lower bound than | 1609.07410#25 | One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities | The softmax representation of probabilities for categorical variables plays a
prominent role in modern machine learning with numerous applications in areas
such as large scale classification, neural language modeling and recommendation
systems. However, softmax estimation is very expensive for large scale
inference because of the high cost associated with computing the normalizing
constant. Here, we introduce an efficient approximation to softmax
probabilities which takes the form of a rigorous lower bound on the exact
probability. This bound is expressed as a product over pairwise probabilities
and it leads to scalable estimation based on stochastic optimization. It allows
us to perform doubly stochastic estimation by subsampling both training
instances and class labels. We show that the new bound has interesting
theoretical properties and we demonstrate its use in classification problems. | http://arxiv.org/pdf/1609.07410 | Michalis K. Titsias | stat.ML | To appear in NIPS 2016 | null | stat.ML | 20160923 | 20161029 | [
{
"id": "1609.07410"
}
] |
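For illustration only, the following sketch (sizes, names and the Gumbel-max label sampler are my assumptions, not the paper's exact setup) builds a small synthetic multiclass problem with linear scores and evaluates both the exact softmax log likelihood and the one-vs-each lower bound of Eq. (15), so the gap between the two quantities compared in Figure 1c can be monitored.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, K = 200, 2, 5                                   # toy sizes: instances, dims, classes
W_true = rng.normal(size=(K, D))
X = rng.normal(size=(N, D))
y = np.argmax(X @ W_true.T + rng.gumbel(size=(N, K)), axis=1)  # sample labels (Gumbel-max)

def exact_softmax_loglik(W, X, y):
    S = X @ W.T                                       # (N, K) scores
    S = S - S.max(axis=1, keepdims=True)              # numerical stability
    return np.sum(S[np.arange(len(y)), y] - np.log(np.exp(S).sum(axis=1)))

def ove_lower_bound(W, X, y):
    S = X @ W.T
    diff = S[np.arange(len(y)), y][:, None] - S       # f_y - f_m for every m
    terms = -np.log1p(np.exp(-diff))                  # log sigma(f_y - f_m)
    terms[np.arange(len(y)), y] = 0.0                 # exclude m == y
    return terms.sum()

W = np.zeros((K, D))
print(exact_softmax_loglik(W, X, y), ove_lower_bound(W, X, y))  # bound <= log likelihood
```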
1609.07410 | 26 | (12) where we also need to optimize a separate variational parameter α_n for each data point. As shown in Figure 1c our method provides a tighter lower bound than Bouchard's method despite the fact that it does not contain any variational parameters. Also, Bouchard's method can become very slow when combined with stochastic gradient descent since it requires tuning a separate variational parameter α_n for each training instance. Figure 1b also shows the decision boundaries discovered by the exact softmax, one-vs-each bound and Bouchard's bound. Finally, the actual parameter values found by maximizing the one-vs-each bound were remarkably close (although not identical) to the parameters found by the exact softmax. | 1609.07410#26 | One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities | The softmax representation of probabilities for categorical variables plays a
prominent role in modern machine learning with numerous applications in areas
such as large scale classification, neural language modeling and recommendation
systems. However, softmax estimation is very expensive for large scale
inference because of the high cost associated with computing the normalizing
constant. Here, we introduce an efficient approximation to softmax
probabilities which takes the form of a rigorous lower bound on the exact
probability. This bound is expressed as a product over pairwise probabilities
and it leads to scalable estimation based on stochastic optimization. It allows
us to perform doubly stochastic estimation by subsampling both training
instances and class labels. We show that the new bound has interesting
theoretical properties and we demonstrate its use in classification problems. | http://arxiv.org/pdf/1609.07410 | Michalis K. Titsias | stat.ML | To appear in NIPS 2016 | null | stat.ML | 20160923 | 20161029 | [
{
"id": "1609.07410"
}
] |
1609.07410 | 27 | # 4 Experiments
# 4.1 Toy example in large scale non-parametric estimation
Here, we illustrate the ability to stochastically maximize the bound in Eq. (9) for the simple non-parametric estimation case. In this case, we can also maximize the bound based on the analytic formulas and therefore we will be able to test how well the stochastic algorithm can approximate the optimal/known solution. We consider a data set of N = 10^6 instances each taking one out of K = 10^4 possible categorical values. The data were generated from a distribution p(k) ∝ u_k^2, where each u_k was randomly chosen in [0, 1]. The probabilities estimated based on the analytic formulas are shown in Figure 2a. To stochastically estimate these probabilities we follow the doubly stochastic framework of Section 3 so that we subsample data instances of minibatch size b = 100 and for each instance we subsample 10 remaining categorical values. We use a learning rate initialized to 0.5/b (and then decrease it by a factor of 0.9 after each epoch) and performed 2 × 10^5 iterations. Figure 2b shows the final values for the estimated probabilities, while Figure 2c shows the evolution of the estimation error during the optimization iterations. We can observe that the algorithm performs well and exhibits a typical stochastic approximation convergence. | 1609.07410#27 | One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities | The softmax representation of probabilities for categorical variables plays a
prominent role in modern machine learning with numerous applications in areas
such as large scale classification, neural language modeling and recommendation
systems. However, softmax estimation is very expensive for large scale
inference because of the high cost associated with computing the normalizing
constant. Here, we introduce an efficient approximation to softmax
probabilities which takes the form of a rigorous lower bound on the exact
probability. This bound is expressed as a product over pairwise probabilities
and it leads to scalable estimation based on stochastic optimization. It allows
us to perform doubly stochastic estimation by subsampling both training
instances and class labels. We show that the new bound has interesting
theoretical properties and we demonstrate its use in classification problems. | http://arxiv.org/pdf/1609.07410 | Michalis K. Titsias | stat.ML | To appear in NIPS 2016 | null | stat.ML | 20160923 | 20161029 | [
{
"id": "1609.07410"
}
] |
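A compact sketch of the doubly stochastic estimation just described (illustrative only; variable names are my own, the sizes are scaled down from the paper's 10^4 categories and 10^6 instances so it runs quickly, and the learning-rate decay per epoch is omitted). The exact value of the bound is evaluated periodically from the category counts to show the typical stochastic-approximation convergence.

```python
import numpy as np

rng = np.random.default_rng(1)
K, N = 500, 100_000                                   # scaled down from 10^4 / 10^6
u = rng.random(K)
data = rng.choice(K, size=N, p=u**2 / np.sum(u**2))   # draws from p(k) proportional to u_k^2
counts = np.bincount(data, minlength=K)

def ove_bound(f):
    """Exact value of the one-vs-each bound per data point, from category counts."""
    S = np.log1p(np.exp(f[None, :] - f[:, None]))     # log(1 + e^{f_m - f_k})
    np.fill_diagonal(S, 0.0)
    return -(counts * S.sum(axis=1)).sum() / N

f = np.zeros(K)                                       # one score per category
batch, num_neg, lr = 100, 10, 0.5 / 100
for it in range(1, 5_001):
    for y in data[rng.integers(0, N, size=batch)]:    # subsample instances
        m = rng.choice(K - 1, size=num_neg, replace=False)
        m += (m >= y)                                 # remaining categories != y
        w = 1.0 / (1.0 + np.exp(f[y] - f[m]))         # sigma(f_m - f_y)
        scale = (K - 1) / num_neg                     # keeps the gradient estimate unbiased
        f[y] += lr * scale * w.sum()                  # ascend the bound
        f[m] -= lr * scale * w
    if it % 1000 == 0:
        print(it, ove_bound(f))                       # should increase and level off
```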
1609.07410 | 28 | [Figure 2: three panels; (a) and (b) plot Estimated Probability (scale 10^-4) against Values, (c) plots Error against Iterations (scale 10^5).]
Figure 2: (a) shows the optimally estimated probabilities which have been sorted for visualization purposes. (b) shows the corresponding probabilities estimated by stochastic optimization. (c) shows the absolute norm for the vector of differences between exact estimates and stochastic estimates.
6 | 1609.07410#28 | One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities | The softmax representation of probabilities for categorical variables plays a
prominent role in modern machine learning with numerous applications in areas
such as large scale classification, neural language modeling and recommendation
systems. However, softmax estimation is very expensive for large scale
inference because of the high cost associated with computing the normalizing
constant. Here, we introduce an efficient approximation to softmax
probabilities which takes the form of a rigorous lower bound on the exact
probability. This bound is expressed as a product over pairwise probabilities
and it leads to scalable estimation based on stochastic optimization. It allows
us to perform doubly stochastic estimation by subsampling both training
instances and class labels. We show that the new bound has interesting
theoretical properties and we demonstrate its use in classification problems. | http://arxiv.org/pdf/1609.07410 | Michalis K. Titsias | stat.ML | To appear in NIPS 2016 | null | stat.ML | 20160923 | 20161029 | [
{
"id": "1609.07410"
}
] |
1609.07410 | 29 | 6
# 4.2 Classification
Small scale classification comparisons. Here, we wish to investigate whether the proposed lower bound on the softmax is a good surrogate for exact softmax training in classification. More precisely, we wish to compare the parameter estimates obtained by the one-vs-each bound with the estimates obtained by exact softmax training. To quantify closeness we use the normalized absolute norm
norm = \frac{|w_{softmax} - w_*|}{|w_{softmax}|}, \quad (17) | 1609.07410#29 | One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities | The softmax representation of probabilities for categorical variables plays a
prominent role in modern machine learning with numerous applications in areas
such as large scale classification, neural language modeling and recommendation
systems. However, softmax estimation is very expensive for large scale
inference because of the high cost associated with computing the normalizing
constant. Here, we introduce an efficient approximation to softmax
probabilities which takes the form of a rigorous lower bound on the exact
probability. This bound is expressed as a product over pairwise probabilities
and it leads to scalable estimation based on stochastic optimization. It allows
us to perform doubly stochastic estimation by subsampling both training
instances and class labels. We show that the new bound has interesting
theoretical properties and we demonstrate its use in classification problems. | http://arxiv.org/pdf/1609.07410 | Michalis K. Titsias | stat.ML | To appear in NIPS 2016 | null | stat.ML | 20160923 | 20161029 | [
{
"id": "1609.07410"
}
] |
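For reference, a small helper (illustrative, not from the paper; the Euclidean norm is assumed for |.| in Eq. (17)) that computes the parameter-closeness score together with the test classification error used in these comparisons.

```python
import numpy as np

def closeness_norm(w_softmax, w_approx):
    """Normalized absolute norm of Eq. (17): |w_softmax - w_*| / |w_softmax|."""
    w_softmax, w_approx = np.ravel(w_softmax), np.ravel(w_approx)
    return np.linalg.norm(w_softmax - w_approx) / np.linalg.norm(w_softmax)

def classification_error(W, X_test, t_test):
    """Fraction of test points whose predicted label differs from the true label."""
    y_pred = np.argmax(X_test @ W.T, axis=1)
    return np.mean(y_pred != t_test)
```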
where t_i denotes the true label of a test point and y_i the predicted one. We trained the linear multi-class model of Section 3 with the following alternative methods: exact softmax training (SOFT), the one-vs-each bound (OVE), the stochastically optimized one-vs-each bound (OVE-SGD) and Bouchard's bound (BOUCHARD). For all approaches, the associated cost function was maximized together with an added regularization penalty term, -\frac{1}{2}\lambda ||w||^2, which ensures that the global maximum of the cost function is achieved for finite w. Since we want to investigate how well we surrogate exact softmax training, we used the same fixed value λ = 1 in all experiments. We considered three small scale multiclass classification datasets: MNIST^2, 20NEWS^3 and BIBTEX (Katakis et al., 2008); see Table 1 for details. Notice that BIBTEX is originally a multi-label classification dataset (Bhatia et al., 2015), where each example may have more than one label. Here, we maintained only a single label for each data point in order to apply standard multiclass classification. The | 1609.07410#31 | One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities | The softmax representation of probabilities for categorical variables plays a
prominent role in modern machine learning with numerous applications in areas
such as large scale classification, neural language modeling and recommendation
systems. However, softmax estimation is very expensive for large scale
inference because of the high cost associated with computing the normalizing
constant. Here, we introduce an efficient approximation to softmax
probabilities which takes the form of a rigorous lower bound on the exact
probability. This bound is expressed as a product over pairwise probabilities
and it leads to scalable estimation based on stochastic optimization. It allows
us to perform doubly stochastic estimation by subsampling both training
instances and class labels. We show that the new bound has interesting
theoretical properties and we demonstrate its use in classification problems. | http://arxiv.org/pdf/1609.07410 | Michalis K. Titsias | stat.ML | To appear in NIPS 2016 | null | stat.ML | 20160923 | 20161029 | [
{
"id": "1609.07410"
}
] |
1609.07410 | 33 | Figure 3 displays convergence of the lower bounds (and of the exact softmax cost) for all methods. Recall that the methods SOFT, OVE and BOUCHARD are non-stochastic and therefore their optimization can be carried out by standard gradient descent. Notice that in all three datasets the one-vs-each bound gets much closer to the exact softmax cost compared to Bouchard's bound. Thus, OVE tends to give a tighter bound despite the fact that it does not contain any variational parameters, while BOUCHARD has N extra variational parameters, i.e. as many as the training instances. The application of the OVE-SGD method (the stochastic version of OVE) is based on a doubly stochastic scheme where we subsample minibatches of size 200 and subsample remaining classes of size one. We can observe that OVE-SGD is able to stochastically approach its maximum value which corresponds to OVE. | 1609.07410#33 | One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities | The softmax representation of probabilities for categorical variables plays a
prominent role in modern machine learning with numerous applications in areas
such as large scale classification, neural language modeling and recommendation
systems. However, softmax estimation is very expensive for large scale
inference because of the high cost associated with computing the normalizing
constant. Here, we introduce an efficient approximation to softmax
probabilities which takes the form of a rigorous lower bound on the exact
probability. This bound is expressed as a product over pairwise probabilities
and it leads to scalable estimation based on stochastic optimization. It allows
us to perform doubly stochastic estimation by subsampling both training
instances and class labels. We show that the new bound has interesting
theoretical properties and we demonstrate its use in classification problems. | http://arxiv.org/pdf/1609.07410 | Michalis K. Titsias | stat.ML | To appear in NIPS 2016 | null | stat.ML | 20160923 | 20161029 | [
{
"id": "1609.07410"
}
] |
1609.07410 | 34 | Table 2 shows the parameter closeness score from Eq. (17) as well as the classiï¬cation predictive scores. We can observe that OVE and OVE-SGD provide parameters closer to those of SOFT than the parameters provided by BOUCHARD. Also, the predictive scores for OVE and OVE-SGD are similar to SOFT, although they tend to be slightly worse. Interestingly, BOUCHARD gives the best classiï¬cation error, even better than the exact softmax training, but at the same time it always gives the worst nlpd which suggests sensitivity to overï¬tting. However, recall that the regularization parameter λ was ï¬xed to the value one and it was not optimized separately for each method using cross validation. Also notice that BOUCHARD cannot be easily scaled up (with stochastic optimization) to massive datasets since it introduces an extra variational parameter for each training instance. | 1609.07410#34 | One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities | The softmax representation of probabilities for categorical variables plays a
prominent role in modern machine learning with numerous applications in areas
such as large scale classification, neural language modeling and recommendation
systems. However, softmax estimation is very expensive for large scale
inference because of the high cost associated with computing the normalizing
constant. Here, we introduce an efficient approximation to softmax
probabilities which takes the form of a rigorous lower bound on the exact
probability. This bound is expressed as a product over pairwise probabilities
and it leads to scalable estimation based on stochastic optimization. It allows
us to perform doubly stochastic estimation by subsampling both training
instances and class labels. We show that the new bound has interesting
theoretical properties and we demonstrate its use in classification problems. | http://arxiv.org/pdf/1609.07410 | Michalis K. Titsias | stat.ML | To appear in NIPS 2016 | null | stat.ML | 20160923 | 20161029 | [
{
"id": "1609.07410"
}
] |
1609.07410 | 35 | Large scale classification. Here, we consider AMAZONCAT-13K (see footnote 4) which is a large scale classification dataset. This dataset is originally multi-labelled (Bhatia et al., 2015) and here we maintained only a single label, as done for the BIBTEX dataset, in order to apply standard multiclass classification. This dataset is also highly imbalanced: there are about 15 classes that contain half of the training instances, while many classes have very few (or just a single) training instances.
Footnotes: 2. http://yann.lecun.com/exdb/mnist 3. http://qwone.com/~jason/20Newsgroups/ 4. http://research.microsoft.com/en-us/um/people/manik/downloads/XC/XMLRepository.html
7
Table 1: Summaries of the classiï¬cation datasets.
Name            Dimensionality   Classes   Training examples   Test examples
MNIST           784              10        60000               10000
20NEWS          61188            20        11269               7505
BIBTEX          1836             148       4880                2515
AMAZONCAT-13K   203882           2919      1186239             306759
Table 2: Score measures for the small scale classiï¬cation datasets. | 1609.07410#35 | One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities | The softmax representation of probabilities for categorical variables plays a
prominent role in modern machine learning with numerous applications in areas
such as large scale classification, neural language modeling and recommendation
systems. However, softmax estimation is very expensive for large scale
inference because of the high cost associated with computing the normalizing
constant. Here, we introduce an efficient approximation to softmax
probabilities which takes the form of a rigorous lower bound on the exact
probability. This bound is expressed as a product over pairwise probabilities
and it leads to scalable estimation based on stochastic optimization. It allows
us to perform doubly stochastic estimation by subsampling both training
instances and class labels. We show that the new bound has interesting
theoretical properties and we demonstrate its use in classification problems. | http://arxiv.org/pdf/1609.07410 | Michalis K. Titsias | stat.ML | To appear in NIPS 2016 | null | stat.ML | 20160923 | 20161029 | [
{
"id": "1609.07410"
}
] |
1609.07410 | 37 | [Figure 3: four panels plotting the lower bound value against iterations; (a)-(c) compare SOFT, OVE, OVE-SGD and BOUCHARD, and (d) shows OVE-SGD alone.] | 1609.07410#37 | One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities | The softmax representation of probabilities for categorical variables plays a
prominent role in modern machine learning with numerous applications in areas
such as large scale classification, neural language modeling and recommendation
systems. However, softmax estimation is very expensive for large scale
inference because of the high cost associated with computing the normalizing
constant. Here, we introduce an efficient approximation to softmax
probabilities which takes the form of a rigorous lower bound on the exact
probability. This bound is expressed as a product over pairwise probabilities
and it leads to scalable estimation based on stochastic optimization. It allows
us to perform doubly stochastic estimation by subsampling both training
instances and class labels. We show that the new bound has interesting
theoretical properties and we demonstrate its use in classification problems. | http://arxiv.org/pdf/1609.07410 | Michalis K. Titsias | stat.ML | To appear in NIPS 2016 | null | stat.ML | 20160923 | 20161029 | [
{
"id": "1609.07410"
}
] |
1609.07410 | 38 | Figure 3: (a) shows the evolution of the lower bound values for MNIST, (b) for 20NEWS and (c) for BIBTEX. For more clear visualization the bounds of the stochastic OVE-SGD have been smoothed using a rolling window of 400 previous values. (d) shows the evolution of the OVE-SGD lower bound (scaled to correspond to a single data point) in the large scale AMAZONCAT-13K dataset. Here, the plotted values have been also smoothed using a rolling window of size 4000 and then thinned by a factor of 5.
Further, notice that in this large dataset the number of parameters we need to estimate for the linear classiï¬cation model is very large: K à (D + 1) = 2919 à 203883 parameters where the plus one accounts for the biases. All methods apart from OVE-SGD are practically very slow in this massive dataset, and therefore we consider OVE-SGD which is scalable. | 1609.07410#38 | One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities | The softmax representation of probabilities for categorical variables plays a
prominent role in modern machine learning with numerous applications in areas
such as large scale classification, neural language modeling and recommendation
systems. However, softmax estimation is very expensive for large scale
inference because of the high cost associated with computing the normalizing
constant. Here, we introduce an efficient approximation to softmax
probabilities which takes the form of a rigorous lower bound on the exact
probability. This bound is expressed as a product over pairwise probabilities
and it leads to scalable estimation based on stochastic optimization. It allows
us to perform doubly stochastic estimation by subsampling both training
instances and class labels. We show that the new bound has interesting
theoretical properties and we demonstrate its use in classification problems. | http://arxiv.org/pdf/1609.07410 | Michalis K. Titsias | stat.ML | To appear in NIPS 2016 | null | stat.ML | 20160923 | 20161029 | [
{
"id": "1609.07410"
}
] |
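A quick back-of-the-envelope check of that parameter count (assuming 32-bit floats for storage, which is my assumption):

```python
K, D = 2919, 203882
n_params = K * (D + 1)                   # 2919 * 203883 = 595,134,477 weights and biases
print(n_params)                          # roughly 0.6 billion parameters
print(n_params * 4 / 1024**3, "GiB")     # about 2.2 GiB if stored as float32
```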
We applied OVE-SGD where at each stochastic gradient update we consider a single training instance (i.e. the minibatch size was one) and for that instance we randomly select five remaining classes. This leads to sparse parameter updates, where the score function parameters of only six classes (the class of the current training instance plus the five remaining ones) are updated at each iteration. We used a very small learning rate having value 10^-8 and we performed five epochs across the full dataset, that is we performed in total 5 × 1186239 stochastic gradient updates. After each epoch we halve the value of the learning rate before the next epoch starts. By taking into account also the sparsity of the input vectors each iteration is very fast and full training is completed in just 26 minutes on a stand-alone PC. The evolution of the variational lower bound that indicates convergence is shown in Figure 3d. Finally, the classification error on test data was 53.11% which is significantly better than random guessing or than a method that always predicts the most populated class (in AMAZONCAT-13K the most populated class occupies 19% of the data so the error of that method is around 79%).
# 5 Discussion | 1609.07410#39 | One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities | The softmax representation of probabilities for categorical variables plays a
prominent role in modern machine learning with numerous applications in areas
such as large scale classification, neural language modeling and recommendation
systems. However, softmax estimation is very expensive for large scale
inference because of the high cost associated with computing the normalizing
constant. Here, we introduce an efficient approximation to softmax
probabilities which takes the form of a rigorous lower bound on the exact
probability. This bound is expressed as a product over pairwise probabilities
and it leads to scalable estimation based on stochastic optimization. It allows
us to perform doubly stochastic estimation by subsampling both training
instances and class labels. We show that the new bound has interesting
theoretical properties and we demonstrate its use in classification problems. | http://arxiv.org/pdf/1609.07410 | Michalis K. Titsias | stat.ML | To appear in NIPS 2016 | null | stat.ML | 20160923 | 20161029 | [
{
"id": "1609.07410"
}
] |
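The sparse update pattern described in the preceding chunk can be sketched as follows (a simplified illustration under stated assumptions, not the authors' implementation): the input is given by its nonzero coordinates, each step reads and writes only six rows of the weight matrix, and the (K-1)/5 rescaling that keeps the subsampled gradient unbiased is my addition. The learning-rate halving between epochs is left to the caller.

```python
import numpy as np

def ove_sgd_sparse_step(W, b, x_idx, x_val, y, lr, num_neg=5, rng=np.random):
    """One OVE-SGD ascent step for a single sparse instance.

    W: (K, D) weight matrix, b: (K,) biases.  The input x is represented by its
    nonzero coordinates (x_idx, x_val).  Only the row of the true class and the
    rows of the `num_neg` sampled remaining classes are touched.
    """
    K = W.shape[0]
    remaining = np.delete(np.arange(K), y)            # O(K); fine for a sketch
    neg = rng.choice(remaining, size=num_neg, replace=False)
    rows = np.concatenate(([y], neg))

    scores = W[rows][:, x_idx] @ x_val + b[rows]      # f_k(x) for the touched classes
    sig = 1.0 / (1.0 + np.exp(-(scores[1:] - scores[0])))   # sigma(f_m - f_y)
    scale = (K - 1) / num_neg                         # unbiased rescaling

    W[y, x_idx] += lr * scale * sig.sum() * x_val     # sparse row updates only
    b[y] += lr * scale * sig.sum()
    for j, m in enumerate(neg):
        W[m, x_idx] -= lr * scale * sig[j] * x_val
        b[m] -= lr * scale * sig[j]
```

A full run would loop over shuffled training instances calling this step once per instance and halve `lr` after each epoch, as described above.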
1609.07410 | 40 | # 5 Discussion
We have presented the one-vs-each lower bound on softmax probabilities and we have analyzed its theoretical properties. This bound is just the most extreme case of a full family of hierarchically ordered bounds. We have explored the ability of the bound to perform parameter estimation through stochastic optimization in models having a large number of categorical symbols, and we have demonstrated this ability on classification problems.
There are several directions for future research. Firstly, it is worth investigating the usefulness of the bound in different applications from classiï¬cation, such as for learning word embeddings in natural
8
language processing and for training recommendation systems. Another interesting direction is to consider the bound not for point estimation, as done in this paper, but for Bayesian estimation using variational inference.
# Acknowledgments
We thank the reviewers for insightful comments. We would like also to thank Francisco J. R. Ruiz for useful discussions and David Blei for suggesting the name one-vs-each for the proposed method.
# A Proof of Proposition 3
Here we re-state and prove Proposition 3. | 1609.07410#40 | One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities | The softmax representation of probabilities for categorical variables plays a
prominent role in modern machine learning with numerous applications in areas
such as large scale classification, neural language modeling and recommendation
systems. However, softmax estimation is very expensive for large scale
inference because of the high cost associated with computing the normalizing
constant. Here, we introduce an efficient approximation to softmax
probabilities which takes the form of a rigorous lower bound on the exact
probability. This bound is expressed as a product over pairwise probabilities
and it leads to scalable estimation based on stochastic optimization. It allows
us to perform doubly stochastic estimation by subsampling both training
instances and class labels. We show that the new bound has interesting
theoretical properties and we demonstrate its use in classification problems. | http://arxiv.org/pdf/1609.07410 | Michalis K. Titsias | stat.ML | To appear in NIPS 2016 | null | stat.ML | 20160923 | 20161029 | [
{
"id": "1609.07410"
}
] |
1609.07410 | 43 | Proof. The lower bound is written as
N_1(f_1 - \alpha) + N_2(f_2 - \alpha) - (N_1 + N_2)\left[\log(1 + e^{f_1 - \alpha}) + \log(1 + e^{f_2 - \alpha})\right].
We will first maximize this quantity wrt α. For that it suffices to minimize the following upper bound on the log-sum-exp function
\alpha + \log(1 + e^{f_1 - \alpha}) + \log(1 + e^{f_2 - \alpha}),
which is a convex function of α. By taking the derivative wrt α and setting to zero we obtain the stationary condition
\frac{e^{f_1 - \alpha}}{1 + e^{f_1 - \alpha}} + \frac{e^{f_2 - \alpha}}{1 + e^{f_2 - \alpha}} = 1.
Clearly, the value of α that satisfies the condition is α = (f_1 + f_2)/2. Now if we substitute this value back into the initial bound we have
N_1 \frac{f_1 - f_2}{2} + N_2 \frac{f_2 - f_1}{2} - (N_1 + N_2)\left[\log\left(1 + e^{\frac{f_1 - f_2}{2}}\right) + \log\left(1 + e^{\frac{f_2 - f_1}{2}}\right)\right] | 1609.07410#43 | One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities | The softmax representation of probabilities for categorical variables plays a
prominent role in modern machine learning with numerous applications in areas
such as large scale classification, neural language modeling and recommendation
systems. However, softmax estimation is very expensive for large scale
inference because of the high cost associated with computing the normalizing
constant. Here, we introduce an efficient approximation to softmax
probabilities which takes the form of a rigorous lower bound on the exact
probability. This bound is expressed as a product over pairwise probabilities
and it leads to scalable estimation based on stochastic optimization. It allows
us to perform doubly stochastic estimation by subsampling both training
instances and class labels. We show that the new bound has interesting
theoretical properties and we demonstrate its use in classification problems. | http://arxiv.org/pdf/1609.07410 | Michalis K. Titsias | stat.ML | To appear in NIPS 2016 | null | stat.ML | 20160923 | 20161029 | [
{
"id": "1609.07410"
}
] |
1609.07410 | 44 | which is concave wrt f_1 and f_2. Then, by taking derivatives wrt f_1 and f_2 we obtain the conditions
\frac{N_1 - N_2}{2} = \frac{N_1 + N_2}{2}\left[\frac{e^{\frac{f_1 - f_2}{2}}}{1 + e^{\frac{f_1 - f_2}{2}}} - \frac{e^{\frac{f_2 - f_1}{2}}}{1 + e^{\frac{f_2 - f_1}{2}}}\right]
\frac{N_2 - N_1}{2} = \frac{N_1 + N_2}{2}\left[\frac{e^{\frac{f_2 - f_1}{2}}}{1 + e^{\frac{f_2 - f_1}{2}}} - \frac{e^{\frac{f_1 - f_2}{2}}}{1 + e^{\frac{f_1 - f_2}{2}}}\right]
Now we can observe that these conditions are satisfied by f_1 = 2 log N_1 + c and f_2 = 2 log N_2 + c, which gives the global maximizer since F(f_1, f_2, α) is concave.
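A quick numerical sanity check of this maximizer (illustrative only; the values of N_1, N_2, c and the perturbation scale are arbitrary choices): random perturbations of (f_1, f_2, α) around the claimed optimum should never increase the bound.

```python
import numpy as np

def bound(f1, f2, alpha, N1, N2):
    return (N1 * (f1 - alpha) + N2 * (f2 - alpha)
            - (N1 + N2) * (np.log1p(np.exp(f1 - alpha)) + np.log1p(np.exp(f2 - alpha))))

N1, N2, c = 30.0, 70.0, 0.0
f1, f2 = 2 * np.log(N1) + c, 2 * np.log(N2) + c
alpha = (f1 + f2) / 2
best = bound(f1, f2, alpha, N1, N2)

rng = np.random.default_rng(0)
for _ in range(1000):                     # random perturbations never do better
    d = rng.normal(scale=0.5, size=3)
    assert bound(f1 + d[0], f2 + d[1], alpha + d[2], N1, N2) <= best + 1e-9
print("maximum value:", best)
```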
9 | 1609.07410#44 | One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities | The softmax representation of probabilities for categorical variables plays a
prominent role in modern machine learning with numerous applications in areas
such as large scale classification, neural language modeling and recommendation
systems. However, softmax estimation is very expensive for large scale
inference because of the high cost associated with computing the normalizing
constant. Here, we introduce an efficient approximation to softmax
probabilities which takes the form of a rigorous lower bound on the exact
probability. This bound is expressed as a product over pairwise probabilities
and it leads to scalable estimation based on stochastic optimization. It allows
us to perform doubly stochastic estimation by subsampling both training
instances and class labels. We show that the new bound has interesting
theoretical properties and we demonstrate its use in classification problems. | http://arxiv.org/pdf/1609.07410 | Michalis K. Titsias | stat.ML | To appear in NIPS 2016 | null | stat.ML | 20160923 | 20161029 | [
{
"id": "1609.07410"
}
] |
1609.07410 | 45 | # References
Bengio, Y. and Sénécal, J.-S. (2003). Quick training of probabilistic neural nets by importance sampling. In Proceedings of the conference on Artiï¬cial Intelligence and Statistics (AISTATS).
Bhatia, K., Jain, H., Kar, P., Varma, M., and Jain, P. (2015). Sparse local embeddings for extreme multi-label classiï¬cation. In Cortes, C., Lawrence, N. D., Lee, D. D., Sugiyama, M., and Garnett, R., editors, Advances in Neural Information Processing Systems 28, pages 730â738. Curran Associates, Inc.
Bishop, C. M. (2006). Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA.
Bohning, D. (1992). Multinomial logistic regression algorithm. Annals of the Inst. of Statistical Math, 44:197â 200.
Bouchard, G. (2007). Efï¬cient bounds for the softmax function and applications to approximate inference in hybrid models. Technical report. | 1609.07410#45 | One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities | The softmax representation of probabilities for categorical variables plays a
prominent role in modern machine learning with numerous applications in areas
such as large scale classification, neural language modeling and recommendation
systems. However, softmax estimation is very expensive for large scale
inference because of the high cost associated with computing the normalizing
constant. Here, we introduce an efficient approximation to softmax
probabilities which takes the form of a rigorous lower bound on the exact
probability. This bound is expressed as a product over pairwise probabilities
and it leads to scalable estimation based on stochastic optimization. It allows
us to perform doubly stochastic estimation by subsampling both training
instances and class labels. We show that the new bound has interesting
theoretical properties and we demonstrate its use in classification problems. | http://arxiv.org/pdf/1609.07410 | Michalis K. Titsias | stat.ML | To appear in NIPS 2016 | null | stat.ML | 20160923 | 20161029 | [
{
"id": "1609.07410"
}
] |
1609.07410 | 46 | Bouchard, G. (2007). Efï¬cient bounds for the softmax function and applications to approximate inference in hybrid models. Technical report.
Bradley, R. A. and Terry, M. E. (1952). Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39(3/4):324â345.
Devlin, J., Zbib, R., Huang, Z., Lamar, T., Schwartz, R., and Makhoul, J. (2014). Fast and robust neural net- In Proceedings of the 52nd Annual Meeting of the work joint models for statistical machine translation. Association for Computational Linguistics (Volume 1: Long Papers), pages 1370â1380, Baltimore, Mary- land. Association for Computational Linguistics.
Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep learning. Book in preparation for MIT Press.
In Dasgupta, S. and Mcallester, D., editors, Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 289â297. JMLR Workshop and Conference Proceedings.
Huang, T.-K., Weng, R. C., and Lin, C.-J. (2006). Generalized Bradley-Terry models and multi-class probability estimates. J. Mach. Learn. Res., 7:85â115. | 1609.07410#46 | One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities | The softmax representation of probabilities for categorical variables plays a
prominent role in modern machine learning with numerous applications in areas
such as large scale classification, neural language modeling and recommendation
systems. However, softmax estimation is very expensive for large scale
inference because of the high cost associated with computing the normalizing
constant. Here, we introduce an efficient approximation to softmax
probabilities which takes the form of a rigorous lower bound on the exact
probability. This bound is expressed as a product over pairwise probabilities
and it leads to scalable estimation based on stochastic optimization. It allows
us to perform doubly stochastic estimation by subsampling both training
instances and class labels. We show that the new bound has interesting
theoretical properties and we demonstrate its use in classification problems. | http://arxiv.org/pdf/1609.07410 | Michalis K. Titsias | stat.ML | To appear in NIPS 2016 | null | stat.ML | 20160923 | 20161029 | [
{
"id": "1609.07410"
}
] |
1609.07410 | 47 | Ji, S., Vishwanathan, S. V. N., Satish, N., Anderson, M. J., and Dubey, P. (2015). Blackout: Speeding up recurrent neural network language models with very large vocabularies.
Katakis, I., Tsoumakas, G., and Vlahavas, I. (2008). Multilabel text classiï¬cation for automated tag suggestion. In In: Proceedings of the ECML/PKDD-08 Workshop on Discovery Challenge.
Khan, M. E., Mohamed, S., Marlin, B. M., and Murphy, K. P. (2012). A stick-breaking likelihood for categorical data analysis with latent Gaussian models. In Proceedings of the Fifteenth International Conference on Artiï¬cial Intelligence and Statistics, AISTATS 2012, La Palma, Canary Islands, April 21-23, 2012, pages 610â618. | 1609.07410#47 | One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities | The softmax representation of probabilities for categorical variables plays a
prominent role in modern machine learning with numerous applications in areas
such as large scale classification, neural language modeling and recommendation
systems. However, softmax estimation is very expensive for large scale
inference because of the high cost associated with computing the normalizing
constant. Here, we introduce an efficient approximation to softmax
probabilities which takes the form of a rigorous lower bound on the exact
probability. This bound is expressed as a product over pairwise probabilities
and it leads to scalable estimation based on stochastic optimization. It allows
us to perform doubly stochastic estimation by subsampling both training
instances and class labels. We show that the new bound has interesting
theoretical properties and we demonstrate its use in classification problems. | http://arxiv.org/pdf/1609.07410 | Michalis K. Titsias | stat.ML | To appear in NIPS 2016 | null | stat.ML | 20160923 | 20161029 | [
{
"id": "1609.07410"
}
] |
1609.07410 | 48 | Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Burges, C. J. C., Bottou, L., Welling, M., Ghahramani, Z., and Weinberger, K. Q., editors, Advances in Neural Information Processing Systems 26, pages 3111â3119. Curran Associates, Inc.
Mnih, A. and Teh, Y. W. (2012). A fast and simple algorithm for training neural probabilistic language models. In Proceedings of the 29th International Conference on Machine Learning, pages 1751â1758.
Morin, F. and Bengio, Y. (2005). Hierarchical probabilistic neural network language model. In Proceedings of the Tenth International Workshop on Artiï¬cial Intelligence and Statistics, pages 246â252. Citeseer.
Paquet, U., Koenigstein, N., and Winther, O. (2012). Scalable Bayesian modelling of paired symbols. CoRR, abs/1409.2824.
In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532â1543, Doha, Qatar. Association for Computational Linguistics. | 1609.07410#48 | One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities | The softmax representation of probabilities for categorical variables plays a
prominent role in modern machine learning with numerous applications in areas
such as large scale classification, neural language modeling and recommendation
systems. However, softmax estimation is very expensive for large scale
inference because of the high cost associated with computing the normalizing
constant. Here, we introduce an efficient approximation to softmax
probabilities which takes the form of a rigorous lower bound on the exact
probability. This bound is expressed as a product over pairwise probabilities
and it leads to scalable estimation based on stochastic optimization. It allows
us to perform doubly stochastic estimation by subsampling both training
instances and class labels. We show that the new bound has interesting
theoretical properties and we demonstrate its use in classification problems. | http://arxiv.org/pdf/1609.07410 | Michalis K. Titsias | stat.ML | To appear in NIPS 2016 | null | stat.ML | 20160923 | 20161029 | [
{
"id": "1609.07410"
}
] |
1609.07061 | 0 | arXiv:1609.07061v1 [cs.NE] 22 Sep 2016
Quantized Neural Networks
# Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations
Itay Hubara* Department of Electrical Engineering Technion - Israel Institute of Technology Haifa, Israel
[email protected]
Matthieu Courbariaux* Department of Computer Science and Department of Statistics Universit´e de Montr´eal Montr´eal, Canada
[email protected]
# Daniel Soudry Department of Statistics Columbia University New York, USA
[email protected]
Ran El-Yaniv Department of Computer Science Technion - Israel Institute of Technology Haifa, Israel
[email protected]
Yoshua Bengio Department of Computer Science and Department of Statistics Universit´e de Montr´eal Montr´eal, Canada
[email protected]
*Indicates ï¬rst authors.
Editor:
# Abstract | 1609.07061#0 | Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations | We introduce a method to train Quantized Neural Networks (QNNs) --- neural
networks with extremely low precision (e.g., 1-bit) weights and activations, at
run-time. At train-time the quantized weights and activations are used for
computing the parameter gradients. During the forward pass, QNNs drastically
reduce memory size and accesses, and replace most arithmetic operations with
bit-wise operations. As a result, power consumption is expected to be
drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and
ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to
their 32-bit counterparts. For example, our quantized version of AlexNet with
1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover,
we quantize the parameter gradients to 6-bits as well which enables gradients
computation using only bit-wise operation. Quantized recurrent neural networks
were tested over the Penn Treebank dataset, and achieved comparable accuracy as
their 32-bit counterparts using only 4-bits. Last but not least, we programmed
a binary matrix multiplication GPU kernel with which it is possible to run our
MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering
any loss in classification accuracy. The QNN code is available online. | http://arxiv.org/pdf/1609.07061 | Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio | cs.NE, cs.LG | arXiv admin note: text overlap with arXiv:1602.02830 | null | cs.NE | 20160922 | 20160922 | [
{
"id": "1509.08985"
},
{
"id": "1606.06160"
},
{
"id": "1503.03562"
},
{
"id": "1603.01025"
},
{
"id": "1606.01981"
},
{
"id": "1608.06902"
},
{
"id": "1603.05279"
},
{
"id": "1604.03168"
},
{
"id": "1504.04788"
},
{
"id": "1606.05487"
},
{
"id": "1510.00149"
}
] |
1609.07061 | 1 | Editor:
# Abstract
We introduce a method to train Quantized Neural Networks (QNNs): neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves 51% top-1 accuracy. Moreover, we quantize the parameter gradients to 6-bits as well, which enables gradient computation using only bit-wise operations. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved comparable accuracy as their 32-bit counterparts using only 4-bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.
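As a flavour of what replacing arithmetic with bit-wise operations can look like, here is a small self-contained sketch (not the paper's GPU kernel; the packing scheme and function names are illustrative): weights and activations are constrained to +/-1, packed into bit arrays, and the dot product is recovered from an XNOR followed by a popcount.

```python
import numpy as np

def binarize(v):
    """Map a real vector to +/-1 (the 1-bit values used at run-time)."""
    return np.where(v >= 0, 1, -1).astype(np.int8)

def xnor_dot(a_pm1, b_pm1):
    """Dot product of two +/-1 vectors using only bit operations.

    Encode +1 as bit 1 and -1 as bit 0; then dot = 2 * popcount(XNOR(a, b)) - n,
    because XNOR counts the positions where the signs agree.
    """
    n = a_pm1.size
    a_bits = np.packbits(a_pm1 > 0)
    b_bits = np.packbits(b_pm1 > 0)
    xnor = np.invert(a_bits ^ b_bits)              # agreeing positions become 1
    agree = int(np.unpackbits(xnor)[:n].sum())     # popcount over the n real bits
    return 2 * agree - n

rng = np.random.default_rng(0)
x, w = rng.normal(size=64), rng.normal(size=64)
xb, wb = binarize(x), binarize(w)
assert xnor_dot(xb, wb) == int(xb.astype(int) @ wb.astype(int))  # matches ordinary dot
```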
1 | 1609.07061#1 | Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations | We introduce a method to train Quantized Neural Networks (QNNs) --- neural
networks with extremely low precision (e.g., 1-bit) weights and activations, at
run-time. At train-time the quantized weights and activations are used for
computing the parameter gradients. During the forward pass, QNNs drastically
reduce memory size and accesses, and replace most arithmetic operations with
bit-wise operations. As a result, power consumption is expected to be
drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and
ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to
their 32-bit counterparts. For example, our quantized version of AlexNet with
1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover,
we quantize the parameter gradients to 6-bits as well which enables gradients
computation using only bit-wise operation. Quantized recurrent neural networks
were tested over the Penn Treebank dataset, and achieved comparable accuracy as
their 32-bit counterparts using only 4-bits. Last but not least, we programmed
a binary matrix multiplication GPU kernel with which it is possible to run our
MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering
any loss in classification accuracy. The QNN code is available online. | http://arxiv.org/pdf/1609.07061 | Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio | cs.NE, cs.LG | arXiv admin note: text overlap with arXiv:1602.02830 | null | cs.NE | 20160922 | 20160922 | [
{
"id": "1509.08985"
},
{
"id": "1606.06160"
},
{
"id": "1503.03562"
},
{
"id": "1603.01025"
},
{
"id": "1606.01981"
},
{
"id": "1608.06902"
},
{
"id": "1603.05279"
},
{
"id": "1604.03168"
},
{
"id": "1504.04788"
},
{
"id": "1606.05487"
},
{
"id": "1510.00149"
}
] |
1609.07061 | 2 | 1
Keywords: Deep Learning, Neural Networks Compression, Energy Efficient Neural Networks, Computer vision, Language Models.
# 1. Introduction
Deep Neural Networks (DNNs) have substantially pushed Artiï¬cial Intelligence (AI) lim- its in a wide range of tasks, including but not limited to object recognition from im- ages (Krizhevsky et al., 2012; Szegedy et al., 2014), speech recognition (Hinton et al., 2012; Sainath et al., 2013), statistical machine translation (Devlin et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015), Atari and Go games (Mnih et al., 2015; Silver et al., 2016), and even computer generation of abstract art (Mordvintsev et al., 2015). | 1609.07061#2 | Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations | We introduce a method to train Quantized Neural Networks (QNNs) --- neural
networks with extremely low precision (e.g., 1-bit) weights and activations, at
run-time. At train-time the quantized weights and activations are used for
computing the parameter gradients. During the forward pass, QNNs drastically
reduce memory size and accesses, and replace most arithmetic operations with
bit-wise operations. As a result, power consumption is expected to be
drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and
ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to
their 32-bit counterparts. For example, our quantized version of AlexNet with
1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover,
we quantize the parameter gradients to 6-bits as well which enables gradients
computation using only bit-wise operation. Quantized recurrent neural networks
were tested over the Penn Treebank dataset, and achieved comparable accuracy as
their 32-bit counterparts using only 4-bits. Last but not least, we programmed
a binary matrix multiplication GPU kernel with which it is possible to run our
MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering
any loss in classification accuracy. The QNN code is available online. | http://arxiv.org/pdf/1609.07061 | Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio | cs.NE, cs.LG | arXiv admin note: text overlap with arXiv:1602.02830 | null | cs.NE | 20160922 | 20160922 | [
{
"id": "1509.08985"
},
{
"id": "1606.06160"
},
{
"id": "1503.03562"
},
{
"id": "1603.01025"
},
{
"id": "1606.01981"
},
{
"id": "1608.06902"
},
{
"id": "1603.05279"
},
{
"id": "1604.03168"
},
{
"id": "1504.04788"
},
{
"id": "1606.05487"
},
{
"id": "1510.00149"
}
] |
1609.07061 | 3 | Training or even just using neural network (NN) algorithms on conventional general-purpose digital hardware (Von Neumann architecture) has been found highly inefficient due to the massive amount of multiply-accumulate operations (MACs) required to compute the weighted sums of the neurons' inputs. Today, DNNs are almost exclusively trained on one or many very fast and power-hungry Graphic Processing Units (GPUs) (Coates et al., 2013). As a result, it is often a challenge to run DNNs on target low-power devices, and substantial research efforts are invested in speeding up DNNs at run-time on both general-purpose (Vanhoucke et al., 2011; Gong et al., 2014; Romero et al., 2014; Han et al., 2015b) and specialized computer hardware (Farabet et al., 2011a,b; Pham et al., 2012; Chen et al., 2014a,b; Esser et al., 2015).
# sppvoach | 1609.07061#3 | Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations | We introduce a method to train Quantized Neural Networks (QNNs) --- neural
networks with extremely low precision (e.g., 1-bit) weights and activations, at
run-time. At train-time the quantized weights and activations are used for
computing the parameter gradients. During the forward pass, QNNs drastically
reduce memory size and accesses, and replace most arithmetic operations with
bit-wise operations. As a result, power consumption is expected to be
drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and
ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to
their 32-bit counterparts. For example, our quantized version of AlexNet with
1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover,
we quantize the parameter gradients to 6-bits as well which enables gradients
computation using only bit-wise operation. Quantized recurrent neural networks
were tested over the Penn Treebank dataset, and achieved comparable accuracy as
their 32-bit counterparts using only 4-bits. Last but not least, we programmed
a binary matrix multiplication GPU kernel with which it is possible to run our
MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering
any loss in classification accuracy. The QNN code is available online. | http://arxiv.org/pdf/1609.07061 | Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio | cs.NE, cs.LG | arXiv admin note: text overlap with arXiv:1602.02830 | null | cs.NE | 20160922 | 20160922 | [
{
"id": "1509.08985"
},
{
"id": "1606.06160"
},
{
"id": "1503.03562"
},
{
"id": "1603.01025"
},
{
"id": "1606.01981"
},
{
"id": "1608.06902"
},
{
"id": "1603.05279"
},
{
"id": "1604.03168"
},
{
"id": "1504.04788"
},
{
"id": "1606.05487"
},
{
"id": "1510.00149"
}
] |