doi: string (length 10–10)
chunk-id: int64 (0–936)
chunk: string (length 401–2.02k)
id: string (length 12–14)
title: string (length 8–162)
summary: string (length 228–1.92k)
source: string (length 31–31)
authors: string (length 7–6.97k)
categories: string (length 5–107)
comment: string (length 4–398)
journal_ref: string (length 8–194)
primary_category: string (length 5–17)
published: string (length 8–8)
updated: string (length 8–8)
references: list
1706.04599
40
Our most important discovery is the surprising effectiveness of temperature scaling despite its remarkable simplicity. Temperature scaling outperforms all other methods on the vision tasks, and performs comparably to other methods on the NLP datasets. What is perhaps even more surprising is that temperature scaling outperforms the vector and matrix Platt scaling variants, which are strictly more general methods. In fact, vector scaling recovers essentially the same solution as temperature scaling – the learned vector has nearly constant entries, and therefore is no different from a scalar transformation. In other words, network miscalibration is intrinsically low dimensional. The only dataset that temperature scaling does not calibrate is the Reuters dataset. In this instance, only one of the above methods is able to improve calibration. Because this dataset is well-calibrated to begin with (ECE ≤ 1%), there is not much room for improvement with any method, and post-processing may not even be necessary. It is also possible that our measurements are affected by the dataset split or by the particular binning scheme.
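A minimal sketch of the two transforms compared above, assuming NumPy and generic logit arrays (function names are mine, not from the paper's code): temperature scaling divides every logit by one scalar T, while vector scaling applies a per-class scale and bias; the observation in the text is that the fitted per-class scale ends up nearly constant.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def temperature_scale(logits, T):
    # One scalar T rescales all logits identically; the class ranking is unchanged.
    return softmax(logits / T)

def vector_scale(logits, w, b):
    # Per-class scale w and bias b: strictly more general than a single T.
    # Empirically the learned w has nearly constant entries, so the fitted
    # transform collapses to temperature scaling.
    return softmax(w * logits + b)
```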
1706.04599#40
On Calibration of Modern Neural Networks
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
http://arxiv.org/pdf/1706.04599
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
cs.LG
ICML 2017
null
cs.LG
20170614
20170803
[ { "id": "1610.08936" }, { "id": "1701.06548" }, { "id": "1612.01474" }, { "id": "1607.03594" }, { "id": "1604.07316" }, { "id": "1505.00387" }, { "id": "1703.04977" }, { "id": "1610.05256" } ]
1706.04599
41
[Figure 4 panels: Uncal., Temp. Scale, Hist. Bin., and Iso. Reg. reliability diagrams for a CIFAR-100 ResNet-110 (SD); x-axis: Confidence.] Figure 4. Reliability diagrams for CIFAR-100 before (far left) and after calibration (middle left, middle right, far right). Matrix scaling performs poorly on datasets with hundreds of classes (i.e. Birds, Cars, and CIFAR-100), and fails to converge on the 1000-class ImageNet dataset. This is expected, since the number of parameters scales quadratically with the number of classes. Any calibration model with tens of thousands (or more) parameters will overfit to a small validation set, even when applying regularization.
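As a back-of-the-envelope illustration of the quadratic parameter growth noted above (the helper below is mine, not from the paper): matrix scaling learns a K x K weight plus a K-dimensional bias, vector scaling learns 2K parameters, and temperature scaling learns one.

```python
def calibration_param_counts(num_classes):
    K = num_classes
    return {
        "temperature scaling": 1,       # a single scalar T
        "vector scaling": 2 * K,        # per-class scale and bias
        "matrix scaling": K * K + K,    # full linear map on the logits plus bias
    }

for K in (10, 100, 1000):  # e.g. CIFAR-10, CIFAR-100, ImageNet
    print(K, calibration_param_counts(K))
# At K = 1000, matrix scaling has roughly a million parameters, which is easy
# to overfit on a small validation set, consistent with the failure above.
```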
1706.04599#41
On Calibration of Modern Neural Networks
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
http://arxiv.org/pdf/1706.04599
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
cs.LG
ICML 2017
null
cs.LG
20170614
20170803
[ { "id": "1610.08936" }, { "id": "1701.06548" }, { "id": "1612.01474" }, { "id": "1607.03594" }, { "id": "1604.07316" }, { "id": "1505.00387" }, { "id": "1703.04977" }, { "id": "1610.05256" } ]
1706.04599
42
Binning methods improve calibration on most datasets, but do not outperform temperature scaling. Additionally, binning methods tend to change class predictions, which hurts accuracy (see Section S3). Histogram binning, the simplest binning method, typically outperforms isotonic regression and BBQ, despite the fact that both methods are strictly more general. This further supports our finding that calibration is best corrected by simple models. Reliability diagrams. Figure 4 contains reliability diagrams for 110-layer ResNets on CIFAR-100 before and after calibration. From the far left diagram, we see that the uncalibrated ResNet tends to be overconfident in its predictions. We can then observe the effects of temperature scaling (middle left), histogram binning (middle right), and isotonic regression (far right) on calibration. All three displayed methods produce much better confidence estimates. Of the three methods, temperature scaling most closely recovers the desired diagonal function. Each of the bins is well calibrated, which is remarkable given that all the probabilities were modified by only a single parameter. We include reliability diagrams for other datasets in Section S4.
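A reliability diagram of the kind discussed above can be tabulated with a few lines of NumPy; this is a hedged sketch (the bin layout and helper name are my assumptions, not the paper's code): predictions are grouped into M equal-width confidence bins, and per-bin accuracy is compared with per-bin average confidence.

```python
import numpy as np

def reliability_table(confidences, correct, n_bins=15):
    """Return (bin_lo, bin_hi, accuracy, avg_confidence, count) per non-empty bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            rows.append((lo, hi, correct[mask].mean(),
                         confidences[mask].mean(), int(mask.sum())))
    return rows  # a perfectly calibrated model has accuracy == avg_confidence in every bin
```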
1706.04599#42
On Calibration of Modern Neural Networks
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
http://arxiv.org/pdf/1706.04599
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
cs.LG
ICML 2017
null
cs.LG
20170614
20170803
[ { "id": "1610.08936" }, { "id": "1701.06548" }, { "id": "1612.01474" }, { "id": "1607.03594" }, { "id": "1604.07316" }, { "id": "1505.00387" }, { "id": "1703.04977" }, { "id": "1610.05256" } ]
1706.04599
43
Computation time. All methods scale linearly with the number of validation set samples. Temperature scaling is by far the fastest method, as it amounts to a one-dimensional convex optimization problem. Using a conjugate gradient solver, the optimal temperature can be found in 10 iterations, or a fraction of a second on most modern hardware. In fact, even a naive line search for the optimal temperature is faster than any of the other methods. The computational complexities of vector and matrix scaling are linear and quadratic, respectively, in the number of classes, reflecting the number of parameters in each method. For CIFAR-100 (K = 100), finding a near-optimal vector scaling solution with conjugate gradient descent requires at least 2 orders of magnitude more time. Histogram binning and isotonic regression take an order of magnitude longer than temperature scaling, and BBQ takes roughly 3 orders of magnitude more time.
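A sketch of the one-dimensional optimization described above, using a generic bounded scalar optimizer as a stand-in for the conjugate gradient solver mentioned in the text (the bounds and helper names are my assumptions): T is chosen to minimize the validation NLL of softmax(logits / T).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def nll_at_temperature(T, logits, labels):
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)                       # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()   # mean negative log-likelihood

def fit_temperature(val_logits, val_labels):
    res = minimize_scalar(nll_at_temperature, bounds=(0.05, 10.0),
                          args=(val_logits, val_labels), method="bounded")
    return res.x  # the fitted temperature T
```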
1706.04599#43
On Calibration of Modern Neural Networks
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
http://arxiv.org/pdf/1706.04599
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
cs.LG
ICML 2017
null
cs.LG
20170614
20170803
[ { "id": "1610.08936" }, { "id": "1701.06548" }, { "id": "1612.01474" }, { "id": "1607.03594" }, { "id": "1604.07316" }, { "id": "1505.00387" }, { "id": "1703.04977" }, { "id": "1610.05256" } ]
1706.04599
44
Ease of implementation. BBQ is arguably the most difficult to implement, as it requires implementing a model averaging scheme. While all other methods are relatively easy to implement, temperature scaling may arguably be the most straightforward to incorporate into a neural network pipeline. In Torch7 (Collobert et al., 2011), for example, we implement temperature scaling by inserting a nn.MulConstant between the logits and the softmax, whose parameter is 1/T. We set T = 1 during training, and subsequently find its optimal value on the validation set.4 # 6. Conclusion Modern neural networks exhibit a strange phenomenon: probabilistic error and miscalibration worsen even as classification error is reduced. We have demonstrated that recent advances in neural network architecture and training – model capacity, normalization, and regularization – have strong effects on network calibration. It remains future work to understand why these trends affect calibration while improving accuracy. Nevertheless, simple techniques can effectively remedy the miscalibration phenomenon in neural networks. Temperature scaling is the simplest, fastest, and most straightforward of the methods, and surprisingly is often the most effective. 4 For an example implementation, see http://github.com/gpleiss/temperature_scaling. # Acknowledgments
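The paper's own implementation is in Torch7 (and the footnoted repository); purely as an assumption-laden sketch rather than the authors' code, the same idea in a modern PyTorch pipeline is a single learnable temperature dividing the logits:

```python
import torch
import torch.nn as nn

class TemperatureScaler(nn.Module):
    def __init__(self):
        super().__init__()
        self.temperature = nn.Parameter(torch.ones(1))  # T, kept at 1 during training

    def forward(self, logits):
        return logits / self.temperature

# After training the classifier with T = 1, T would be fit on held-out
# validation logits, e.g. by minimizing nn.CrossEntropyLoss over `temperature`
# with an optimizer such as LBFGS, leaving all network weights frozen.
```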
1706.04599#44
On Calibration of Modern Neural Networks
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
http://arxiv.org/pdf/1706.04599
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
cs.LG
ICML 2017
null
cs.LG
20170614
20170803
[ { "id": "1610.08936" }, { "id": "1701.06548" }, { "id": "1612.01474" }, { "id": "1607.03594" }, { "id": "1604.07316" }, { "id": "1505.00387" }, { "id": "1703.04977" }, { "id": "1610.05256" } ]
1706.04599
45
4 For an example implementation, see http://github.com/gpleiss/temperature_scaling. # Acknowledgments The authors are supported in part by the III-1618134, III-1526012, and IIS-1149882 grants from the National Science Foundation, as well as the Bill and Melinda Gates Foundation and the Office of Naval Research. # References Al-Shedivat, Maruan, Wilson, Andrew Gordon, Saatchi, Yunus, Hu, Zhiting, and Xing, Eric P. Learning scalable deep kernels with recurrent structure. arXiv preprint arXiv:1610.08936, 2016. Bengio, Yoshua, Goodfellow, Ian J, and Courville, Aaron. Deep learning. Nature, 521:436–444, 2015. Bojarski, Mariusz, Del Testa, Davide, Dworakowski, Daniel, Firner, Bernhard, Flepp, Beat, Goyal, Prasoon, Jackel, Lawrence D, Monfort, Mathew, Muller, Urs, Zhang, Jiakai, et al. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316, 2016.
1706.04599#45
On Calibration of Modern Neural Networks
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
http://arxiv.org/pdf/1706.04599
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
cs.LG
ICML 2017
null
cs.LG
20170614
20170803
[ { "id": "1610.08936" }, { "id": "1701.06548" }, { "id": "1612.01474" }, { "id": "1607.03594" }, { "id": "1604.07316" }, { "id": "1505.00387" }, { "id": "1703.04977" }, { "id": "1610.05256" } ]
1706.04599
46
Caruana, Rich, Lou, Yin, Gehrke, Johannes, Koch, Paul, Sturm, Marc, and Elhadad, Noemie. Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In KDD, 2015. Collobert, Ronan, Kavukcuoglu, Koray, and Farabet, Clément. Torch7: A Matlab-like environment for machine learning. In BigLearn Workshop, NIPS, 2011. Cosmides, Leda and Tooby, John. Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty. Cognition, 58(1):1–73, 1996. DeGroot, Morris H and Fienberg, Stephen E. The comparison and evaluation of forecasters. The Statistician, pp. 12–22, 1983. Deng, Jia, Dong, Wei, Socher, Richard, Li, Li-Jia, Li, Kai, and Fei-Fei, Li. ImageNet: A large-scale hierarchical image database. In CVPR, pp. 248–255, 2009. Denker, John S and Lecun, Yann. Transforming neural-net output levels to probability distributions. In NIPS, pp. 853–859, 1990.
1706.04599#46
On Calibration of Modern Neural Networks
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
http://arxiv.org/pdf/1706.04599
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
cs.LG
ICML 2017
null
cs.LG
20170614
20170803
[ { "id": "1610.08936" }, { "id": "1701.06548" }, { "id": "1612.01474" }, { "id": "1607.03594" }, { "id": "1604.07316" }, { "id": "1505.00387" }, { "id": "1703.04977" }, { "id": "1610.05256" } ]
1706.04599
47
Denker, John S and Lecun, Yann. Transforming neural-net output levels to probability distributions. In NIPS, pp. 853–859, 1990. Friedman, Jerome, Hastie, Trevor, and Tibshirani, Robert. The Elements of Statistical Learning, volume 1. Springer Series in Statistics. Springer, Berlin, 2001. Gal, Yarin and Ghahramani, Zoubin. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In ICML, 2016. Girshick, Ross. Fast R-CNN. In ICCV, pp. 1440–1448, 2015. Hannun, Awni, Case, Carl, Casper, Jared, Catanzaro, Bryan, Diamos, Greg, Elsen, Erich, Prenger, Ryan, Satheesh, Sanjeev, Sengupta, Shubho, Coates, Adam, et al. Deep Speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567, 2014. He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. In CVPR, pp. 770–778, 2016.
1706.04599#47
On Calibration of Modern Neural Networks
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
http://arxiv.org/pdf/1706.04599
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
cs.LG
ICML 2017
null
cs.LG
20170614
20170803
[ { "id": "1610.08936" }, { "id": "1701.06548" }, { "id": "1612.01474" }, { "id": "1607.03594" }, { "id": "1604.07316" }, { "id": "1505.00387" }, { "id": "1703.04977" }, { "id": "1610.05256" } ]
1706.04599
48
Hendrycks, Dan and Gimpel, Kevin. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In ICLR, 2017. Hinton, Geoffrey, Vinyals, Oriol, and Dean, Jeff. Distilling the knowledge in a neural network. 2015. Huang, Gao, Sun, Yu, Liu, Zhuang, Sedra, Daniel, and Weinberger, Kilian. Deep networks with stochastic depth. In ECCV, 2016. Huang, Gao, Liu, Zhuang, Weinberger, Kilian Q, and van der Maaten, Laurens. Densely connected convolutional networks. In CVPR, 2017. Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. 2015. Iyyer, Mohit, Manjunatha, Varun, Boyd-Graber, Jordan, and Daumé III, Hal. Deep unordered composition rivals syntactic methods for text classification. In ACL, 2015. Jaynes, Edwin T. Information theory and statistical mechanics. Physical Review, 106(4):620, 1957.
1706.04599#48
On Calibration of Modern Neural Networks
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
http://arxiv.org/pdf/1706.04599
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
cs.LG
ICML 2017
null
cs.LG
20170614
20170803
[ { "id": "1610.08936" }, { "id": "1701.06548" }, { "id": "1612.01474" }, { "id": "1607.03594" }, { "id": "1604.07316" }, { "id": "1505.00387" }, { "id": "1703.04977" }, { "id": "1610.05256" } ]
1706.04599
49
Jaynes, Edwin T. Information theory and statistical mechanics. Physical Review, 106(4):620, 1957. Jiang, Xiaoqian, Osl, Melanie, Kim, Jihoon, and Ohno-Machado, Lucila. Calibrating predictive model estimates to support personalized medicine. Journal of the American Medical Informatics Association, 19(2):263–274, 2012. Kendall, Alex and Cipolla, Roberto. Modelling uncertainty in deep learning for camera relocalization. 2016. Kendall, Alex and Gal, Yarin. What uncertainties do we need in Bayesian deep learning for computer vision? arXiv preprint arXiv:1703.04977, 2017. Krause, Jonathan, Stark, Michael, Deng, Jia, and Fei-Fei, Li. 3D object representations for fine-grained categorization. In IEEE Workshop on 3D Representation and Recognition (3dRR), Sydney, Australia, 2013. Krizhevsky, Alex and Hinton, Geoffrey. Learning multiple layers of features from tiny images, 2009. Kuleshov, Volodymyr and Ermon, Stefano. Reliable confidence estimation via online learning. arXiv preprint arXiv:1607.03594, 2016.
1706.04599#49
On Calibration of Modern Neural Networks
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
http://arxiv.org/pdf/1706.04599
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
cs.LG
ICML 2017
null
cs.LG
20170614
20170803
[ { "id": "1610.08936" }, { "id": "1701.06548" }, { "id": "1612.01474" }, { "id": "1607.03594" }, { "id": "1604.07316" }, { "id": "1505.00387" }, { "id": "1703.04977" }, { "id": "1610.05256" } ]
1706.04599
50
Kuleshov, Volodymyr and Liang, Percy. Calibrated structured prediction. In NIPS, pp. 3474–3482, 2015. Srivastava, Rupesh Kumar, Greff, Klaus, and Schmidhuber, Jürgen. Highway networks. arXiv preprint arXiv:1505.00387, 2015. Lakshminarayanan, Balaji, Pritzel, Alexander, and Blundell, Charles. Simple and scalable predictive uncertainty estimation using deep ensembles. arXiv preprint arXiv:1612.01474, 2016. Tai, Kai Sheng, Socher, Richard, and Manning, Christopher D. Improved semantic representations from tree-structured long short-term memory networks. 2015. LeCun, Yann, Bottou, Léon, Bengio, Yoshua, and Haffner, Patrick. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. MacKay, David JC. A practical Bayesian framework for backpropagation networks. Neural Computation, 4(3):448–472, 1992.
1706.04599#50
On Calibration of Modern Neural Networks
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
http://arxiv.org/pdf/1706.04599
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
cs.LG
ICML 2017
null
cs.LG
20170614
20170803
[ { "id": "1610.08936" }, { "id": "1701.06548" }, { "id": "1612.01474" }, { "id": "1607.03594" }, { "id": "1604.07316" }, { "id": "1505.00387" }, { "id": "1703.04977" }, { "id": "1610.05256" } ]
1706.04599
51
MacKay, David JC. A practical Bayesian framework for backpropagation networks. Neural Computation, 4(3):448–472, 1992. Naeini, Mahdi Pakdaman, Cooper, Gregory F, and Hauskrecht, Milos. Obtaining well calibrated probabilities using Bayesian binning. In AAAI, pp. 2901, 2015. Netzer, Yuval, Wang, Tao, Coates, Adam, Bissacco, Alessandro, Wu, Bo, and Ng, Andrew Y. Reading digits in natural images with unsupervised feature learning. In Deep Learning and Unsupervised Feature Learning Workshop, NIPS, 2011. Niculescu-Mizil, Alexandru and Caruana, Rich. Predicting good probabilities with supervised learning. In ICML, pp. 625–632, 2005. Pereyra, Gabriel, Tucker, George, Chorowski, Jan, Kaiser, Łukasz, and Hinton, Geoffrey. Regularizing neural networks by penalizing confident output distributions. arXiv preprint arXiv:1701.06548, 2017. Vapnik, Vladimir N. Statistical Learning Theory. Wiley-Interscience, 1998.
1706.04599#51
On Calibration of Modern Neural Networks
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
http://arxiv.org/pdf/1706.04599
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
cs.LG
ICML 2017
null
cs.LG
20170614
20170803
[ { "id": "1610.08936" }, { "id": "1701.06548" }, { "id": "1612.01474" }, { "id": "1607.03594" }, { "id": "1604.07316" }, { "id": "1505.00387" }, { "id": "1703.04977" }, { "id": "1610.05256" } ]
1706.04599
52
Vapnik, Vladimir N. Statistical Learning Theory. Wiley-Interscience, 1998. Welinder, P., Branson, S., Mita, T., Wah, C., Schroff, F., Belongie, S., and Perona, P. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, California Institute of Technology, 2010. Wilson, Andrew G, Hu, Zhiting, Salakhutdinov, Ruslan R, and Xing, Eric P. Stochastic variational deep kernel learning. In NIPS, pp. 2586–2594, 2016a. Wilson, Andrew Gordon, Hu, Zhiting, Salakhutdinov, Ruslan, and Xing, Eric P. Deep kernel learning. In AISTATS, pp. 370–378, 2016b. Xiong, Wayne, Droppo, Jasha, Huang, Xuedong, Seide, Frank, Seltzer, Mike, Stolcke, Andreas, Yu, Dong, and Zweig, Geoffrey. Achieving human parity in conversational speech recognition. arXiv preprint arXiv:1610.05256, 2016.
1706.04599#52
On Calibration of Modern Neural Networks
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
http://arxiv.org/pdf/1706.04599
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
cs.LG
ICML 2017
null
cs.LG
20170614
20170803
[ { "id": "1610.08936" }, { "id": "1701.06548" }, { "id": "1612.01474" }, { "id": "1607.03594" }, { "id": "1604.07316" }, { "id": "1505.00387" }, { "id": "1703.04977" }, { "id": "1610.05256" } ]
1706.04599
53
Zadrozny, Bianca and Elkan, Charles. Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers. In ICML, pp. 609–616, 2001. Zadrozny, Bianca and Elkan, Charles. Transforming classifier scores into accurate multiclass probability estimates. In KDD, pp. 694–699, 2002. Platt, John et al. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in Large Margin Classifiers, 10(3):61–74, 1999. Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015. Zagoruyko, Sergey and Komodakis, Nikos. Wide residual networks. In BMVC, 2016. Zhang, Chiyuan, Bengio, Samy, Hardt, Moritz, Recht, Benjamin, and Vinyals, Oriol. Understanding deep learning requires rethinking generalization. In ICLR, 2017.
1706.04599#53
On Calibration of Modern Neural Networks
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
http://arxiv.org/pdf/1706.04599
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
cs.LG
ICML 2017
null
cs.LG
20170614
20170803
[ { "id": "1610.08936" }, { "id": "1701.06548" }, { "id": "1612.01474" }, { "id": "1607.03594" }, { "id": "1604.07316" }, { "id": "1505.00387" }, { "id": "1703.04977" }, { "id": "1610.05256" } ]
1706.04599
54
Socher, Richard, Perelygin, Alex, Wu, Jean, Chuang, Jason, Manning, Christopher D., Ng, Andrew, and Potts, Christopher. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, pp. 1631–1642, 2013. Srivastava, Nitish, Hinton, Geoffrey, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958, 2014. # Supplementary Materials for: On Calibration of Modern Neural Networks # S1. Further Information on Calibration Metrics We can connect the ECE metric with our exact miscalibration definition, which is restated here: $\mathbb{E}_{\hat{P}}\bigl[\,\bigl|\,\mathbb{P}(\hat{Y}=Y \mid \hat{P}=p) - p\,\bigr|\,\bigr]$. Let $F_{\hat{P}}(p)$ be the cumulative distribution function of $\hat{P}$ so that $F_{\hat{P}}(b) - F_{\hat{P}}(a) = \mathbb{P}(\hat{P} \in [a,b])$. Using the Riemann-Stieltjes integral we have
1706.04599#54
On Calibration of Modern Neural Networks
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
http://arxiv.org/pdf/1706.04599
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
cs.LG
ICML 2017
null
cs.LG
20170614
20170803
[ { "id": "1610.08936" }, { "id": "1701.06548" }, { "id": "1612.01474" }, { "id": "1607.03594" }, { "id": "1604.07316" }, { "id": "1505.00387" }, { "id": "1703.04977" }, { "id": "1610.05256" } ]
1706.04599
55
$\mathbb{E}_{\hat{P}}\bigl[\,\bigl|\,\mathbb{P}(\hat{Y}=Y \mid \hat{P}=p) - p\,\bigr|\,\bigr] = \int_0^1 \bigl|\,\mathbb{P}(\hat{Y}=Y \mid \hat{P}=p) - p\,\bigr|\, dF_{\hat{P}}(p) \approx \sum_{m=1}^{M} \bigl|\,\mathbb{P}(\hat{Y}=Y \mid \hat{P}=p_m) - p_m\,\bigr|\, \mathbb{P}(\hat{P} \in I_m)$. The first two constraints ensure that $q$ is a probability distribution, while the last constraint limits the scope of distributions. Intuitively, the constraint specifies that the average true class logit is equal to the average weighted logit. Proof. We solve this constrained optimization problem using the Lagrangian. We first ignore the constraint $q(z_i)^{(k)} \ge 0$ and later show that the solution satisfies this condition. Let $\lambda, \beta_1, \ldots, \beta_n \in \mathbb{R}$ be the Lagrangian multipliers and define $\mathcal{L} = \sum_{i=1}^{n}\sum_{k=1}^{K}\bigl(-nK + \lambda z_i^{(k)}\bigr)\, q(z_i)^{(k)} - q(z_i)^{(k)}\log q(z_i)^{(k)} + \sum_{i=1}^{n}\beta_i\Bigl(\sum_{k=1}^{K} q(z_i)^{(k)} - 1\Bigr)$.
1706.04599#55
On Calibration of Modern Neural Networks
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
http://arxiv.org/pdf/1706.04599
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
cs.LG
ICML 2017
null
cs.LG
20170614
20170803
[ { "id": "1610.08936" }, { "id": "1701.06548" }, { "id": "1612.01474" }, { "id": "1607.03594" }, { "id": "1604.07316" }, { "id": "1505.00387" }, { "id": "1703.04977" }, { "id": "1610.05256" } ]
1706.04599
56
where $I_m$ represents the interval of bin $B_m$. $\bigl|\,\mathbb{P}(\hat{Y}=Y \mid \hat{P}=p_m) - p_m\,\bigr|$ is closely approximated by $|\mathrm{acc}(B_m) - \mathrm{conf}(B_m)|$ for $n$ large. Hence ECE using $M$ bins converges to the $M$-term Riemann-Stieltjes sum of $\mathbb{E}_{\hat{P}}\bigl[\,\bigl|\,\mathbb{P}(\hat{Y}=Y \mid \hat{P}=p) - p\,\bigr|\,\bigr]$. Taking the derivative with respect to $q(z_i)^{(k)}$ gives $\frac{\partial}{\partial q(z_i)^{(k)}} \mathcal{L} = -nK - \log q(z_i)^{(k)} + \lambda z_i^{(k)} + \beta_i$. Setting the gradient of the Lagrangian $\mathcal{L}$ to 0 and rearranging gives $q(z_i)^{(k)} = e^{\lambda z_i^{(k)} + \beta_i - nK}$. Since $\sum_{k=1}^{K} q(z_i)^{(k)} = 1$ for all $i$, we must have $q(z_i)^{(k)} = e^{\lambda z_i^{(k)}} \big/ \sum_{j=1}^{K} e^{\lambda z_i^{(j)}}$. # S2. Further Information on Temperature Scaling Here we derive the temperature scaling model using the entropy maximization principle with an appropriate balanced equation. Claim 1. Given $n$ samples' logit vectors $z_1, \ldots, z_n$ and class labels $y_1, \ldots, y_n$, temperature scaling is the unique solution $q$ to the following entropy maximization problem:
1706.04599#56
On Calibration of Modern Neural Networks
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
http://arxiv.org/pdf/1706.04599
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
cs.LG
ICML 2017
null
cs.LG
20170614
20170803
[ { "id": "1610.08936" }, { "id": "1701.06548" }, { "id": "1612.01474" }, { "id": "1607.03594" }, { "id": "1604.07316" }, { "id": "1505.00387" }, { "id": "1703.04977" }, { "id": "1610.05256" } ]
1706.04599
57
$q(z_i)^{(k)} = \frac{e^{\lambda z_i^{(k)}}}{\sum_{j=1}^{K} e^{\lambda z_i^{(j)}}}$, which recovers the temperature scaling model by setting $T = \frac{1}{\lambda}$. $\max_q \; -\sum_{i=1}^{n}\sum_{k=1}^{K} q(z_i)^{(k)} \log q(z_i)^{(k)}$ subject to $q(z_i)^{(k)} \ge 0 \;\; \forall i, k$; $\sum_{k=1}^{K} q(z_i)^{(k)} = 1 \;\; \forall i$; $\sum_{i=1}^{n} z_i^{(y_i)} = \sum_{i=1}^{n}\sum_{k=1}^{K} z_i^{(k)} q(z_i)^{(k)}$. Figure S1 visualizes Claim 1. We see that, as training continues, the model begins to overfit with respect to NLL (red line). This results in a low-entropy softmax distribution over classes (blue line), which explains the model's overconfidence. Temperature scaling not only lowers the NLL but also raises the entropy of the distribution (green line). # S3. Additional Tables Tables S1, S2, and S3 display the MCE, test error, and NLL for all the experimental settings outlined in Section 5. [Figure S1: Entropy vs. NLL on CIFAR-100 over training epochs; curves: entropy and NLL before calibration, entropy and NLL after calibration, and the selected optimal T.]
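A small numerical illustration of the entropy behaviour described for Figure S1, on synthetic logits (everything below is my own assumption, not the paper's experiment): dividing overconfident logits by a temperature T > 1 raises the entropy of the resulting softmax distribution.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mean_entropy(p):
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

rng = np.random.default_rng(0)
logits = 5.0 * rng.normal(size=(1000, 10))   # peaked, overconfident-looking logits
for T in (1.0, 2.0, 4.0):
    p = softmax(logits / T)
    print(f"T={T}: mean entropy = {mean_entropy(p):.3f}")  # grows with T
```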
1706.04599#57
On Calibration of Modern Neural Networks
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
http://arxiv.org/pdf/1706.04599
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
cs.LG
ICML 2017
null
cs.LG
20170614
20170803
[ { "id": "1610.08936" }, { "id": "1701.06548" }, { "id": "1612.01474" }, { "id": "1607.03594" }, { "id": "1604.07316" }, { "id": "1505.00387" }, { "id": "1703.04977" }, { "id": "1610.05256" } ]
1706.04599
59
Dataset Model Uncalibrated Hist. Binning Isotonic BBQ Temp. Scaling Vector Scaling Matrix Scaling Birds Cars CIFAR-10 CIFAR-10 CIFAR-10 CIFAR-10 CIFAR-10 CIFAR-100 CIFAR-100 CIFAR-100 CIFAR-100 CIFAR-100 ImageNet ImageNet SVHN ResNet 50 ResNet 50 ResNet 110 ResNet 110 (SD) Wide ResNet 32 DenseNet 40 LeNet 5 ResNet 110 ResNet 110 (SD) Wide ResNet 32 DenseNet 40 LeNet 5 DenseNet 161 ResNet 152 ResNet 152 (SD) 30.06% 41.55% 33.78% 34.52% 27.97% 22.44% 8.02% 35.5% 26.42% 33.11% 21.52% 10.25% 14.07% 12.2% 19.36% 25.35% 5.16% 26.87% 17.0% 12.19% 7.77% 16.49% 7.03% 9.12% 6.22% 9.36% 18.61% 13.14% 14.57% 11.16% 16.59% 11.72% 15.23% 9.31% 7.8% 72.64% 16.45%
1706.04599#59
On Calibration of Modern Neural Networks
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
http://arxiv.org/pdf/1706.04599
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
cs.LG
ICML 2017
null
cs.LG
20170614
20170803
[ { "id": "1610.08936" }, { "id": "1701.06548" }, { "id": "1612.01474" }, { "id": "1607.03594" }, { "id": "1604.07316" }, { "id": "1505.00387" }, { "id": "1703.04977" }, { "id": "1610.05256" } ]
1706.04599
60
13.14% 14.57% 11.16% 16.59% 11.72% 15.23% 9.31% 7.8% 72.64% 16.45% 19.26% 6.19% 9.22% 19.54% 14.57% 18.34% 82.35% 10.36% 10.9% 10.95% 9.12% 14.87% 11.88% 10.59% 8.67% 3.64% 9.96% 11.57% 10.96% 8.74% 8.85% 18.67% 9.09% 9.08% 20.23% 8.56% 15.45% 9.11% 4.58% 5.14% 4.74% 8.85% 5.33% 19.4% 5.22% 12.29% 12.29% 18.05% 9.81% 8.59% 27.39% 15.55% 4.43% 3.17% 19.39% 2.5% 8.85% 6.31% 8.82% 8.65% 9.61% 9.61% 30.78% 38.67% 29.65% 22.89% 10.74% 9.65% 4.36% 16.89% 45.62% 35.6% 44.73% 38.64%
1706.04599#60
On Calibration of Modern Neural Networks
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
http://arxiv.org/pdf/1706.04599
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
cs.LG
ICML 2017
null
cs.LG
20170614
20170803
[ { "id": "1610.08936" }, { "id": "1701.06548" }, { "id": "1612.01474" }, { "id": "1607.03594" }, { "id": "1604.07316" }, { "id": "1505.00387" }, { "id": "1703.04977" }, { "id": "1610.05256" } ]
1706.04599
62
Table S1. MCE (%) (with M = 15 bins) on standard vision and NLP datasets before calibration and with various calibration methods. The number following a model's name denotes the network depth. MCE seems very sensitive to the binning scheme and is less suited for small test sets. # S4. Additional Reliability Diagrams We include reliability diagrams for additional datasets: CIFAR-10 (Figure S2) and SST (Figure S3 and Figure S4). Note that, as mentioned in Section 2, the reliability diagrams do not represent the proportion of predictions that belong to a given bin.
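For reference, MCE with M = 15 equal-width bins can be computed as the worst per-bin gap between accuracy and confidence; the sketch below uses my own helper name and bin convention (not the paper's code), and the max over sparsely populated bins is one reason the metric is so sensitive on small test sets.

```python
import numpy as np

def max_calibration_error(confidences, correct, n_bins=15):
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    worst = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():  # a bin with only a handful of samples can dominate the max
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            worst = max(worst, gap)
    return worst
```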
1706.04599#62
On Calibration of Modern Neural Networks
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
http://arxiv.org/pdf/1706.04599
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
cs.LG
ICML 2017
null
cs.LG
20170614
20170803
[ { "id": "1610.08936" }, { "id": "1701.06548" }, { "id": "1612.01474" }, { "id": "1607.03594" }, { "id": "1604.07316" }, { "id": "1505.00387" }, { "id": "1703.04977" }, { "id": "1610.05256" } ]
1706.04599
63
Dataset Model Uncalibrated Hist. Binning Isotonic BBQ Temp. Scaling Vector Scaling Matrix Scaling Birds Cars CIFAR-10 CIFAR-10 CIFAR-10 CIFAR-10 CIFAR-10 CIFAR-100 CIFAR-100 CIFAR-100 CIFAR-100 CIFAR-100 ImageNet ImageNet SVHN ResNet 50 ResNet 50 ResNet 110 ResNet 110 (SD) Wide ResNet 32 DenseNet 40 LeNet 5 ResNet 110 ResNet 110 (SD) Wide ResNet 32 DenseNet 40 LeNet 5 DenseNet 161 ResNet 152 ResNet 152 (SD) 22.54% 14.28% 6.21% 5.64% 6.96% 5.91% 15.57% 27.83% 24.91% 28.0% 26.45% 44.92% 22.57% 22.31% 1.98% 55.02% 16.24% 6.45% 5.59% 7.3% 6.12% 15.63% 34.78% 33.78% 34.29% 34.78% 54.06% 48.32% 48.1% 2.06% 23.37% 37.76% 14.9% 19.25% 6.25% 6.36% 5.55%
1706.04599#63
On Calibration of Modern Neural Networks
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
http://arxiv.org/pdf/1706.04599
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
cs.LG
ICML 2017
null
cs.LG
20170614
20170803
[ { "id": "1610.08936" }, { "id": "1701.06548" }, { "id": "1612.01474" }, { "id": "1607.03594" }, { "id": "1604.07316" }, { "id": "1505.00387" }, { "id": "1703.04977" }, { "id": "1610.05256" } ]
1706.04599
64
48.32% 48.1% 2.06% 23.37% 37.76% 14.9% 19.25% 6.25% 6.36% 5.55% 5.62% 7.35% 7.01% 5.96% 6.0% 15.69% 15.64% 28.41% 28.56% 25.42% 25.17% 28.61% 29.08% 26.73% 26.4% 45.77% 46.82% 23.2% 47.58% 22.94% 47.6% 2.04% 2.04% 22.54% 14.28% 6.21% 5.64% 6.96% 5.91% 15.57% 27.83% 24.91% 28.0% 26.45% 44.92% 22.57% 22.31% 1.98% 22.99% 14.15% 6.37% 5.62% 7.1% 5.96% 15.53% 27.82% 24.99% 28.45% 26.25% 45.53% 22.54% 22.56% 2.0% 29.51% 17.98% 6.42% 5.69% 7.27% 6.0% 15.81% 38.77% 35.09% 37.4% 36.14%
1706.04599#64
On Calibration of Modern Neural Networks
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
http://arxiv.org/pdf/1706.04599
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
cs.LG
ICML 2017
null
cs.LG
20170614
20170803
[ { "id": "1610.08936" }, { "id": "1701.06548" }, { "id": "1612.01474" }, { "id": "1607.03594" }, { "id": "1604.07316" }, { "id": "1505.00387" }, { "id": "1703.04977" }, { "id": "1610.05256" } ]
1706.04599
67
Dataset Model Uncalibrated Hist. Binning Isotonic BBQ Temp. Scaling Vector Scaling Matrix Scaling Birds Cars CIFAR-10 CIFAR-10 CIFAR-10 CIFAR-10 CIFAR-10 CIFAR-100 CIFAR-100 CIFAR-100 CIFAR-100 CIFAR-100 ImageNet ImageNet SVHN ResNet 50 ResNet 50 ResNet 110 ResNet 110 (SD) Wide ResNet 32 DenseNet 40 LeNet 5 ResNet 110 ResNet 110 (SD) Wide ResNet 32 DenseNet 40 LeNet 5 DenseNet 161 ResNet 152 ResNet 152 (SD) 0.9786 0.5488 0.3285 0.2959 0.3293 0.2228 0.4688 1.4978 1.1157 1.3434 1.0134 1.6639 0.9338 0.8961 0.0842 1.6226 0.7977 0.2532 0.2027 0.2778 0.212 0.529 1.4379 1.1985 1.4499 1.2156 2.2574 1.4716 1.4507 0.1137 1.4128 0.8793 0.2237 0.1867 0.2428 0.1969 0.4757 1.207 1.0317
1706.04599#67
On Calibration of Modern Neural Networks
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
http://arxiv.org/pdf/1706.04599
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
cs.LG
ICML 2017
null
cs.LG
20170614
20170803
[ { "id": "1610.08936" }, { "id": "1701.06548" }, { "id": "1612.01474" }, { "id": "1607.03594" }, { "id": "1604.07316" }, { "id": "1505.00387" }, { "id": "1703.04977" }, { "id": "1610.05256" } ]
1706.04599
68
0.1137 1.4128 0.8793 0.2237 0.1867 0.2428 0.1969 0.4757 1.207 1.0317 1.2086 1.0615 1.8173 1.1912 1.1859 0.095 1.2539 0.6986 0.263 0.2159 0.2774 0.2087 0.4984 1.5466 1.1982 1.459 1.1572 1.9893 1.4272 1.3987 0.1062 0.8792 0.5311 0.2102 0.1718 0.2283 0.1750 0.459 1.0442 0.8613 1.0565 0.9026 1.6560 0.8885 0.8657 0.0821 0.9021 0.5299 0.2088 0.1709 0.2275 0.1757 0.4568 1.0485 0.8655 1.0648 0.9011 1.6648 0.8879 0.8742 0.0844 2.334 1.0206 0.2048 0.1766 0.2229 0.176 0.4607 2.5637 1.8182 2.5507 1.9639 2.1405 - - 0.0924 20 News Reuters SST Binary SST
1706.04599#68
On Calibration of Modern Neural Networks
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
http://arxiv.org/pdf/1706.04599
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
cs.LG
ICML 2017
null
cs.LG
20170614
20170803
[ { "id": "1610.08936" }, { "id": "1701.06548" }, { "id": "1612.01474" }, { "id": "1607.03594" }, { "id": "1604.07316" }, { "id": "1505.00387" }, { "id": "1703.04977" }, { "id": "1610.05256" } ]
1706.04599
70
Table S3. NLL (%) on standard vision and NLP datasets before calibration and with various calibration methods. The number following a model's name denotes the network depth. To summarize, NLL roughly follows the trends of ECE. [Figure S2 panels: Uncal., Temp. Scale, Hist. Bin., and Iso. Reg. reliability diagrams for a CIFAR-10 ResNet-110 (SD); x-axis: Confidence.] Figure S2. Reliability diagrams for CIFAR-10 before (far left) and after calibration (middle left, middle right, far right).
1706.04599#70
On Calibration of Modern Neural Networks
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
http://arxiv.org/pdf/1706.04599
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
cs.LG
ICML 2017
null
cs.LG
20170614
20170803
[ { "id": "1610.08936" }, { "id": "1701.06548" }, { "id": "1612.01474" }, { "id": "1607.03594" }, { "id": "1604.07316" }, { "id": "1505.00387" }, { "id": "1703.04977" }, { "id": "1610.05256" } ]
1706.04599
71
Figure S2. Reliability diagrams for CIFAR-10 before (far left) and after calibration (middle left, middle right, far right). [Figure S3 panels: Uncal., Temp. Scale, Hist. Bin., and Iso. Reg. reliability diagrams for a Tree LSTM on SST-FG; axes: Accuracy vs. Confidence.] Figure S3. Reliability diagrams for SST Binary and SST Fine Grained before (far left) and after calibration (middle left, middle right, far right). [Figure S4 panels: Uncal., Temp. Scale, Hist. Bin., and Iso. Reg. reliability diagrams for a Tree LSTM on SST-BIN.]
1706.04599#71
On Calibration of Modern Neural Networks
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
http://arxiv.org/pdf/1706.04599
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
cs.LG
ICML 2017
null
cs.LG
20170614
20170803
[ { "id": "1610.08936" }, { "id": "1701.06548" }, { "id": "1612.01474" }, { "id": "1607.03594" }, { "id": "1604.07316" }, { "id": "1505.00387" }, { "id": "1703.04977" }, { "id": "1610.05256" } ]
1706.03762
0
arXiv:1706.03762v7 [cs.CL] 2 Aug 2023 Provided proper attribution is provided, Google hereby grants permission to reproduce the tables and figures in this paper solely for use in journalistic or scholarly works. # Attention Is All You Need # Ashish Vaswani∗ Google Brain [email protected] Noam Shazeer∗ Google Brain [email protected] Niki Parmar∗ Google Research [email protected] Jakob Uszkoreit∗ Google Research [email protected] # Llion Jones∗ Google Research [email protected] Aidan N. Gomez∗ † University of Toronto [email protected] Łukasz Kaiser∗ Google Brain [email protected] # Illia Polosukhin∗ ‡ [email protected] # Abstract
1706.03762#0
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
0
arXiv:1706.03872v1 [cs.CL] 12 Jun 2017 # Six Challenges for Neural Machine Translation # Philipp Koehn Computer Science Department Johns Hopkins University [email protected] Rebecca Knowles Computer Science Department Johns Hopkins University [email protected] # Abstract We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation. # 1 Introduction 3. NMT systems that operate at the sub-word level (e.g. with byte-pair encoding) perform better than SMT systems on extremely low-frequency words, but still show weakness in translating low-frequency words belonging to highly-inflected categories (e.g. verbs). 4. NMT systems have lower translation quality on very long sentences, but do comparably better up to a sentence length of about 60 words.
1706.03872#0
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
1
# Illia Polosukhin∗ ‡ [email protected] # Abstract The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English- to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
1706.03762#1
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
1
4. NMT systems have lower translation quality on very long sentences, but do comparably better up to a sentence length of about 60 words. Neural machine translation has emerged as the most promising machine translation approach in recent years, showing superior performance on public benchmarks (Bojar et al., 2016) and rapid adoption in deployments by, e.g., Google (Wu et al., 2016), Systran (Crego et al., 2016), and WIPO (Junczys-Dowmunt et al., 2016). But there have also been reports of poor performance, such as the systems built under low-resource conditions in the DARPA LORELEI program.1 In this paper, we examine a number of challenges to neural machine translation (NMT) and give empirical results on how well the technology currently holds up, compared to traditional statistical machine translation (SMT). 5. The attention model for NMT does not always fulfill the role of a word alignment model, but may in fact dramatically diverge. 6. Beam search decoding only improves translation quality for narrow beams and deteriorates when exposed to a larger search space.
1706.03872#1
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
2
∗Equal contribution. Listing order is random. Jakob proposed replacing RNNs with self-attention and started the effort to evaluate this idea. Ashish, with Illia, designed and implemented the first Transformer models and has been crucially involved in every aspect of this work. Noam proposed scaled dot-product attention, multi-head attention and the parameter-free position representation and became the other person involved in nearly every detail. Niki designed, implemented, tuned and evaluated countless model variants in our original codebase and tensor2tensor. Llion also experimented with novel model variants, was responsible for our initial codebase, and efficient inference and visualizations. Lukasz and Aidan spent countless long days designing various parts of and implementing tensor2tensor, replacing our earlier codebase, greatly improving results and massively accelerating our research. †Work performed while at Google Brain. ‡Work performed while at Google Research. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. # 1 Introduction
1706.03762#2
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
2
6. Beam search decoding only improves translation quality for narrow beams and deteriorates when exposed to a larger search space. We note a 7th challenge that we do not examine empirically: NMT systems are much less interpretable. The answer to the question of why the training data leads these systems to decide on specific word choices during decoding is buried in large matrices of real-numbered values. There is a clear need to develop better analytics for NMT. We find that: 1. NMT systems have lower quality out of domain, to the point that they completely sacrifice adequacy for the sake of fluency. the comparable performance of NMT and SMT systems. Bentivogli et al. (2016) considered different linguistic categories for English–German and Toral and Sánchez-Cartagena (2017) compared different broad aspects such as fluency and reordering for nine language directions. 2. NMT systems have a steeper learning curve with respect to the amount of training data, resulting in worse quality in low-resource settings, but better performance in high-resource settings. 1https://www.nist.gov/itl/iad/mig/lorehlt16-evaluations # 2 Experimental Setup
1706.03872#2
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
3
†Work performed while at Google Brain. ‡Work performed while at Google Research. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. # 1 Introduction Recurrent neural networks, long short-term memory [13] and gated recurrent [7] neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation [35, 2, 5]. Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures [38, 24, 15]. Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states ht, as a function of the previous hidden state ht−1 and the input for position t. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples. Recent work has achieved significant improvements in computational efficiency through factorization tricks [21] and conditional computation [32], while also improving model performance in case of the latter. The fundamental constraint of sequential computation, however, remains.
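As a minimal sketch of the sequential dependence described above (each hidden state h_t is computed from h_{t-1} and the input at position t), the toy recurrence below cannot be parallelized across time steps; all names and dimensions are illustrative, not from the paper.

```python
import numpy as np

def rnn_forward(inputs, W_x, W_h, b):
    """Simple tanh RNN: h_t = tanh(W_x x_t + W_h h_{t-1} + b), computed strictly in order."""
    hidden = np.zeros(W_h.shape[0])
    states = []
    for x_t in inputs:                      # inherently sequential: step t needs step t-1
        hidden = np.tanh(W_x @ x_t + W_h @ hidden + b)
        states.append(hidden)
    return np.stack(states)

rng = np.random.default_rng(0)
d_in, d_hid, seq_len = 8, 16, 5
states = rnn_forward(rng.normal(size=(seq_len, d_in)),
                     rng.normal(size=(d_hid, d_in)) * 0.1,
                     rng.normal(size=(d_hid, d_hid)) * 0.1,
                     np.zeros(d_hid))
print(states.shape)  # (5, 16)
```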
1706.03762#3
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
3
1https://www.nist.gov/itl/iad/mig/lorehlt16- evaluations # 2 Experimental Setup We use common toolkits for neural machine trans- lation (Nematus) and traditional phrase-based sta- tistical machine translation (Moses) with common data sets, drawn from WMT and OPUS. # 2.1 Neural Machine Translation While a variety of neural machine transla- tion approaches were initially proposed — such as the use of convolutional neural networks (Kalchbrenner and Blunsom, 2013) — practically all recent work has been focused on the attention- based encoder-decoder model (Bahdanau et al., 2015). We use the toolkit Nematus2 (Sennrich et al., 2017) which has been shown to give state-of-the- art results (Sennrich et al., 2016a) at the WMT 2016 evaluation campaign (Bojar et al., 2016). Unless noted otherwise, we use default settings, such as beam search and single model decoding. The training data is processed with byte-pair en- coding (Sennrich et al., 2016b) into subwords to fit a 50,000 word vocabulary limit. # 2.2 Statistical Machine Translation
1706.03872#3
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
4
Attention mechanisms have become an integral part of compelling sequence modeling and transduc- tion models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences [2, 19]. In all but a few cases [27], however, such attention mechanisms are used in conjunction with a recurrent network. In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs. # 2 Background The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU [16], ByteNet [18] and ConvS2S [9], all of which use convolutional neural networks as basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows in the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions [12]. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in section 3.2.
1706.03762#4
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
4
# 2.2 Statistical Machine Translation Our machine translation systems are trained using Moses3 (Koehn et al., 2007). We build phrase-based systems using standard features that are commonly used in recent system submissions to WMT (Williams et al., 2016; Ding et al., 2016a). While we use the shorthand SMT for these phrase-based systems, we note that there are other statistical machine translation approaches such as hierarchical phrase-based models (Chiang, 2007) and syntax-based models (Galley et al., 2004, 2006) that have been shown to give superior performance for language pairs such as Chinese–English and German–English. # 2.3 Data Conditions We carry out our experiments on English–Spanish and German–English. For these language pairs, large training data sets are available. We use datasets from the shared translation task organized alongside the Conference on Machine Translation (WMT)4. For the domain experiments, we use the OPUS corpus5 (Tiedemann, 2012). Except for the domain experiments, we use the WMT test sets composed of news stories, which are characterized by a broad range of topic, formal language, relatively long sentences (about 30 words on average), and high standards for grammar, orthography, and style.
1706.03872#4
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
5
Self-attention, sometimes called intra-attention is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations [4, 27, 28, 22]. End-to-end memory networks are based on a recurrent attention mechanism instead of sequence- aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks [34]. To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence- aligned RNNs or convolution. In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as [17, 18] and [9]. # 3 Model Architecture
1706.03762#5
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
5
2https://github.com/rsennrich/nematus/ 3http://www.stat.org/moses/ 4http://www.statmt.org/wmt17/ 5http://opus.lingfil.uu.se/

Corpus | Words | Sentences | W/S
Law (Acquis) | 18,128,173 | 715,372 | 25.3
Medical (EMEA) | 14,301,472 | 1,104,752 | 12.9
IT | 3,041,677 | 337,817 | 9.0
Koran (Tanzil) | 9,848,539 | 480,421 | 20.5
Subtitles | 114,371,754 | 13,873,398 | 8.2

Table 1: Corpora used to train domain-specific systems, IT corpora are GNOME, KDE, PHP, Ubuntu, and OpenOffice. # 3 Challenges # 3.1 Domain Mismatch
1706.03872#5
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
6
# 3 Model Architecture Most competitive neural sequence transduction models have an encoder-decoder structure [5, 2, 35]. Here, the encoder maps an input sequence of symbol representations (x1, ..., xn) to a sequence of continuous representations z = (z1, ..., zn). Given z, the decoder then generates an output sequence (y1, ..., ym) of symbols one element at a time. At each step the model is auto-regressive [10], consuming the previously generated symbols as additional input when generating the next. [Figure: encoder and decoder stacks built from input/output embeddings, positional encodings, multi-head attention, masked multi-head attention, add & norm, and feed-forward sublayers, ending in a softmax over output probabilities.] Figure 1: The Transformer - model architecture. The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure 1, respectively. # 3.1 Encoder and Decoder Stacks
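The auto-regressive behaviour described above (each output symbol is generated conditioned on the previously generated ones) can be sketched as a greedy decoding loop. Everything below is a placeholder: `dummy_decoder_step`, the special tokens, and the vocabulary size are assumptions standing in for a real encoder-decoder model.

```python
import numpy as np

BOS, EOS, VOCAB = 1, 2, 20  # hypothetical special tokens and vocabulary size

def dummy_decoder_step(src_tokens, generated):
    """Stand-in for decode(encode(src), generated): returns scores over the next token."""
    rng = np.random.default_rng(len(src_tokens) + len(generated))  # deterministic toy scores
    return rng.normal(size=VOCAB)

def greedy_decode(src_tokens, max_len=10):
    generated = [BOS]
    for _ in range(max_len):
        scores = dummy_decoder_step(src_tokens, generated)
        next_token = int(scores.argmax())     # y_t depends only on y_<t and the source
        generated.append(next_token)
        if next_token == EOS:
            break
    return generated

print(greedy_decode([5, 6, 7]))
```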
1706.03762#6
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
6
# 3 Challenges # 3.1 Domain Mismatch A known challenge in translation is that in different domains,6 words have different translations and meaning is expressed in different styles. Hence, a crucial step in developing machine translation systems targeted at a specific use case is domain adaptation. We expect that methods for domain adaptation will be developed for NMT. A currently popular approach is to train a general domain system, followed by training on in-domain data for a few epochs (Luong and Manning, 2015; Freitag and Al-Onaizan, 2016). Often, large amounts of training data are only available out of domain, but we still seek to have robust performance. To test how well NMT and SMT hold up, we trained five different systems using different corpora obtained from OPUS (Tiedemann, 2012). An additional system was trained on all the training data. Statistics about corpus sizes are shown in Table 1. Note that these domains are quite distant from each other, much more so than, say, Europarl, TED Talks, News Commentary, and Global Voices.
1706.03872#6
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
7
# 3.1 Encoder and Decoder Stacks Encoder: The encoder is composed of a stack of N = 6 identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position- wise fully connected feed-forward network. We employ a residual connection [11] around each of the two sub-layers, followed by layer normalization [1]. That is, the output of each sub-layer is LayerNorm(x + Sublayer(x)), where Sublayer(x) is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension dmodel = 512.
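A minimal NumPy sketch of the LayerNorm(x + Sublayer(x)) pattern described above, with the layer normalization simplified (no learned gain or bias) and a placeholder linear map standing in for the attention or feed-forward sublayer; shapes are illustrative only.

```python
import numpy as np

D_MODEL = 512

def layer_norm(x, eps=1e-6):
    """Normalize each position over the feature dimension (simplified: no learned scale/shift)."""
    mean = x.mean(axis=-1, keepdims=True)
    std = x.std(axis=-1, keepdims=True)
    return (x - mean) / (std + eps)

def residual_block(x, sublayer):
    """Post-norm residual connection: LayerNorm(x + Sublayer(x))."""
    return layer_norm(x + sublayer(x))

rng = np.random.default_rng(0)
W = rng.normal(size=(D_MODEL, D_MODEL)) / np.sqrt(D_MODEL)
x = rng.normal(size=(10, D_MODEL))           # 10 positions, d_model = 512
out = residual_block(x, lambda h: h @ W)     # placeholder sublayer keeps the d_model width
print(out.shape)                             # (10, 512)
```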
1706.03762#7
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
7
We trained both SMT and NMT systems for all domains. All systems were trained for German-English, with tuning and test sets sub-sampled from the data (these were not used in training). A common byte-pair encoding is used for all training runs. See Figure 1 for results. While the in-domain NMT and SMT systems are similar (NMT is better for IT and Subtitles, SMT is better for Law, Medical, and Koran), the out-of-domain performance for the NMT systems is worse in almost all cases, sometimes dramatically so. For instance the Med-
6We use the customary definition of domain in machine translation: a domain is defined by a corpus from a specific source, and may differ from other domains in topic, genre, style, level of formality, etc.
1706.03872#7
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
8
Decoder: The decoder is also composed of a stack of N = 6 identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position i can depend only on the known outputs at positions less than i. # 3.2 Attention An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum
1706.03762#8
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
8
[Figure: grouped bar chart of BLEU scores for systems trained on one domain (All Data, Law, Medical, IT, Koran, Subtitles) and tested on each of the five domains, with NMT and SMT bars side by side.] Figure 1: Quality of systems (BLEU), when trained on one domain (rows) and tested on another domain (columns). Comparably, NMT systems (left bars) show more degraded performance out of domain. ical system leads to a BLEU score of 3.9 (NMT) vs. 10.2 (SMT) on the Law test set.
1706.03872#8
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
9
Figure 2: (left) Scaled Dot-Product Attention. (right) Multi-Head Attention consists of several attention layers running in parallel. of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key. # 3.2.1 Scaled Dot-Product Attention We call our particular attention "Scaled Dot-Product Attention" (Figure 2). The input consists of queries and keys of dimension dk, and values of dimension dv. We compute the dot products of the query with all keys, divide each by √dk, and apply a softmax function to obtain the weights on the values. In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix Q. The keys and values are also packed together into matrices K and V . We compute the matrix of outputs as: Attention(Q, K, V) = softmax(QK^T / √dk) V    (1)
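Equation (1) maps directly onto a few lines of matrix code. The sketch below is a plain NumPy rendering under assumed shapes (Q of size (n, dk), K of size (m, dk), V of size (m, dv)), not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # (n, m) compatibility of each query with each key
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V                        # weighted sum of the values

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 64)), rng.normal(size=(6, 64)), rng.normal(size=(6, 64))
print(scaled_dot_product_attention(Q, K, V).shape)   # (4, 64)
```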
1706.03762#9
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
9
ical system leads to a BLEU score of 3.9 (NMT) vs. 10.2 (SMT) on the Law test set. Figure 2 displays an example. When translating the sentence Schaue um dich herum. (reference: Look around you.) from the Subtitles corpus, we see mostly non-sensical and completely unrelated output from the NMT system. For instance, the translation from the IT system is Switches to paused. Note that the output of the NMT system is often quite fluent (e.g., Take heed of your own souls.) but completely unrelated to the input, while the SMT output betrays its difficulties with coping with the out-of-domain input by leaving some words untranslated (e.g., Schaue by dich around.). This is of particular concern when MT is used for information gisting — the user will be misled by hallucinated content in the NMT output.
1706.03872#9
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
10
Attention(Q, K, V) = softmax(QK^T / √dk) V    (1) The two most commonly used attention functions are additive attention [2], and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of 1/√dk. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code. While for small values of dk the two mechanisms perform similarly, additive attention outperforms dot product attention without scaling for larger values of dk [3]. We suspect that for large values of dk, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients.4 To counteract this effect, we scale the dot products by 1/√dk. # 3.2.2 Multi-Head Attention Instead of performing a single attention function with dmodel-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values h times with different, learned linear projections to dk, dk and dv dimensions, respectively. On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding dv-dimensional
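The effect motivating the 1/√dk factor can be checked numerically: with unit-variance components, the raw dot products have standard deviation of roughly √dk, so for large dk the unscaled softmax saturates to a near one-hot distribution while the scaled one stays spread out. The snippet below is an illustrative check with arbitrary sizes, not an experiment from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d_k, n_keys = 512, 10
q = rng.normal(size=d_k)                       # unit-variance components
K = rng.normal(size=(n_keys, d_k))

scores = K @ q                                 # standard deviation roughly sqrt(d_k) ~ 22.6
for s in (scores, scores / np.sqrt(d_k)):      # unscaled vs. scaled
    w = np.exp(s - s.max())
    w /= w.sum()
    print(round(float(w.max()), 3))            # unscaled: close to 1.0 (saturated); scaled: much flatter
```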
1706.03762#10
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
10
Source: Schaue um dich herum. Ref.: Look around you.
All: NMT: Look around you. | SMT: Look around you.
Law: NMT: Sughum gravecorn. | SMT: In order to implement dich Schaue .
Medical: NMT: EMEA / MB / 049 / 01-EN-Final Work progamme for 2002 | SMT: Schaue by dich around .
IT: NMT: Switches to paused. | SMT: To Schaue by itself .
Koran: NMT: Take heed of your own souls. | SMT: And you see.
Subtitles: NMT: Look around you. | SMT: Look around you .
Figure 2: Examples for the translation of a sentence from the Subtitles corpus, when translated with systems trained on different corpora. Performance out-of-domain is dramatically worse for NMT.
# 3.2 Amount of Training Data A well-known property of statistical systems is that increasing amounts of training data lead to better results. In SMT systems, we have previously observed that doubling the amount of training data gives a fixed increase in BLEU scores. This holds true for both parallel and monolingual data (Turchi et al., 2008; Irvine and Callison-Burch, 2013).
# BLEU Scores with Varying Amounts of Training Data
1706.03872#10
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
11
4 To illustrate why the dot products get large, assume that the components of q and k are independent random variables with mean 0 and variance 1. Then their dot product, q · k = Σ_{i=1}^{dk} q_i k_i, has mean 0 and variance dk. output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure 2. Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this. MultiHead(Q, K, V) = Concat(head_1, ..., head_h) W^O, where head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V), and the projections are parameter matrices W_i^Q ∈ R^{dmodel×dk}, W_i^K ∈ R^{dmodel×dk}, W_i^V ∈ R^{dmodel×dv} and W^O ∈ R^{hdv×dmodel}. In this work we employ h = 8 parallel attention layers, or heads. For each of these we use dk = dv = dmodel/h = 64. Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality. # 3.2.3 Applications of Attention in our Model The Transformer uses multi-head attention in three different ways:
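A compact sketch of the multi-head computation defined above, with h = 8 and dk = dv = dmodel/h = 64. The projection matrices are randomly initialized placeholders and the scaled dot-product routine is restated inline, so this is an illustration of the equations rather than a reference implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V on the projected (per-head) matrices."""
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1]), axis=-1) @ V

def multi_head_attention(Q, K, V, params):
    heads = []
    for W_q, W_k, W_v in params["per_head"]:                 # h projections to d_k / d_v dimensions
        heads.append(attention(Q @ W_q, K @ W_k, V @ W_v))
    return np.concatenate(heads, axis=-1) @ params["W_o"]    # Concat(head_1, ..., head_h) W^O

d_model, h = 512, 8
d_k = d_v = d_model // h                                     # 64
rng = np.random.default_rng(0)
params = {
    "per_head": [tuple(rng.normal(size=(d_model, d_k)) / np.sqrt(d_model) for _ in range(3))
                 for _ in range(h)],
    "W_o": rng.normal(size=(h * d_v, d_model)) / np.sqrt(h * d_v),
}
x = rng.normal(size=(5, d_model))                            # self-attention: Q = K = V = x
print(multi_head_attention(x, x, x, params).shape)           # (5, 512)
```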
1706.03762#11
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
11
[Figure: learning-curve plot titled "BLEU Scores with Varying Amounts of Training Data", comparing three systems (Phrase-Based with Big LM, Phrase-Based, Neural) as corpus size grows from about 10^6 to 10^8 English words.] Figure 3: BLEU scores for English-Spanish systems trained on 0.4 million to 385.7 million words of parallel data. Quality for NMT starts much lower, outperforms SMT at about 15 million words, and even beats a SMT system with a big 2 billion word in-domain language model under high-resource conditions.
1706.03872#11
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
12
# 3.2.3 Applications of Attention in our Model The Transformer uses multi-head attention in three different ways: • In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as [38, 2, 9]. • The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder. • Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to −∞) all values in the input of the softmax which correspond to illegal connections. See Figure 2. # 3.3 Position-wise Feed-Forward Networks
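The leftward masking described in the last bullet above (setting illegal positions to −∞ in the input of the softmax) can be sketched as follows; the uniform scores are purely illustrative.

```python
import numpy as np

def causal_mask(n):
    """Upper-triangular mask: position i may not attend to positions j > i."""
    return np.triu(np.full((n, n), -np.inf), k=1)

def masked_attention_weights(scores):
    masked = scores + causal_mask(scores.shape[0])       # -inf where attention is illegal
    masked -= masked.max(axis=-1, keepdims=True)         # numerical stability
    w = np.exp(masked)                                   # exp(-inf) = 0, so illegal positions get zero weight
    return w / w.sum(axis=-1, keepdims=True)

scores = np.zeros((4, 4))                                # uniform compatibilities for illustration
print(np.round(masked_attention_weights(scores), 2))
# Row i spreads its weight only over positions 0..i, e.g. row 3 is [0.25 0.25 0.25 0.25].
```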
1706.03762#12
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
12
How do the data needs of SMT and NMT compare? NMT promises both to generalize better (exploiting word similarity in embeddings) and condition on larger context (entire input and all prior output words). We built English-Spanish systems on WMT data,7 about 385.7 million English words paired with Spanish. To obtain a learning curve, we used 1/1024, 1/512, ..., 1/2, and all of the data. For SMT, the language model was trained on the Spanish part of each subset, respectively. In addition to a NMT and SMT system trained on each subset, we also used all additionally provided monolingual data for a big language model in contrastive SMT systems.
1706.03872#12
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
13
# 3.3 Position-wise Feed-Forward Networks In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between. FFN(x) = max(0, x W1 + b1) W2 + b2    (2) While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is dmodel = 512, and the inner-layer has dimensionality dff = 2048. # 3.4 Embeddings and Softmax Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension dmodel. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to [30]. In the embedding layers, we multiply those weights by √dmodel.
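Equation (2) is a two-layer ReLU network applied independently at each position. The sketch below uses randomly initialized placeholder weights with dmodel = 512 and dff = 2048, matching the dimensions quoted above; it is an illustration, not the authors' code.

```python
import numpy as np

d_model, d_ff = 512, 2048
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(d_model, d_ff)) * 0.02, np.zeros(d_ff)
W2, b2 = rng.normal(size=(d_ff, d_model)) * 0.02, np.zeros(d_model)

def position_wise_ffn(x):
    """FFN(x) = max(0, x W1 + b1) W2 + b2, applied to every position with the same weights."""
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2

x = rng.normal(size=(10, d_model))      # 10 positions
print(position_wise_ffn(x).shape)       # (10, 512)
```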
1706.03762#13
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
13
Results are shown in Figure 3. NMT exhibits a much steeper learning curve, starting with abysmal results (BLEU score of 1.6 vs. 16.4 for 1/1024 of the data), outperforming SMT 25.7 vs. 24.7 with 1/16 of the data (24.1 million words), and even beating the SMT system with a big language model with the full data set (31.1 for NMT, 28.4 for SMT, 30.4 for SMT+BigLM). 7Spanish was last represented in 2013; we used data from http://statmt.org/wmt13/translation-task.html
1706.03872#13
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
14
Table 1: Maximum path lengths, per-layer complexity and minimum number of sequential operations for different layer types. n is the sequence length, d is the representation dimension, k is the kernel size of convolutions and r the size of the neighborhood in restricted self-attention.

Layer Type                  | Complexity per Layer | Sequential Operations | Maximum Path Length
Self-Attention              | O(n^2 · d)           | O(1)                  | O(1)
Recurrent                   | O(n · d^2)           | O(n)                  | O(n)
Convolutional               | O(k · n · d^2)       | O(1)                  | O(log_k(n))
Self-Attention (restricted) | O(r · n · d)         | O(1)                  | O(n/r)

# 3.5 Positional Encoding Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension dmodel as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed [9]. In this work, we use sine and cosine functions of different frequencies:
1706.03762#14
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
14
7Spanish was last represented in 2013; we used data from http://statmt.org/wmt13/translation-task.html

Src: A Republican strategy to counter the re-election of Obama

Translations from systems trained on increasing fractions of the data (1/1024, 1/512, 1/256, 1/128, 1/64, 1/32, ...):
Un órgano de coordinación para el anuncio de libre determinación
Lista de una estrategia para luchar contra la elección de hojas de Ohio
Explosión realiza una estrategia divisiva de luchar contra las elecciones de autor
Una estrategia republicana para la eliminación de la reelección de Obama
Estrategia siria para contrarrestar la reelección del Obama.
Una estrategia republicana para contrarrestar la
1706.03872#14
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
15
In this work, we use sine and cosine functions of different frequencies:

PE(pos, 2i) = sin(pos / 10000^(2i/dmodel))
PE(pos, 2i+1) = cos(pos / 10000^(2i/dmodel))

where pos is the position and i is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from 2π to 10000 · 2π. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset k, PE(pos+k) can be represented as a linear function of PE(pos). We also experimented with using learned positional embeddings [9] instead, and found that the two versions produced nearly identical results (see Table 3 row (E)). We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training. # 4 Why Self-Attention
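The sinusoidal encoding defined above is straightforward to compute; a short NumPy sketch, assuming an even dmodel (function and variable names are illustrative):

```python
import numpy as np

def sinusoidal_positional_encoding(max_len, d_model):
    """PE[pos, 2i] = sin(pos / 10000^(2i/d_model)), PE[pos, 2i+1] = cos(pos / 10000^(2i/d_model))."""
    pos = np.arange(max_len)[:, None]                 # (max_len, 1)
    i = np.arange(d_model // 2)[None, :]              # (1, d_model/2)
    angle = pos / np.power(10000.0, 2 * i / d_model)  # one frequency per dimension pair
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angle)
    pe[:, 1::2] = np.cos(angle)
    return pe

pe = sinusoidal_positional_encoding(max_len=50, d_model=512)  # added to the input embeddings
```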
1706.03762#15
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
15
reelección de Obama

Figure 4: Translations of the first sentence of the test set using NMT system trained on varying amounts of training data. Under low resource conditions, NMT produces fluent output unrelated to the input.

The contrast between the NMT and SMT learning curves is quite striking. While NMT is able to exploit increasing amounts of training data more effectively, it is unable to get off the ground with training corpus sizes of a few million words or less. With 1/1024 of the training data, the output is completely unrelated to the input; some key words are properly translated with 1/256 of the data (estrategia for strategy, elección or elecciones for election), and starting with 1
1706.03872#15
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
16
# 4 Why Self-Attention In this section we compare various aspects of self-attention layers to the recurrent and convolu- tional layers commonly used for mapping one variable-length sequence of symbol representations (x1, ..., xn) to another sequence of equal length (z1, ..., zn), with xi, zi ∈ Rd, such as a hidden layer in a typical sequence transduction encoder or decoder. Motivating our use of self-attention we consider three desiderata. One is the total computational complexity per layer. Another is the amount of computation that can be parallelized, as measured by the minimum number of sequential operations required. The third is the path length between long-range dependencies in the network. Learning long-range dependencies is a key challenge in many sequence transduction tasks. One key factor affecting the ability to learn such dependencies is the length of the paths forward and backward signals have to traverse in the network. The shorter these paths between any combination of positions in the input and output sequences, the easier it is to learn long-range dependencies [12]. Hence we also compare the maximum path length between any two input and output positions in networks composed of the different layer types.
1706.03762#16
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
16
# 3.3 Rare Words Conventional wisdom states that neural machine translation models perform particularly poorly on rare words (Luong et al., 2015; Sennrich et al., 2016b; Arthur et al., 2016), due in part to the smaller vocabularies used by NMT systems. We examine this claim by comparing performance on rare word translation between NMT and SMT systems of similar quality for German–English and find that NMT systems actually outperform SMT systems on translation of very infrequent words. However, both NMT and SMT systems do continue to have difficulty translating some infrequent words, particularly those belonging to highly-inflected categories. For the neural machine translation model, we use a publicly available model8 with the training settings of Edinburgh’s WMT submission (Sennrich et al., 2016a). This was trained using # 8https://github.com/rsennrich/wmt16-scripts/
1706.03872#16
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
17
As noted in Table 1, a self-attention layer connects all positions with a constant number of sequentially executed operations, whereas a recurrent layer requires O(n) sequential operations. In terms of computational complexity, self-attention layers are faster than recurrent layers when the sequence length n is smaller than the representation dimensionality d, which is most often the case with sentence representations used by state-of-the-art models in machine translations, such as word-piece [38] and byte-pair [31] representations. To improve computational performance for tasks involving very long sequences, self-attention could be restricted to considering only a neighborhood of size r in the input sequence centered around the respective output position. This would increase the maximum path length to O(n/r). We plan to investigate this approach further in future work.
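To make the crossover point concrete, here is a small sketch that plugs numbers into the Table 1 complexities (constants dropped; the function name and the example values of n, d, k, r are illustrative assumptions):

```python
def per_layer_costs(n, d, k=3, r=64):
    """Rough per-layer operation counts following Table 1 (asymptotic terms only)."""
    return {
        "self-attention": n * n * d,
        "recurrent": n * d * d,
        "convolutional": k * n * d * d,
        "restricted self-attention": r * n * d,
    }

# With n = 70 tokens and d = 512 (n < d), full self-attention is the cheapest option;
# for a very long sequence (n = 10000), the restricted variant with r = 64 is far cheaper.
print(per_layer_costs(n=70, d=512))
print(per_layer_costs(n=10000, d=512))
```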
1706.03762#17
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
17
# 8https://github.com/rsennrich/wmt16-scripts/

Figure 5: Precision of translation and deletion rates by source word type, SMT (light blue) and NMT (dark green). The horizontal axis represents the corpus frequency of the source types, with the axis labels showing the upper end of the bin. Bin width is proportional to the number of word types in that frequency range. The upper part of the graph shows the precision averaged across all word types in the bin. The lower part shows the proportion of source tokens in the bin that were deleted.

Nematus9 (Sennrich et al., 2017), with byte-pair encodings (Sennrich et al., 2016b) to allow for open-vocabulary NMT.
1706.03872#17
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
18
A single convolutional layer with kernel width k < n does not connect all pairs of input and output positions. Doing so requires a stack of O(n/k) convolutional layers in the case of contiguous kernels, or O(logk(n)) in the case of dilated convolutions [18], increasing the length of the longest paths between any two positions in the network. Convolutional layers are generally more expensive than recurrent layers, by a factor of k. Separable convolutions [6], however, decrease the complexity considerably, to O(k · n · d + n · d2). Even with k = n, however, the complexity of a separable convolution is equal to the combination of a self-attention layer and a point-wise feed-forward layer, the approach we take in our model. As side benefit, self-attention could yield more interpretable models. We inspect attention distributions from our models and present and discuss examples in the appendix. Not only do individual attention heads clearly learn to perform different tasks, many appear to exhibit behavior related to the syntactic and semantic structure of the sentences. # 5 Training This section describes the training regime for our models. # 5.1 Training Data and Batching
1706.03762#18
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
18
Nematus9 (Sennrich et al., 2017), with byte-pair encodings (Sennrich et al., 2016b) to allow for open-vocabulary NMT. The SMT system that we used was trained using Moses (Koehn et al., 2007), and the training data and parameters match those described in Johns Hopkins University’s submission to the WMT shared task (Ding et al., 2016b). Both models have case-sensitive BLEU scores of 34.5 on the WMT 2016 news test set (for the NMT model, this reflects the BLEU score resulting from translation with a beam size of 1). We use a single corpus for computing our lexical frequency counts (a concatenation of Common Crawl, Europarl, and News Commentary). We follow the approach described by Koehn and Haddow (2012) for examining the effect of source word frequency on translation accuracy.10
1706.03872#18
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
19
# 5 Training This section describes the training regime for our models. # 5.1 Training Data and Batching We trained on the standard WMT 2014 English-German dataset consisting of about 4.5 million sentence pairs. Sentences were encoded using byte-pair encoding [3], which has a shared source- target vocabulary of about 37000 tokens. For English-French, we used the significantly larger WMT 2014 English-French dataset consisting of 36M sentences and split tokens into a 32000 word-piece vocabulary [38]. Sentence pairs were batched together by approximate sequence length. Each training batch contained a set of sentence pairs containing approximately 25000 source tokens and 25000 target tokens. # 5.2 Hardware and Schedule We trained our models on one machine with 8 NVIDIA P100 GPUs. For our base models using the hyperparameters described throughout the paper, each training step took about 0.4 seconds. We trained the base models for a total of 100,000 steps or 12 hours. For our big models,(described on the bottom line of table 3), step time was 1.0 seconds. The big models were trained for 300,000 steps (3.5 days). # 5.3 Optimizer We used the Adam optimizer [20] with β1 = 0.9, β2 = 0.98 and ϵ = 10−9. We varied the learning rate over the course of training, according to the formula:
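The token-budget batching described above (roughly 25000 source and 25000 target tokens per batch, with similar-length pairs grouped together) can be sketched as follows; this is an assumed reimplementation for illustration, not the authors' data pipeline:

```python
def batch_by_length(pairs, max_tokens=25000):
    """Group (source_tokens, target_tokens) pairs so that each batch holds roughly
    max_tokens source tokens and max_tokens target tokens."""
    pairs = sorted(pairs, key=lambda p: (len(p[0]), len(p[1])))  # similar lengths end up together
    batches, batch, src_count, tgt_count = [], [], 0, 0
    for src, tgt in pairs:
        if batch and (src_count + len(src) > max_tokens or tgt_count + len(tgt) > max_tokens):
            batches.append(batch)                                # current batch is full
            batch, src_count, tgt_count = [], 0, 0
        batch.append((src, tgt))
        src_count += len(src)
        tgt_count += len(tgt)
    if batch:
        batches.append(batch)
    return batches
```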
1706.03762#19
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
19
We follow the approach described by Koehn and Haddow (2012) for examining the effect of source word frequency on translation accuracy.10 The overall average precision is quite similar between the NMT and SMT systems, with the SMT system scoring 70.1% overall and the NMT system scoring 70.3%. This reflects the similar overall quality of the MT systems. Figure 5 gives a detailed breakdown. The values above the horizontal axis represent precisions, while the lower portion represents what proportion of the words were deleted. The first item of note is that the NMT system has an overall higher proportion of deleted words. Of the 64379 words examined, the NMT system is estimated to have deleted 3769 of them, while the SMT system deleted 2274. Both the NMT and SMT systems delete very frequent and very infrequent words at higher proportions than words that fall into the middle range. Across frequencies, the NMT systems delete a higher proportion of words than the SMT system does. (The related issue of translation length is discussed in more detail in Section 3.4.)
1706.03872#19
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
20
lrate = dmodel^-0.5 · min(step_num^-0.5, step_num · warmup_steps^-1.5) (3)

This corresponds to increasing the learning rate linearly for the first warmup_steps training steps, and decreasing it thereafter proportionally to the inverse square root of the step number. We used warmup_steps = 4000. # 5.4 Regularization We employ three types of regularization during training:

Table 2: The Transformer achieves better BLEU scores than previous state-of-the-art models on the English-to-German and English-to-French newstest2014 tests at a fraction of the training cost.
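Equation (3) is easy to reproduce; a small Python sketch of the schedule with warmup_steps = 4000 (the function name is illustrative):

```python
def transformer_lrate(step_num, d_model=512, warmup_steps=4000):
    """lrate = d_model^-0.5 * min(step_num^-0.5, step_num * warmup_steps^-1.5)  (Eq. 3)."""
    step_num = max(step_num, 1)  # guard against step 0
    return d_model ** -0.5 * min(step_num ** -0.5, step_num * warmup_steps ** -1.5)

# Linear warmup for the first 4000 steps, then inverse-square-root decay;
# the peak value at step 4000 is roughly 512**-0.5 * 4000**-0.5 ≈ 7.0e-4.
peak = transformer_lrate(4000)
```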
1706.03762#20
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
20
9https://github.com/rsennrich/nematus/

10First, we automatically align the source sentence and the machine translation output. We use fast-align (Dyer et al., 2013) to align the full training corpus (source and reference) along with the test source and MT output. We use the suggested standard options for alignment and then symmetrize the alignment with grow-diag-final-and. Each source word is either unaligned (“dropped”) or aligned to one or more target language words. For each target word to which the source word is aligned, we check if that target word appears in the reference translation. If the target word appears the same number of times in the MT output as in the reference, we award that alignment a score of one. If the target word appears more times in the MT output

The next interesting observation is what happens with unknown words (words which were never observed in the training corpus). The SMT system translates these correctly 53.2% of the time, while the NMT system translates them cor
1706.03872#20
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
21
Model                           | BLEU EN-DE | BLEU EN-FR | Training Cost (FLOPs) EN-DE | Training Cost (FLOPs) EN-FR
ByteNet [18]                    | 23.75      |            |                             |
Deep-Att + PosUnk [39]          |            | 39.2       |                             | 1.0 · 10^20
GNMT + RL [38]                  | 24.6       | 39.92      | 2.3 · 10^19                 | 1.4 · 10^20
ConvS2S [9]                     | 25.16      | 40.46      | 9.6 · 10^18                 | 1.5 · 10^20
MoE [32]                        | 26.03      | 40.56      | 2.0 · 10^19                 | 1.2 · 10^20
Deep-Att + PosUnk Ensemble [39] |            | 40.4       |                             | 8.0 · 10^20
GNMT + RL Ensemble [38]         | 26.30      | 41.16      | 1.8 · 10^20                 | 1.1 · 10^21
ConvS2S Ensemble [9]            | 26.36      | 41.29      | 7.7 · 10^19                 | 1.2 · 10^21
Transformer (base model)        | 27.3       | 38.1       | 3.3 · 10^18                 | 3.3 · 10^18
Transformer (big)               | 28.4       | 41.8       | 2.3 · 10^19                 | 2.3 · 10^19
1706.03762#21
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
21
than in the reference, we award fractional credit. If the target word does not appear in the reference, we award zero credit. We then average these scores over the full set of target words aligned to the given source word to compute the precision for that source word. Source words can then be binned by frequency and average translation precisions can be computed.

Table 2: Breakdown of the first 100 tokens that were unobserved in training or observed once in training, by hand-annotated category.

Label        | Unobserved | Observed Once
Adjective    | 4          | 10
Named Entity | 40         | 42
Noun         | 35         | 35
Number       | 12         | 4
Verb         | 3          | 6
Other        | 6          | 3

rectly 60.1% of the time. This is reflected in Figure 5, where the SMT system shows a steep curve up from the unobserved words, while the NMT system does not see a great jump.
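Putting the scoring procedure from footnote 10 and the paragraph above into code gives a sketch of the per-source-word precision; the exact form of the fractional credit is not spelled out, so the ratio used here is an assumption:

```python
from collections import Counter

def source_word_precision(aligned_target_words, mt_output, reference):
    """Score one source word; mt_output and reference are tokenized sentences.
    Returns None for unaligned ("dropped") source words. Fractional credit is
    assumed to be reference count / MT count."""
    mt_counts, ref_counts = Counter(mt_output), Counter(reference)
    if not aligned_target_words:
        return None                                      # dropped source word
    scores = []
    for w in aligned_target_words:
        if ref_counts[w] == 0:
            scores.append(0.0)                           # target word absent from the reference
        elif mt_counts[w] > ref_counts[w]:
            scores.append(ref_counts[w] / mt_counts[w])  # over-generated: fractional credit
        else:
            scores.append(1.0)                           # appears the same number of times
    return sum(scores) / len(scores)
```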
1706.03872#21
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
22
Residual Dropout We apply dropout [33] to the output of each sub-layer, before it is added to the sub-layer input and normalized. In addition, we apply dropout to the sums of the embeddings and the positional encodings in both the encoder and decoder stacks. For the base model, we use a rate of Pdrop = 0.1. Label Smoothing During training, we employed label smoothing of value ϵls = 0.1 [36]. This hurts perplexity, as the model learns to be more unsure, but improves accuracy and BLEU score. # 6 Results # 6.1 Machine Translation On the WMT 2014 English-to-German translation task, the big transformer model (Transformer (big) in Table 2) outperforms the best previously reported models (including ensembles) by more than 2.0 BLEU, establishing a new state-of-the-art BLEU score of 28.4. The configuration of this model is listed in the bottom line of Table 3. Training took 3.5 days on 8 P100 GPUs. Even our base model surpasses all previously published models and ensembles, at a fraction of the training cost of any of the competitive models.
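Label smoothing with ϵls = 0.1 replaces the one-hot target with a softened distribution; below is a minimal NumPy sketch of one common variant (the smoothing mass spread uniformly over the non-target vocabulary entries is an assumption about the exact formulation):

```python
import numpy as np

def label_smoothed_targets(labels, vocab_size, eps=0.1):
    """Put (1 - eps) on the true token and spread eps over the remaining entries."""
    targets = np.full((len(labels), vocab_size), eps / (vocab_size - 1))
    targets[np.arange(len(labels)), labels] = 1.0 - eps
    return targets

def smoothed_cross_entropy(log_probs, labels, eps=0.1):
    """Cross-entropy against the smoothed targets; log_probs has shape (batch, vocab)."""
    targets = label_smoothed_targets(labels, log_probs.shape[1], eps)
    return -(targets * log_probs).sum(axis=1).mean()
```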
1706.03762#22
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
22
Both SMT and NMT systems actually have their worst performance on words that were observed a single time in the training corpus, dropping to 48.6% and 52.2%, respectively; even worse than for unobserved words. Table 2 shows a breakdown of the categories of words that were unobserved in the training corpus or observed only once. The most common categories across both are named entity (including entity and location names) and nouns. The named entities can often be passed through unchanged (for example, the surname “Elabdellaoui” is broken into “E@@ lab@@ d@@ ell@@ a@@ oui” by the byte-pair encoding and is correctly passed through unchanged by both the NMT and SMT systems). Many of the nouns are compound nouns; when these are correctly translated, it may be attributed to compound-splitting (SMT) or byte-pair encoding (NMT). The factored SMT system also has access to the stemmed form of words, which can also play a similar role to byte-pair encoding in enabling translation of unobserved inflected forms (e.g. adjectives, verbs). Unsurprisingly, there are many
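The byte-pair splitting behind segmentations such as “E@@ lab@@ d@@ ell@@ a@@ oui” can be sketched as greedy application of a learned merge table; the merge list, end-of-word marker, and function name below are illustrative, not Nematus' implementation:

```python
def apply_bpe(word, merges):
    """Greedily apply learned BPE merges (ordered highest priority first) to one word."""
    symbols = list(word) + ["</w>"]                      # start from characters plus an end marker
    ranks = {pair: i for i, pair in enumerate(merges)}
    while len(symbols) > 1:
        pairs = [(ranks.get((a, b), float("inf")), i)
                 for i, (a, b) in enumerate(zip(symbols, symbols[1:]))]
        best_rank, i = min(pairs)
        if best_rank == float("inf"):                    # no learned merge applies any more
            break
        symbols[i:i + 2] = [symbols[i] + symbols[i + 1]] # merge the best-ranked adjacent pair
    return symbols

# A word never seen in training is still segmented into subword units that were seen,
# which is what lets the models above translate (or copy) rare and unknown words.
```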
1706.03872#22
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
23
On the WMT 2014 English-to-French translation task, our big model achieves a BLEU score of 41.0, outperforming all of the previously published single models, at less than 1/4 the training cost of the previous state-of-the-art model. The Transformer (big) model trained for English-to-French used dropout rate Pdrop = 0.1, instead of 0.3. For the base models, we used a single model obtained by averaging the last 5 checkpoints, which were written at 10-minute intervals. For the big models, we averaged the last 20 checkpoints. We used beam search with a beam size of 4 and length penalty α = 0.6 [38]. These hyperparameters were chosen after experimentation on the development set. We set the maximum output length during inference to input length + 50, but terminate early when possible [38]. Table 2 summarizes our results and compares our translation quality and training costs to other model architectures from the literature. We estimate the number of floating point operations used to train a model by multiplying the training time, the number of GPUs used, and an estimate of the sustained single-precision floating-point capacity of each GPU 5. # 6.2 Model Variations To evaluate the importance of different components of the Transformer, we varied our base model in different ways, measuring the change in performance on English-to-German translation on the
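Checkpoint averaging as described above (the last 5 checkpoints for the base models, the last 20 for the big ones) amounts to a parameter-wise mean; a small sketch assuming checkpoints are stored as name-to-array dicts (an assumption about the storage format):

```python
import numpy as np

def average_checkpoints(checkpoints):
    """Average parameter values across a list of checkpoints with identical keys/shapes."""
    averaged = {}
    for name in checkpoints[0]:
        averaged[name] = np.mean([ckpt[name] for ckpt in checkpoints], axis=0)
    return averaged

# e.g. averaged = average_checkpoints(last_five_checkpoints) for a base model
```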
1706.03762#23
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03762
24
To evaluate the importance of different components of the Transformer, we varied our base model in different ways, measuring the change in performance on English-to-German translation on the

5We used values of 2.8, 3.7, 6.0 and 9.5 TFLOPS for K80, K40, M40 and P100, respectively.

Table 3: Variations on the Transformer architecture. Unlisted values are identical to those of the base model. All metrics are on the English-to-German translation development set, newstest2013. Listed perplexities are per-wordpiece, according to our byte-pair encoding, and should not be compared to per-word perplexities.
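The training-cost estimates in Table 2 combine these numbers; a quick sketch of the calculation, using the step counts and step times reported in Section 5.2 and the 9.5 TFLOPS figure for the P100 (the function name is illustrative):

```python
def training_flops(step_time_s, num_steps, num_gpus, sustained_flops_per_gpu):
    """Cost ≈ wall-clock training time × number of GPUs × sustained FLOPS per GPU."""
    return step_time_s * num_steps * num_gpus * sustained_flops_per_gpu

# Base model: 100,000 steps at ~0.4 s/step on 8 P100s (~9.5 TFLOPS sustained each)
base = training_flops(0.4, 100_000, 8, 9.5e12)   # ≈ 3.0e18, close to the 3.3e18 listed in Table 2
# Big model: 300,000 steps at ~1.0 s/step
big = training_flops(1.0, 300_000, 8, 9.5e12)    # ≈ 2.3e19
```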
1706.03762#24
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
24
The categories which involve more extensive inflection (adjectives and verbs) are arguably the most interesting. Adjectives and verbs have worse accuracy rates and higher deletion rates than nouns across most word frequencies. We show examples in Figure 6 of situations where the NMT system succeeds and fails, and contrast it with the failures of the SMT system. In Example 1, the NMT system successfully translates the unobserved adjective choreographiertes (choreographed), while

(1) ... choreographiertes Gesamtkunstwerk ... (2) ... die Polizei ihn einkesselte.
(1) chore@@ ograph@@ iertes (2) ein@@ kes@@ sel@@ te
(1) ... choreographed overall artwork ... (2) ... police stabbed him.
(1) ... choreographiertes total work of art ... (2) ... police einkesselte him.
(1) ... choreographed complete work of art ... (2) ... police closed in on him.

Figure 6: Examples of words that were unobserved in the training corpus, their byte-pair encodings, and their translations.
1706.03872#24
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
25
base (A) (B) (C) (D) N dmodel 6 512 2 4 8 256 1024 dff 2048 1024 4096 h 8 1 4 16 32 dk 64 512 128 32 16 16 32 32 128 dv 64 512 128 32 16 32 128 Pdrop 0.1 0.0 0.2 ϵls 0.1 0.0 0.2 PPL train steps (dev) 100K 4.92 5.29 5.00 4.91 5.01 5.16 5.01 6.11 5.19 4.88 5.75 4.66 5.12 4.75 5.77 4.95 4.67 5.47 4.92 300K 4.33 BLEU params ×106 (dev) 25.8 65 24.9 25.5 25.8 25.4 25.1 25.4 23.7 25.3 25.5 24.5 26.0 25.4 26.2 24.6 25.5 25.3 25.7 25.7 26.4 58 60 36 50 80 28 168 53 90 (E) big 6 positional embedding instead of sinusoids 1024 4096 16 0.3 213 development set, newstest2013. We used beam search as described in the previous section, but no checkpoint averaging. We present these results in Table 3.
1706.03762#25
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
25
Figure 6: Examples of words that were unobserved in the training corpus, their byte-pair encodings, and their translations.

the SMT system does not. In Example 2, the SMT system simply passes the German verb einkesselte (closed in on) unchanged into the output, while the NMT system fails silently, selecting the fluent-sounding but semantically inappropriate “stabbed” instead. While there remains room for improvement, NMT systems (at least those using byte-pair encoding) perform better on very low-frequency words than SMT systems do. Byte-pair encoding is sometimes sufficient (much like stemming or compound-splitting) to allow the successful translation of rare words even though it does not necessarily split words at morphological boundaries. As with the fluent-sounding but semantically inappropriate examples from domain-mismatch, NMT may sometimes fail similarly when it encounters unknown words even in-domain. # 3.4 Long Sentences A well-known flaw of early encoder-decoder NMT models was the inability to properly translate long sentences (Cho et al., 2014; Pouget-Abadie et al., 2014). The introduction of the attention model remedied this problem somewhat. But how well?
1706.03872#25
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
26
development set, newstest2013. We used beam search as described in the previous section, but no checkpoint averaging. We present these results in Table 3. In Table 3 rows (A), we vary the number of attention heads and the attention key and value dimensions, keeping the amount of computation constant, as described in Section 3.2.2. While single-head attention is 0.9 BLEU worse than the best setting, quality also drops off with too many heads. In Table 3 rows (B), we observe that reducing the attention key size dk hurts model quality. This suggests that determining compatibility is not easy and that a more sophisticated compatibility function than dot product may be beneficial. We further observe in rows (C) and (D) that, as expected, bigger models are better, and dropout is very helpful in avoiding over-fitting. In row (E) we replace our sinusoidal positional encoding with learned positional embeddings [9], and observe nearly identical results to the base model. # 6.3 English Constituency Parsing To evaluate if the Transformer can generalize to other tasks we performed experiments on English constituency parsing. This task presents specific challenges: the output is subject to strong structural constraints and is significantly longer than the input. Furthermore, RNN sequence-to-sequence models have not been able to attain state-of-the-art results in small-data regimes [37].
1706.03762#26
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
26
We used the large English-Spanish system from the learning curve experiments (Section 3.2), and used it to translate a collection of news test sets from the WMT shared tasks. We broke up these sets into buckets based on source sentence length (1-9 subword tokens, 10-19 subword tokens, etc.) and computed corpus-level BLEU scores for each. Figure 7 shows the results. While overall NMT is better than SMT, the SMT system outperforms NMT on sentences of length 60 and higher. Quality for the two systems is relatively close, except for the very long sentences (80 and more tokens). The quality of the NMT system is dramatically

Figure 7 (plot): BLEU scores with varying sentence length (source, subword count), Neural vs. Phrase-Based.
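The length-bucketed evaluation described above can be sketched as follows; the bucketing helper is illustrative, and the corpus-level BLEU per bucket would then be computed with an external scorer such as sacreBLEU:

```python
def bucket_by_source_length(examples, width=10):
    """Group (source_subwords, hypothesis, reference) triples into buckets of
    source length 1-9, 10-19, 20-29, ... as in the experiment described above."""
    buckets = {}
    for src, hyp, ref in examples:
        lo = (len(src) // width) * width
        buckets.setdefault((max(lo, 1), lo + width - 1), []).append((hyp, ref))
    return buckets

# Corpus-level BLEU is then computed separately over the hypotheses and references
# inside each bucket, giving one score per source-length range.
```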
1706.03872#26
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
27
We trained a 4-layer transformer with dmodel = 1024 on the Wall Street Journal (WSJ) portion of the Penn Treebank [25], about 40K training sentences. We also trained it in a semi-supervised setting, using the larger high-confidence and BerkeleyParser corpora with approximately 17M sentences [37]. We used a vocabulary of 16K tokens for the WSJ only setting and a vocabulary of 32K tokens for the semi-supervised setting. We performed only a small number of experiments to select the dropout, both attention and residual (section 5.4), learning rates and beam size on the Section 22 development set; all other parameters remained unchanged from the English-to-German base translation model. During inference, we

Table 4: The Transformer generalizes well to English constituency parsing (Results are on Section 23 of WSJ)
1706.03762#27
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
27
[Figure 7: Quality of translations based on sentence length. SMT outperforms NMT for sentences longer than 60 subword tokens. For very long sentences (80+) quality is much worse due to too short output.] lower for these since it produces too short translations (length ratio 0.859, as opposed to 1.024). # 3.5 Word Alignment The key contribution of the attention model in neural machine translation (Bahdanau et al., 2015) was the imposition of an alignment of the output words to the input words. This takes the shape of a probability distribution over the input words which is used to weigh them in a bag-of-words representation of the input sentence. Arguably, this attention model does not functionally play the role of a word alignment between the source and the target, at least not in the same way as its analog in statistical machine translation. While in both cases alignment is a latent variable that is used to obtain probability distributions over words or phrases, arguably the attention model has a broader role. For instance, when translating a verb, attention may also be paid to its subject and object, since these may disambiguate it. To further complicate matters, the word representations are products of bidirectional gated recurrent neural networks, which have the effect that each word representation is informed by the entire sentence context.
1706.03872#27
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
28
Table 4: The Transformer generalizes well to English constituency parsing (results are on Section 23 of WSJ).
Parser | Training | WSJ 23 F1
Vinyals & Kaiser et al. (2014) [37] | WSJ only, discriminative | 88.3
Petrov et al. (2006) [29] | WSJ only, discriminative | 90.4
Zhu et al. (2013) [40] | WSJ only, discriminative | 90.4
Dyer et al. (2016) [8] | WSJ only, discriminative | 91.7
Transformer (4 layers) | WSJ only, discriminative | 91.3
Zhu et al. (2013) [40] | semi-supervised | 91.3
Huang & Harper (2009) [14] | semi-supervised | 91.3
McClosky et al. (2006) [26] | semi-supervised | 92.1
Vinyals & Kaiser et al. (2014) [37] | semi-supervised | 92.1
Transformer (4 layers) | semi-supervised | 92.7
Luong et al. (2015) [23] | multi-task | 93.0
Dyer et al. (2016) [8] | generative | 93.3
increased the maximum output length to input length + 300. We used a beam size of 21 and α = 0.3 for both WSJ only and the semi-supervised setting.
1706.03762#28
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
28
But there is a clear need for an alignment mechanism between source and target words. For instance, prior work used the alignments provided by the attention model to interpolate word translation decisions with traditional probabilistic dictionaries (Arthur et al., 2016), for the introduction of coverage and fertility models (Tu et al., 2016), etc. [Figure 8: Word alignment for English–German: comparing the attention model states (green boxes with probability in percent if over 10) with alignments obtained from fast-align (blue outlines).]
1706.03872#28
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
29
increased the maximum output length to input length + 300. We used a beam size of 21 and α = 0.3 for both WSJ only and the semi-supervised setting. Our results in Table 4 show that despite the lack of task-specific tuning our model performs surprisingly well, yielding better results than all previously reported models with the exception of the Recurrent Neural Network Grammar [8]. In contrast to RNN sequence-to-sequence models [37], the Transformer outperforms the BerkeleyParser [29] even when training only on the WSJ training set of 40K sentences. # 7 Conclusion In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention. For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art. In the former task our best model outperforms even all previously reported ensembles.
1706.03762#29
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
29
dictionaries (Arthur et al., 2016), for the introduction of coverage and fertility models (Tu et al., 2016), etc. But is the attention model in fact the proper means? To examine this, we compare the soft alignment matrix (the sequence of attention vectors) with word alignments obtained by traditional word alignment methods. We use incremental fast-align (Dyer et al., 2013) to align the input and output of the neural machine translation system. See Figure 8 for an illustration. We compare the word attention states (green boxes) with the word alignments obtained with fast-align (blue outlines). For most words, these match up pretty well. Both attention states and fast-align alignment points are a bit fuzzy around the function words have-been/sind. The attention model may settle on alignments that do not correspond with our intuition or alignment points obtained with fast-align. See Figure 9 for the reverse language direction, German–English. All the alignment points appear to be off by one position. We are not aware of any intuitive explanation for this divergent behavior — the translation quality is high for both systems. We measure how well the soft alignment (attention model) of the NMT system matches the alignments of fast-align with two metrics:
1706.03872#29
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
30
We are excited about the future of attention-based models and plan to apply them to other tasks. We plan to extend the Transformer to problems involving input and output modalities other than text and to investigate local, restricted attention mechanisms to efficiently handle large inputs and outputs such as images, audio and video. Making generation less sequential is another research goal of ours. The code we used to train and evaluate our models is available at https://github.com/tensorflow/tensor2tensor. Acknowledgements We are grateful to Nal Kalchbrenner and Stephan Gouws for their fruitful comments, corrections and inspiration. # References [1] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016. [2] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014. [3] Denny Britz, Anna Goldie, Minh-Thang Luong, and Quoc V. Le. Massive exploration of neural machine translation architectures. CoRR, abs/1703.03906, 2017.
1706.03762#30
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
30
We measure how well the soft alignment (attention model) of the NMT system matches the alignments of fast-align with two metrics: • a match score that checks for each output word if the aligned input word according to fast-align is indeed the input word that received the highest attention probability, and • a probability mass score that sums up the probability mass given to each alignment point obtained from fast-align. In these scores, we have to handle byte pair encoding and many-to-many alignments (footnote 11). [Figure 9: Mismatch between attention states and desired word alignments (German–English).]
1706.03872#30
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
31
[4] Jianpeng Cheng, Li Dong, and Mirella Lapata. Long short-term memory-networks for machine reading. arXiv preprint arXiv:1601.06733, 2016. [5] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. CoRR, abs/1406.1078, 2014. [6] Francois Chollet. Xception: Deep learning with depthwise separable convolutions. arXiv preprint arXiv:1610.02357, 2016. [7] Junyoung Chung, Çaglar Gülçehre, Kyunghyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. CoRR, abs/1412.3555, 2014. [8] Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. Recurrent neural network grammars. In Proc. of NAACL, 2016.
1706.03762#31
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
31
In these scores, we have to handle byte pair encoding and many-to-many alignments (footnote 11). In our experiment, we use the neural machine translation models provided by Edinburgh (footnote 12) (Sennrich et al., 2016a). We run fast-align on the same parallel data sets to obtain alignment models and used them to align the input and output of the NMT system. Table 3 shows alignment scores for the systems. The results suggest that, while drastic, the divergence for German–English is an outlier. We note, however, that we have seen such a large divergence also under different data conditions.
1706.03872#31
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
32
[9] Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. Convolutional sequence to sequence learning. arXiv preprint arXiv:1705.03122v2, 2017. [10] Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013. [11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016. [12] Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, and Jürgen Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies, 2001. [13] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997. [14] Zhongqiang Huang and Mary Harper. Self-training PCFG grammars with latent annotations across languages. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 832–841. ACL, August 2009.
1706.03762#32
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
32
Footnote 11: (1) NMT operates on subwords, but fast-align is run on full words. (2) If an input word is split into subwords by byte pair encoding, then we add their attention scores. (3) If an output word is split into subwords, then we take the average of their attention vectors. (4) The match scores and probability mass scores are computed as average over output word-level scores. (5) If an output word has no fast-align alignment point, it is ignored in this computation. (6) If an output word is fast-aligned to multiple input words, then (6a) for the match score: count it as correct if the n aligned words are among the top n highest scoring words according to attention, and (6b) for the probability mass score: add up their attention scores. Footnote 12: https://github.com/rsennrich/wmt16-scripts
Table 3: Scores indicating overlap between attention probabilities and alignments obtained with fast-align.
Language Pair | Match | Prob.
German–English | 14.9% | 16.0%
English–German | 77.2% | 63.2%
Czech–English | 78.0% | 63.3%
English–Czech | 76.1% | 59.7%
Russian–English | 72.5% | 65.0%
English–Russian | 73.4% | 64.1%
1706.03872#32
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
33
[15] Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016. [16] Łukasz Kaiser and Samy Bengio. Can active memory replace attention? In Advances in Neural Information Processing Systems (NIPS), 2016. [17] Łukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. In International Conference on Learning Representations (ICLR), 2016. [18] Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. Neural machine translation in linear time. arXiv preprint arXiv:1610.10099v2, 2017. [19] Yoon Kim, Carl Denton, Luong Hoang, and Alexander M. Rush. Structured attention networks. In International Conference on Learning Representations, 2017. [20] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
1706.03762#33
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
33
Table 3: Scores indicating overlap between attention probabilities and alignments obtained with fast-align. Note that the attention model may produce better word alignments by guided alignment training (Chen et al., 2016; Liu et al., 2016) where supervised word alignments (such as the ones produced by fast-align) are provided to model training. # 3.6 Beam Search The task of decoding is to find the full sentence translation with the highest probability. In statistical machine translation, this problem has been addressed with heuristic search techniques that explore a subset of the space of possible translations. A common feature of these search techniques is a beam size parameter that limits the number of partial translations maintained per input word. There is typically a straightforward relationship between this beam size parameter and the model score of resulting translations and also their quality score (e.g., BLEU). While there are diminishing returns for increasing the beam parameter, typically improvements in these scores can be expected with larger beams.
1706.03872#33
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
34
[20] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015. [21] Oleksii Kuchaiev and Boris Ginsburg. Factorization tricks for LSTM networks. arXiv preprint arXiv:1703.10722, 2017. [22] Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130, 2017. [23] Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114, 2015. [24] Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025, 2015. [25] Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational linguistics, 19(2):313–330, 1993.
1706.03762#34
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
34
Decoding in neural translation models can be set up in similar fashion. When predicting the next output word, we may not only commit to the highest scoring word prediction but also maintain the next best scoring words in a list of partial translations. We record with each partial translation the word translation probabilities (obtained from the softmax), extend each partial translation with subsequent word predictions and accumulate these scores. Since the number of partial translations explodes exponentially with each new output word, we prune them down to a beam of highest scoring partial translations. As in traditional statistical machine translation decoding, increasing the beam size allows us to explore a larger set of the space of possible translations and hence find translations with better model
[Plot: BLEU vs. beam size (1–1,000) for Czech–English and English–Czech, unnormalized vs. normalized.]
1706.03872#34
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
35
[26] David McClosky, Eugene Charniak, and Mark Johnson. Effective self-training for parsing. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 152–159. ACL, June 2006. [27] Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention model. In Empirical Methods in Natural Language Processing, 2016. [28] Romain Paulus, Caiming Xiong, and Richard Socher. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304, 2017. [29] Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 433–440. ACL, July 2006. [30] Ofir Press and Lior Wolf. Using the output embedding to improve language models. arXiv preprint arXiv:1608.05859, 2016. [31] Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909, 2015.
1706.03762#35
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
35
[Plot panels: BLEU vs. beam size (1–1,000), unnormalized vs. normalized; panels include German–English and English–German.]
1706.03872#35
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]
1706.03762
36
[32] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538, 2017. [33] Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014. [34] Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2440–2448. Curran Associates, Inc., 2015. [35] Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112, 2014.
1706.03762#36
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
http://arxiv.org/pdf/1706.03762
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
cs.CL, cs.LG
15 pages, 5 figures
null
cs.CL
20170612
20230802
[ { "id": "1601.06733" }, { "id": "1508.07909" }, { "id": "1602.02410" }, { "id": "1703.03130" }, { "id": "1511.06114" }, { "id": "1610.10099" }, { "id": "1508.04025" }, { "id": "1705.04304" }, { "id": "1608.05859" }, { "id": "1701.06538" }, { "id": "1609.08144" }, { "id": "1607.06450" }, { "id": "1705.03122" }, { "id": "1610.02357" }, { "id": "1703.10722" } ]
1706.03872
36
[Plot panels: BLEU vs. beam size (1–1,000), unnormalized vs. normalized; panels include Romanian–English and English–Romanian.]
1706.03872#36
Six Challenges for Neural Machine Translation
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
http://arxiv.org/pdf/1706.03872
Philipp Koehn, Rebecca Knowles
cs.CL
12 pages; First Workshop on Neural Machine Translation, 2017
null
cs.CL
20170612
20170612
[ { "id": "1706.03872" }, { "id": "1612.06897" } ]