Dataset schema (column name, type, and observed length/value range):

- doi: string, lengths 10–10
- chunk-id: int64, values 0–936
- chunk: string, lengths 401–2.02k
- id: string, lengths 12–14
- title: string, lengths 8–162
- summary: string, lengths 228–1.92k
- source: string, lengths 31–31
- authors: string, lengths 7–6.97k
- categories: string, lengths 5–107
- comment: string, lengths 4–398
- journal_ref: string, lengths 8–194
- primary_category: string, lengths 5–17
- published: string, lengths 8–8
- updated: string, lengths 8–8
- references: list
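To make the column layout concrete, here is a minimal Python sketch that checks a record against the schema above; the example record, the helper function, and its name are illustrative assumptions, not part of the dump itself.

```python
# Minimal sketch: validate one record of the dump against the column schema above.
# The example record below is illustrative (long values abridged), not copied from the dump.

EXPECTED_FIELDS = [
    "doi", "chunk-id", "chunk", "id", "title", "summary", "source", "authors",
    "categories", "comment", "journal_ref", "primary_category",
    "published", "updated", "references",
]

def validate_record(record: dict) -> None:
    """Raise if a record is missing a column or has an obviously wrong type."""
    missing = [name for name in EXPECTED_FIELDS if name not in record]
    if missing:
        raise ValueError(f"missing columns: {missing}")
    if not isinstance(record["chunk-id"], int):
        raise TypeError("chunk-id should be an int64 value")
    if not isinstance(record["references"], list):
        raise TypeError("references should be a list of {'id': ...} dicts")

example = {
    "doi": "1705.04304", "chunk-id": 33, "chunk": "...", "id": "1705.04304#33",
    "title": "A Deep Reinforced Model for Abstractive Summarization",
    "summary": "...", "source": "http://arxiv.org/pdf/1705.04304",
    "authors": "Romain Paulus, Caiming Xiong, Richard Socher",
    "categories": "cs.CL", "comment": "null", "journal_ref": "null",
    "primary_category": "cs.CL", "published": "20170511", "updated": "20171113",
    "references": [{"id": "1603.08148"}],
}
validate_record(example)
```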
1705.04304
33
Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. Pointing the unknown words. arXiv preprint arXiv:1603.08148, 2016. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pp. 1693–1701, 2015. Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8): 1735–1780, 1997. Kai Hong and Ani Nenkova. Improving the estimation of word importance for news multi-document summarization - extended technical report. 2014. Kai Hong, Mitchell Marcus, and Ani Nenkova. System combination for multi-document summarization. In EMNLP, pp. 107–117, 2015. Hakan Inan, Khashayar Khosravi, and Richard Socher. Tying word vectors and word classifiers: A loss framework for language modeling. Proceedings of the International Conference on Learning Representations, 2017.
1705.04304#33
A Deep Reinforced Model for Abstractive Summarization
Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents and summaries however these models often include repetitive and incoherent phrases. We introduce a neural network model with a novel intra-attention that attends over the input and continuously generated output separately, and a new training method that combines standard supervised word prediction and reinforcement learning (RL). Models trained only with supervised learning often exhibit "exposure bias" - they assume ground truth is provided at each step during training. However, when standard word prediction is combined with the global sequence prediction training of RL the resulting summaries become more readable. We evaluate this model on the CNN/Daily Mail and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the CNN/Daily Mail dataset, an improvement over previous state-of-the-art models. Human evaluation also shows that our model produces higher quality summaries.
http://arxiv.org/pdf/1705.04304
Romain Paulus, Caiming Xiong, Richard Socher
cs.CL
null
null
cs.CL
20170511
20171113
[ { "id": "1603.08148" }, { "id": "1612.00563" }, { "id": "1608.02927" }, { "id": "1603.08887" }, { "id": "1511.06732" }, { "id": "1611.03382" }, { "id": "1603.08023" }, { "id": "1609.08144" }, { "id": "1601.06733" }, { "id": "1602.06023" }, { "id": "1608.05859" }, { "id": "1509.00685" } ]
1705.04146
34
Perplexity. In terms of perplexity, we observe that the regular sequence to sequence model fares poorly on this dataset, as the model requires the generation of many values that tend to be sparse. Adding an input copy mechanism greatly improves the perplexity as it allows the generation process to use values that were mentioned in the question. The output copying mechanism improves perplexity slightly over the input copy mechanism, as many values are repeated after their first occurrence. For instance, in Problem 2, the value “1326” is used twice, so even though the model cannot generate it easily in the first occurrence, the second one can simply be generated by copying the first one. We can observe that our model yields significant improvements over the baselines, demonstrating that the ability to generate new values by algebraic manipulation is essential in this task. An example of a program that is inferred is shown in Figure 4. The graph was generated by finding the most likely program z that generates y. Each node isolates a value in x, m, or y, where arrows indicate an operation executed with
1705.04146#34
Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems
Solving algebraic word problems requires executing a series of arithmetic operations---a program---to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.
http://arxiv.org/pdf/1705.04146
Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20170511
20171023
[]
1705.04304
34
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Junyi Jessy Li, Kapil Thadani, and Amanda Stent. The role of discourse units in near-extractive summarization. In 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pp. 137, 2016. Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out: Proceedings of the ACL-04 workshop, volume 8. Barcelona, Spain, 2004. Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv preprint arXiv:1603.08023, 2016. Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. The Stanford CoreNLP natural language processing toolkit. In ACL (System Demonstrations), pp. 55–60, 2014.
1705.04304#34
A Deep Reinforced Model for Abstractive Summarization
Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents and summaries however these models often include repetitive and incoherent phrases. We introduce a neural network model with a novel intra-attention that attends over the input and continuously generated output separately, and a new training method that combines standard supervised word prediction and reinforcement learning (RL). Models trained only with supervised learning often exhibit "exposure bias" - they assume ground truth is provided at each step during training. However, when standard word prediction is combined with the global sequence prediction training of RL the resulting summaries become more readable. We evaluate this model on the CNN/Daily Mail and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the CNN/Daily Mail dataset, an improvement over previous state-of-the-art models. Human evaluation also shows that our model produces higher quality summaries.
http://arxiv.org/pdf/1705.04304
Romain Paulus, Caiming Xiong, Richard Socher
cs.CL
null
null
cs.CL
20170511
20171113
[ { "id": "1603.08148" }, { "id": "1612.00563" }, { "id": "1608.02927" }, { "id": "1603.08887" }, { "id": "1511.06732" }, { "id": "1611.03382" }, { "id": "1603.08023" }, { "id": "1609.08144" }, { "id": "1601.06733" }, { "id": "1602.06023" }, { "id": "1608.05859" }, { "id": "1509.00685" } ]
1705.04146
35
by finding the most likely program z that generates y. Each node isolates a value in x, m, or y, where arrows indicate an operation executed with the outgoing nodes as arguments and the incoming node as the return of the operation. For simplicity, operations that copy or convert values (e.g. from string to float) were not included, but nodes that were copied/converted share the same color. Examples of tokens where our model can obtain the perplexity reduction are the values “0.025”, “0.023”, “0.002” and finally the answer “E”, as these cannot be copied from the input or output.
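As a reading aid for the graph structure just described, here is a small illustrative sketch of how such program nodes could be represented; the class, its field names, and the specific nodes are assumptions drawn from the surrounding example, not the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    value: str                    # e.g. "0.025", "0.002", "E"
    origin: str                   # "x" (question), "m" (memory) or "y" (rationale)
    op: Optional[str] = None      # operation that produced the value, e.g. "div", "sub", "check"
    args: List["Node"] = field(default_factory=list)   # outgoing nodes used as arguments

# Values quoted above: two per-capsule costs produced by divisions, their difference
# by a subtraction, and the answer letter by a check over the answer choices.
cost_r = Node("0.025", "m", op="div")
cost_t = Node("0.023", "m", op="div")
diff = Node("0.002", "y", op="sub", args=[cost_r, cost_t])
answer = Node("E", "y", op="check", args=[diff])
```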
1705.04146#35
Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems
Solving algebraic word problems requires executing a series of arithmetic operations---a program---to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.
http://arxiv.org/pdf/1705.04146
Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20170511
20171023
[]
1705.04304
35
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. Proceedings of the International Conference on Learning Representations, 2017. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pp. 3111–3119, 2013. Ramesh Nallapati, Bowen Zhou, Çağlar Gülçehre, Bing Xiang, et al. Abstractive text summarization using sequence-to-sequence RNNs and beyond. arXiv preprint arXiv:1602.06023, 2016. Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. SummaRuNNer: A recurrent neural network based sequence model for extractive summarization of documents. Proceedings of the 31st AAAI conference, 2017. Mohammad Norouzi, Samy Bengio, Navdeep Jaitly, Mike Schuster, Yonghui Wu, Dale Schuurmans, et al. Reward augmented maximum likelihood for neural structured prediction. In Advances In Neural Information Processing Systems, pp. 1723–1731, 2016.
1705.04304#35
A Deep Reinforced Model for Abstractive Summarization
Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents and summaries however these models often include repetitive and incoherent phrases. We introduce a neural network model with a novel intra-attention that attends over the input and continuously generated output separately, and a new training method that combines standard supervised word prediction and reinforcement learning (RL). Models trained only with supervised learning often exhibit "exposure bias" - they assume ground truth is provided at each step during training. However, when standard word prediction is combined with the global sequence prediction training of RL the resulting summaries become more readable. We evaluate this model on the CNN/Daily Mail and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the CNN/Daily Mail dataset, an improvement over previous state-of-the-art models. Human evaluation also shows that our model produces higher quality summaries.
http://arxiv.org/pdf/1705.04304
Romain Paulus, Caiming Xiong, Richard Socher
cs.CL
null
null
cs.CL
20170511
20171113
[ { "id": "1603.08148" }, { "id": "1612.00563" }, { "id": "1608.02927" }, { "id": "1603.08887" }, { "id": "1511.06732" }, { "id": "1611.03382" }, { "id": "1603.08023" }, { "id": "1609.08144" }, { "id": "1601.06733" }, { "id": "1602.06023" }, { "id": "1608.05859" }, { "id": "1509.00685" } ]
1705.04146
36
BLEU. We observe that the regular sequence to sequence model achieves a low BLEU score. In fact, due to the high perplexities the model generates very short rationales, which frequently consist of segments similar to “Answer should be D”, as most rationales end with similar statements. By applying the copy mechanism the BLEU score improves substantially, as the model can define the variables that are used in the rationale. Interestingly, the output copy mechanism adds no further improvement in the perplexity evaluation. This is because during decoding all values that can be copied from the output are values that could have been generated by the model either from the softmax or the input copy mechanism. As such, adding an output copying mechanism adds little to the expressiveness of the model during decoding. Finally, our model can achieve the highest BLEU score as it has the mechanism to generate the intermediate and final values in the rationale. Accuracy. In terms of accuracy, we see that all baseline models obtain values close to chance (20%), indicating that they are completely unable to solve the problem. In contrast, we see that our model can solve problems at a rate that is significantly higher than chance, demonstrating the value of our program-driven approach, and its ability to learn to generate programs.
1705.04146#36
Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems
Solving algebraic word problems requires executing a series of arithmetic operations---a program---to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.
http://arxiv.org/pdf/1705.04146
Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20170511
20171023
[]
1705.04304
36
Benjamin Nye and Ani Nenkova. Identification and characterization of newsworthy verbs in world news. In HLT-NAACL, pp. 1440–1445, 2015. Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global vectors for word representation. In EMNLP, volume 14, pp. 1532–1543, 2014. Ofir Press and Lior Wolf. Using the output embedding to improve language models. arXiv preprint arXiv:1608.05859, 2016. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732, 2015. Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. Self-critical sequence training for image captioning. arXiv preprint arXiv:1612.00563, 2016. Alexander M Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685, 2015.
1705.04304#36
A Deep Reinforced Model for Abstractive Summarization
Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents and summaries however these models often include repetitive and incoherent phrases. We introduce a neural network model with a novel intra-attention that attends over the input and continuously generated output separately, and a new training method that combines standard supervised word prediction and reinforcement learning (RL). Models trained only with supervised learning often exhibit "exposure bias" - they assume ground truth is provided at each step during training. However, when standard word prediction is combined with the global sequence prediction training of RL the resulting summaries become more readable. We evaluate this model on the CNN/Daily Mail and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the CNN/Daily Mail dataset, an improvement over previous state-of-the-art models. Human evaluation also shows that our model produces higher quality summaries.
http://arxiv.org/pdf/1705.04304
Romain Paulus, Caiming Xiong, Richard Socher
cs.CL
null
null
cs.CL
20170511
20171113
[ { "id": "1603.08148" }, { "id": "1612.00563" }, { "id": "1608.02927" }, { "id": "1603.08887" }, { "id": "1511.06732" }, { "id": "1611.03382" }, { "id": "1603.08023" }, { "id": "1609.08144" }, { "id": "1601.06733" }, { "id": "1602.06023" }, { "id": "1608.05859" }, { "id": "1509.00685" } ]
1705.04146
37
In general, the problems we solve correctly correspond to simple problems that can be solved in one or two operations. Examples include questions such as “Billy cut up each cake into 10 slices, and ended up with 120 slices altogether. How many cakes did she cut up? A) 9 B) 7 C) 12 D) 14 E) 16”, which can be solved in a single step. In this case, our model predicts “120 / 10 = 12 cakes. Answer is C” as the rationale, which is reasonable. # 6.5 Discussion. While we show that our model can outperform the models built up to date, generating complex rationales as those shown in Figure 1 correctly is still an unsolved problem, as each additional step adds complexity to the problem both during inference and decoding. Yet, this is the first result showing that it is possible to solve math problems in such a manner, and we believe this modeling approach and dataset will drive work on this problem. # 7 Related Work
1705.04146#37
Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems
Solving algebraic word problems requires executing a series of arithmetic operations---a program---to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.
http://arxiv.org/pdf/1705.04146
Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20170511
20171023
[]
1705.04304
37
Evan Sandhaus. The New York Times annotated corpus. Linguistic Data Consortium, Philadelphia, 6(12):e26752, 2008. Baskaran Sankaran, Haitao Mi, Yaser Al-Onaizan, and Abe Ittycheriah. Temporal attention model for neural machine translation. arXiv preprint arXiv:1608.02927, 2016. Abigail See, Peter J. Liu, and Christopher D. Manning. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1073–1083, July 2017. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pp. 3104–3112, 2014. Arun Venkatraman, Martial Hebert, and J Andrew Bagnell. Improving multi-step prediction of learned time series models. In AAAI, pp. 3024–3030, 2015. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In Advances in Neural Information Processing Systems, pp. 2692–2700, 2015.
1705.04304#37
A Deep Reinforced Model for Abstractive Summarization
Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents and summaries however these models often include repetitive and incoherent phrases. We introduce a neural network model with a novel intra-attention that attends over the input and continuously generated output separately, and a new training method that combines standard supervised word prediction and reinforcement learning (RL). Models trained only with supervised learning often exhibit "exposure bias" - they assume ground truth is provided at each step during training. However, when standard word prediction is combined with the global sequence prediction training of RL the resulting summaries become more readable. We evaluate this model on the CNN/Daily Mail and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the CNN/Daily Mail dataset, an improvement over previous state-of-the-art models. Human evaluation also shows that our model produces higher quality summaries.
http://arxiv.org/pdf/1705.04304
Romain Paulus, Caiming Xiong, Richard Socher
cs.CL
null
null
cs.CL
20170511
20171113
[ { "id": "1603.08148" }, { "id": "1612.00563" }, { "id": "1608.02927" }, { "id": "1603.08887" }, { "id": "1511.06732" }, { "id": "1611.03382" }, { "id": "1603.08023" }, { "id": "1609.08144" }, { "id": "1601.06733" }, { "id": "1602.06023" }, { "id": "1608.05859" }, { "id": "1509.00685" } ]
1705.04146
38
# 7 Related Work Extensive efforts have been made in the domain of math problem solving (Hosseini et al., 2014; Kushman et al., 2014; Roy and Roth, 2015), which aim at obtaining the correct answer to a given math problem. Other work has focused on learning to map math expressions into formal languages (Roy et al., 2016). We aim to generate natural language rationales, where the bindings between variables and the problem solving approach are mixed into
1705.04146#38
Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems
Solving algebraic word problems requires executing a series of arithmetic operations---a program---to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.
http://arxiv.org/pdf/1705.04146
Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20170511
20171023
[]
1705.04304
38
Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256, 1992. Ronald J Williams and David Zipser. A learning algorithm for continually running fully recurrent neural networks. Neural computation, 1(2):270–280, 1989. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016. Yinfei Yang and Ani Nenkova. Detecting information-dense texts in multiple news domains. In AAAI, pp. 1650–1656, 2014. Wenyuan Zeng, Wenjie Luo, Sanja Fidler, and Raquel Urtasun. Efficient summarization with read-again and copy mechanism. arXiv preprint arXiv:1611.03382, 2016. # A NYT DATASET A.1 PREPROCESSING
1705.04304#38
A Deep Reinforced Model for Abstractive Summarization
Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents and summaries however these models often include repetitive and incoherent phrases. We introduce a neural network model with a novel intra-attention that attends over the input and continuously generated output separately, and a new training method that combines standard supervised word prediction and reinforcement learning (RL). Models trained only with supervised learning often exhibit "exposure bias" - they assume ground truth is provided at each step during training. However, when standard word prediction is combined with the global sequence prediction training of RL the resulting summaries become more readable. We evaluate this model on the CNN/Daily Mail and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the CNN/Daily Mail dataset, an improvement over previous state-of-the-art models. Human evaluation also shows that our model produces higher quality summaries.
http://arxiv.org/pdf/1705.04304
Romain Paulus, Caiming Xiong, Richard Socher
cs.CL
null
null
cs.CL
20170511
20171113
[ { "id": "1603.08148" }, { "id": "1612.00563" }, { "id": "1608.02927" }, { "id": "1603.08887" }, { "id": "1511.06732" }, { "id": "1611.03382" }, { "id": "1603.08023" }, { "id": "1609.08144" }, { "id": "1601.06733" }, { "id": "1602.06023" }, { "id": "1608.05859" }, { "id": "1509.00685" } ]
1705.04146
39
[Figure 4 content, garbled in extraction: a multiple-choice problem asking for the difference in cost per capsule between bottle R and bottle T, with answer choices (A) $0.25 (B) $0.12 (C) $0.05 (D) $0.03 (E) $0.002, and the inferred program applying div, sub and check operations to values copied from the question; only the caption below is fully recoverable.] Figure 4: Illustration of the most likely latent program inferred by our algorithm to explain a held-out question-rationale pair. a single generative model that attempts to solve the problem while explaining the approach taken.
1705.04146#39
Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems
Solving algebraic word problems requires executing a series of arithmetic operations---a program---to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.
http://arxiv.org/pdf/1705.04146
Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20170511
20171023
[]
1705.04304
39
# A NYT DATASET A.1 PREPROCESSING We remove all documents that do not have a full article text, abstract or headline. We concatenate the headline, byline and full article text, separated by special tokens, to produce a single input sequence for each example. We tokenize the input and abstract pairs with the Stanford tokenizer (Manning et al., 2014). We convert all tokens to lower-case and replace all numbers with “0”, remove “(s)” and “(m)” marks in the abstracts and all occurrences of the following words, singular or plural, if they are surrounded by semicolons or at the end of the abstract: “photo”, “graph”, “chart”, “map”, “table” and “drawing”. Since the NYT abstracts almost never contain periods, we consider them multi-sentence summaries if we split sentences based on semicolons. This allows us to make the summary format and evaluation procedure similar to the CNN/Daily Mail dataset. These pre-processing steps give us an average of 549 input tokens and 40 output tokens per example, after limiting the input and output lengths to 800 and 100 tokens. A.2 DATASET SPLITS
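A rough sketch of the preprocessing just described follows. It is an approximation under stated assumptions: whitespace splitting stands in for the Stanford tokenizer, the separator tokens are placeholders, and the marker-removal rules are simplified; only the length limits (800 input / 100 output tokens) are taken directly from the text.

```python
import re

MEDIA = r"(?:photos?|graphs?|charts?|maps?|tables?|drawings?)"

def build_input(headline: str, byline: str, article: str, max_tokens: int = 800) -> list:
    # Concatenate headline, byline and article text with (placeholder) separator tokens.
    text = f"{headline} <byline> {byline} <article> {article}".lower()
    text = re.sub(r"\d", "0", text)            # replace every digit with "0"
    return text.split()[:max_tokens]

def build_summary(abstract: str, max_tokens: int = 100) -> list:
    text = abstract.lower()
    text = re.sub(r"\((s|m)\)", "", text)      # remove "(s)" and "(m)" marks
    # drop media words when they sit between semicolons or at the end of the abstract
    text = re.sub(r";\s*" + MEDIA + r"\s*(?=;|$)", ";", text)
    text = re.sub(r"\d", "0", text)            # replace every digit with "0"
    # NYT abstracts rarely contain periods, so semicolons act as sentence boundaries
    sentences = [s.strip() for s in text.split(";") if s.strip()]
    return " ; ".join(sentences).split()[:max_tokens]
```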
1705.04304#39
A Deep Reinforced Model for Abstractive Summarization
Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents and summaries however these models often include repetitive and incoherent phrases. We introduce a neural network model with a novel intra-attention that attends over the input and continuously generated output separately, and a new training method that combines standard supervised word prediction and reinforcement learning (RL). Models trained only with supervised learning often exhibit "exposure bias" - they assume ground truth is provided at each step during training. However, when standard word prediction is combined with the global sequence prediction training of RL the resulting summaries become more readable. We evaluate this model on the CNN/Daily Mail and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the CNN/Daily Mail dataset, an improvement over previous state-of-the-art models. Human evaluation also shows that our model produces higher quality summaries.
http://arxiv.org/pdf/1705.04304
Romain Paulus, Caiming Xiong, Richard Socher
cs.CL
null
null
cs.CL
20170511
20171113
[ { "id": "1603.08148" }, { "id": "1612.00563" }, { "id": "1608.02927" }, { "id": "1603.08887" }, { "id": "1511.06732" }, { "id": "1611.03382" }, { "id": "1603.08023" }, { "id": "1609.08144" }, { "id": "1601.06733" }, { "id": "1602.06023" }, { "id": "1608.05859" }, { "id": "1509.00685" } ]
1705.04146
40
a single generative model that attempts to solve the problem while explaining the approach taken. Our approach is strongly tied with the work on sequence to sequence transduction using the encoder-decoder paradigm (Sutskever et al., 2014; Bahdanau et al., 2014; Kalchbrenner and Blunsom, 2013), and inherits ideas from the extensive literature on semantic parsing (Jones et al., 2012; Berant et al., 2013; Andreas et al., 2013; Quirk et al., 2015; Liang et al., 2016; Neelakantan et al., 2016) and program generation (Reed and de Freitas, 2016; Graves et al., 2016), namely, the usage of an external memory, the application of different operators over values in the memory and the copying of stored values into the output sequence. Providing textual explanations for classification decisions has begun to receive attention, as part of increased interest in creating models whose decisions can be interpreted. Lei et al. (2016) jointly modeled both a classification decision, and the selection of the most relevant subsection of a document for making the classification decision. Hendricks et al. (2016) generate textual explanations for visual classification problems, but in contrast
1705.04146#40
Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems
Solving algebraic word problems requires executing a series of arithmetic operations---a program---to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.
http://arxiv.org/pdf/1705.04146
Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20170511
20171023
[]
1705.04304
40
A.2 DATASET SPLITS We created our own training, validation, and testing splits for this dataset. Instead of producing random splits, we sorted the documents by their publication date in chronological order and used the first 90% (589,284 examples) for training, the next 5% (32,736) for validation, and the remaining 5% (32,739) for testing. This makes our dataset splits easily reproducible and follows the intuition that if used in a production environment, such a summarization model would be used on recent articles rather than random ones. A.3 POINTER SUPERVISION
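The chronological split described above amounts to a few lines; this is a minimal sketch, assuming each example carries a publication-date field (the field and function names are illustrative).

```python
def chronological_split(examples, train_frac=0.90, valid_frac=0.05):
    # Sort by publication date so the most recent articles land in validation and test.
    examples = sorted(examples, key=lambda ex: ex["pub_date"])
    n_train = int(len(examples) * train_frac)
    n_valid = int(len(examples) * valid_frac)
    train = examples[:n_train]
    valid = examples[n_train:n_train + n_valid]
    test = examples[n_train + n_valid:]
    return train, valid, test
```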
1705.04304#40
A Deep Reinforced Model for Abstractive Summarization
Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents and summaries however these models often include repetitive and incoherent phrases. We introduce a neural network model with a novel intra-attention that attends over the input and continuously generated output separately, and a new training method that combines standard supervised word prediction and reinforcement learning (RL). Models trained only with supervised learning often exhibit "exposure bias" - they assume ground truth is provided at each step during training. However, when standard word prediction is combined with the global sequence prediction training of RL the resulting summaries become more readable. We evaluate this model on the CNN/Daily Mail and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the CNN/Daily Mail dataset, an improvement over previous state-of-the-art models. Human evaluation also shows that our model produces higher quality summaries.
http://arxiv.org/pdf/1705.04304
Romain Paulus, Caiming Xiong, Richard Socher
cs.CL
null
null
cs.CL
20170511
20171113
[ { "id": "1603.08148" }, { "id": "1612.00563" }, { "id": "1608.02927" }, { "id": "1603.08887" }, { "id": "1511.06732" }, { "id": "1611.03382" }, { "id": "1603.08023" }, { "id": "1609.08144" }, { "id": "1601.06733" }, { "id": "1602.06023" }, { "id": "1608.05859" }, { "id": "1509.00685" } ]
1705.04146
41
to our model, they first generate an answer, and then, conditional on the answer, generate an explanation. This effectively creates a post-hoc justification for a classification decision rather than a program for deducing an answer. These papers, like ours, have jointly modeled rationales and answer predictions; however, we are the first to use rationales to guide program induction. # 8 Conclusion In this work, we addressed the problem of generating rationales for math problems, where the task is to not only obtain the correct answer of the problem, but also generate a description of the method used to solve the problem. To this end, we collect 100,000 question and rationale pairs, and propose a model that can generate natural language and perform arithmetic operations in the same decoding process. Experiments show that our method outperforms existing neural models, in both the fluency of the rationales that are generated and the ability to solve the problem. # References Jacob Andreas, Andreas Vlachos, and Stephen Clark. 2013. Semantic parsing as machine translation. In Proc. of ACL.
1705.04146#41
Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems
Solving algebraic word problems requires executing a series of arithmetic operations---a program---to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.
http://arxiv.org/pdf/1705.04146
Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20170511
20171023
[]
1705.04304
41
A.3 POINTER SUPERVISION We run each input and abstract sequence through the Stanford named entity recognizer (NER) (Manning et al., 2014). For all named entity tokens in the abstract of type “PERSON”, “LOCATION”, “ORGANIZATION” or “MISC”, we find their first occurrence in the input sequence. We use this information to supervise p(u_t) (Equation 11) and α^e_{ti} (Equation 4) during training. Note that the NER tagger is only used to create the dataset and is no longer needed during testing, thus we’re not adding any dependencies to our model. We also add pointer supervision for out-of-vocabulary output tokens if they are present in the input. # B HYPERPARAMETERS AND IMPLEMENTATION DETAILS For ML training, we use the teacher forcing algorithm with the only difference that at each decoding step, we choose with a 25% probability the previously generated token instead of the ground-truth token as the decoder input token y_{t−1}, which reduces exposure bias (Venkatraman et al., 2015). We use γ = 0.9984 for the ML+RL loss function.
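Two pieces of the procedure above lend themselves to short sketches: building pointer targets from the NER output, and the 25% sampling of previously generated tokens during ML training. Both are illustrative approximations; the function and variable names are assumptions, and the actual model supervises attention and switch distributions rather than returning raw indices.

```python
import random

POINTER_TYPES = {"PERSON", "LOCATION", "ORGANIZATION", "MISC"}

def pointer_targets(input_tokens, abstract_tokens, abstract_tags, vocab):
    """For each abstract token, return the input position to copy from, or None."""
    targets = []
    for tok, tag in zip(abstract_tokens, abstract_tags):
        copy_from = None
        if tag in POINTER_TYPES or tok not in vocab:   # named entities and OOV tokens
            if tok in input_tokens:
                copy_from = input_tokens.index(tok)    # first occurrence in the input
        targets.append(copy_from)
    return targets

def next_decoder_input(ground_truth_tok, previously_generated_tok, p_sample=0.25):
    # Teacher forcing with a twist: 25% of the time, feed the model's own previous
    # prediction instead of the ground-truth token, reducing exposure bias.
    return previously_generated_tok if random.random() < p_sample else ground_truth_tok
```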
1705.04304#41
A Deep Reinforced Model for Abstractive Summarization
Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents and summaries however these models often include repetitive and incoherent phrases. We introduce a neural network model with a novel intra-attention that attends over the input and continuously generated output separately, and a new training method that combines standard supervised word prediction and reinforcement learning (RL). Models trained only with supervised learning often exhibit "exposure bias" - they assume ground truth is provided at each step during training. However, when standard word prediction is combined with the global sequence prediction training of RL the resulting summaries become more readable. We evaluate this model on the CNN/Daily Mail and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the CNN/Daily Mail dataset, an improvement over previous state-of-the-art models. Human evaluation also shows that our model produces higher quality summaries.
http://arxiv.org/pdf/1705.04304
Romain Paulus, Caiming Xiong, Richard Socher
cs.CL
null
null
cs.CL
20170511
20171113
[ { "id": "1603.08148" }, { "id": "1612.00563" }, { "id": "1608.02927" }, { "id": "1603.08887" }, { "id": "1511.06732" }, { "id": "1611.03382" }, { "id": "1603.08023" }, { "id": "1609.08144" }, { "id": "1601.06733" }, { "id": "1602.06023" }, { "id": "1608.05859" }, { "id": "1509.00685" } ]
1705.04146
42
# References Jacob Andreas, Andreas Vlachos, and Stephen Clark. 2013. Semantic parsing as machine translation. In Proc. of ACL. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv 1409.0473. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proc. of EMNLP. Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, Adrià Puigdomènech Badia, Karl Moritz Hermann, Yori Zwols, Georg Ostrovski, Adam Cain, Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu, and Demis Hassabis. 2016. Hybrid computing using a neural network with dynamic external memory. Nature 538(7626):471–476. Brent Harrison, Upol Ehsan, and Mark O. Riedl. A neural machine lan- CoRR abs/1702.07826.
1705.04146#42
Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems
Solving algebraic word problems requires executing a series of arithmetic operations---a program---to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.
http://arxiv.org/pdf/1705.04146
Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20170511
20171023
[]
1705.04304
42
We use two 200-dimensional LSTMs for the bidirectional encoder and one 400-dimensional LSTM for the decoder. We limit the input vocabulary size to 150,000 tokens, and the output vocabulary to 50,000 tokens by selecting the most frequent tokens in the training set. Input word embeddings are 100-dimensional and are initialized with GloVe (Pennington et al., 2014). We train all our models with Adam (Kingma & Ba, 2014) with a batch size of 50 and a learning rate α of 0.001 for ML training and 0.0001 for RL and ML+RL training. At test time, we use beam search of width 5 on all our models to generate our final predictions.
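For orientation, the stated sizes can be collected into a configuration and a bare PyTorch skeleton. This is only a sketch of the listed dimensions under the assumption of a standard embedding plus LSTM layout; it omits the paper's intra-attention, pointer, and RL components, and the class and dictionary names are illustrative.

```python
import torch.nn as nn

HPARAMS = {
    "input_vocab": 150_000, "output_vocab": 50_000, "emb_dim": 100,
    "enc_hidden": 200, "dec_hidden": 400,
    "batch_size": 50, "lr_ml": 1e-3, "lr_rl": 1e-4,
    "gamma_ml_rl": 0.9984, "beam_width": 5,
}

class SummarizerSkeleton(nn.Module):
    """Encoder/decoder shells with the dimensions quoted above; everything else omitted."""
    def __init__(self, hp=HPARAMS):
        super().__init__()
        self.embed_in = nn.Embedding(hp["input_vocab"], hp["emb_dim"])    # GloVe-initialized in practice
        self.embed_out = nn.Embedding(hp["output_vocab"], hp["emb_dim"])
        # bidirectional encoder: two 200-dimensional LSTMs (forward and backward)
        self.encoder = nn.LSTM(hp["emb_dim"], hp["enc_hidden"],
                               bidirectional=True, batch_first=True)
        # single 400-dimensional decoder LSTM
        self.decoder = nn.LSTM(hp["emb_dim"], hp["dec_hidden"], batch_first=True)
```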
1705.04304#42
A Deep Reinforced Model for Abstractive Summarization
Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents and summaries however these models often include repetitive and incoherent phrases. We introduce a neural network model with a novel intra-attention that attends over the input and continuously generated output separately, and a new training method that combines standard supervised word prediction and reinforcement learning (RL). Models trained only with supervised learning often exhibit "exposure bias" - they assume ground truth is provided at each step during training. However, when standard word prediction is combined with the global sequence prediction training of RL the resulting summaries become more readable. We evaluate this model on the CNN/Daily Mail and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the CNN/Daily Mail dataset, an improvement over previous state-of-the-art models. Human evaluation also shows that our model produces higher quality summaries.
http://arxiv.org/pdf/1705.04304
Romain Paulus, Caiming Xiong, Richard Socher
cs.CL
null
null
cs.CL
20170511
20171113
[ { "id": "1603.08148" }, { "id": "1612.00563" }, { "id": "1608.02927" }, { "id": "1603.08887" }, { "id": "1511.06732" }, { "id": "1611.03382" }, { "id": "1603.08023" }, { "id": "1609.08144" }, { "id": "1601.06733" }, { "id": "1602.06023" }, { "id": "1608.05859" }, { "id": "1509.00685" } ]
1705.04146
43
Brent Harrison, Upol Ehsan, and Mark O. Riedl. A neural machine lan- CoRR abs/1702.07826. Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele, and Trevor Darrell. 2016. Generating visual explanations. In Proc. ECCV. Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. 2014. Learning to solve arithmetic word problems with verb categorization. In Proc. of EMNLP. Bevan Keeley Jones, Mark Johnson, and Sharon Goldwater. 2012. Semantic parsing with bayesian tree transducers. In Proc. of ACL. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proc. of EMNLP. Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. In Proc. of ACL. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proc. of EMNLP.
1705.04146#43
Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems
Solving algebraic word problems requires executing a series of arithmetic operations---a program---to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.
http://arxiv.org/pdf/1705.04146
Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20170511
20171023
[]
1705.04146
44
Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proc. of EMNLP. Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, and Ni Lao. 2016. Neural symbolic machines: Learning semantic parsers on freebase with weak supervision. arXiv 1611.00020. Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tomás Kociský, Andrew Senior, Fumin Wang, and Phil Blunsom. 2016. Latent predictor networks for code generation. In Proc. of ACL. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. arXiv 1609.07843. Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever. 2016. Neural programmer: Inducing latent programs with gradient descent. In Proc. ICLR. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proc. of ACL.
1705.04146#44
Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems
Solving algebraic word problems requires executing a series of arithmetic operations---a program---to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.
http://arxiv.org/pdf/1705.04146
Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20170511
20171023
[]
1705.04146
45
Chris Quirk, Raymond Mooney, and Michel Galley. 2015. Language to code: Learning semantic parsers for if-this-then-that recipes. In Proc. of ACL. Scott E. Reed and Nando de Freitas. 2016. Neural programmer-interpreters. In Proc. of ICLR. Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. In Proc. of EMNLP. Subhro Roy, Shyam Upadhyay, and Dan Roth. 2016. Equation parsing: Mapping sentences to grounded equations. In Proc. of EMNLP. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. arXiv 1409.3215. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Proc. of NIPS.
1705.04146#45
Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems
Solving algebraic word problems requires executing a series of arithmetic operations---a program---to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.
http://arxiv.org/pdf/1705.04146
Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20170511
20171023
[]
1705.03551
0
arXiv:1705.03551v2 [cs.CL] 13 May 2017 # TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension # Mandar Joshi† Eunsol Choi† Daniel S. Weld† Luke Zettlemoyer†‡ † Paul G. Allen School of Computer Science & Engineering, Univ. of Washington, Seattle, WA {mandar90, eunsol, weld, lsz}@cs.washington.edu # ‡ Allen Institute for Artificial Intelligence, Seattle, WA [email protected] # Abstract
1705.03551#0
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
1
# ‡ Allen Institute for Artificial Intelligence, Seattle, WA [email protected] # Abstract We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study.1
1705.03551#1
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
2
Question: The Dodecanese Campaign of WWII that was an attempt by the Allied forces to capture islands in the Aegean Sea was the inspiration for which acclaimed 1961 commando film? Answer: The Guns of Navarone Excerpt: The Dodecanese Campaign of World War II was an attempt by Allied forces to capture the Italian-held Dodecanese islands in the Aegean Sea following the surrender of Italy in September 1943, and use them as bases against the German-controlled Balkans. The failed campaign, and in particular the Battle of Leros, inspired the 1957 novel The Guns of Navarone and the successful 1961 movie of the same name. Question: American Callan Pinckney’s eponymously named system became a best-selling (1980s-2000s) book/video franchise in what genre? Answer: Fitness Excerpt: Callan Pinckney was an American fitness professional. She achieved unprecedented success with her Callanetics exercises. Her 9 books all became international best-sellers and the video series that followed went on to sell over 6 million copies. Pinckney’s first video release “Callanetics: 10 Years Younger In 10 Hours” outsold every other fitness video in the US.
1705.03551#2
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
3
Figure 1: Question-answer pairs with sample excerpts from evidence documents from TriviaQA exhibiting lexical and syntactic variability, and requiring reasoning from multiple sentences. # Introduction Reading comprehension (RC) systems aim to answer any question that could be posed against the facts in some reference text. This goal is challenging for a number of reasons: (1) the questions can be complex (e.g. have highly compositional semantics), (2) finding the correct answer can require complex reasoning (e.g. combining facts from multiple sentences or background knowledge) and (3) individual facts can be difficult to recover from text (e.g. due to lexical and syntactic variation). Figure 1 shows examples of all these phenomena. This paper presents TriviaQA, a new reading comprehension dataset designed to simultaneously test all of these challenges. 1 Data and code available at http://nlp.cs.washington.edu/triviaqa/
1705.03551#3
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
4
Recently, significant progress has been made by introducing large new reading comprehension datasets that primarily focus on one of the challenges listed above, for example by crowdsourcing the gathering of question answer pairs (Rajpurkar et al., 2016) or using cloze-style sentences instead of questions (Hermann et al., 2015; Onishi et al., 2016) (see Table 1 for more examples). In general, system performance has improved rapidly as each resource is released. The best models of
| Dataset | Large scale | Freeform Answer | Well formed | Independent of Evidence | Varied Evidence |
| --- | --- | --- | --- | --- | --- |
| TriviaQA | ✓ | ✓ | ✓ | ✓ | ✓ |
| SQuAD (Rajpurkar et al., 2016) | ✓ | ✓ | ✓ | ✗ | ✗ |
| MS Marco (Nguyen et al., 2016) | ✓ | ✓ | ✗ | ✓ | ✓ |
| NewsQA (Trischler et al., 2016) | ✓ | ✓ | ✓ | ✗* | ✗ |
| WikiQA (Yang et al., 2016) | ✗ | ✗ | ✗ | ✓ | ✗ |
| TREC (Voorhees and Tice, 2000) | ✗ | ✓ | ✓ | ✓ | ✓ |
Table 1: Comparison of TriviaQA with existing QA datasets. Our dataset is unique in that it is naturally occurring, well-formed questions collected independent of the evidence. *NewsQA uses evidence articles indirectly by using only article summaries.
1705.03551#4
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
6
TriviaQA contains over 650K question-answer-evidence triples, derived by combining 95K trivia-enthusiast-authored question-answer pairs with, on average, six supporting evidence documents per question. To our knowledge, TriviaQA is the first dataset where full-sentence questions are authored organically (i.e. independently of an NLP task) and evidence documents are collected retrospectively from Wikipedia and the Web. This decoupling of question generation from evidence collection allows us to control for potential bias in question style or content, while offering organically generated questions from various topics. Designed to engage humans, TriviaQA presents a new challenge for RC models. They should be able to deal with large amounts of text from various sources such as news articles, encyclopedic entries and blog articles, and should handle inference over multiple sentences. For example, our dataset contains three times as many questions that require inference over multiple sentences as the recently released SQuAD (Rajpurkar et al., 2016) dataset. Section 4 presents a more detailed discussion of these challenges. on SQuAD, perhaps due to the challenges listed above. The baseline results also fall far short of human performance levels, 79.7%, suggesting significant room for future work. In summary, we make the following contributions.
1705.03551#6
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
7
• We collect over 650K question-answer-evidence triples, with questions originating from trivia enthusiasts independent of the evidence documents. A high percentage of the questions are challenging, with substantial syntactic and lexical variability and often requiring multi-sentence reasoning. The dataset and code are available at http://nlp.cs.washington.edu/triviaqa/, offering resources for training new reading-comprehension models.
• We present a manual analysis quantifying the quality of the dataset and the challenges involved in solving the task.
• We present experiments with two baseline methods, demonstrating that the TriviaQA tasks are not easily solved and are worthy of future study.
• In addition to the automatically gathered large-scale (but noisy) dataset, we present a clean, human-annotated subset of 1975 question-document-answer triples whose documents are certified to contain all facts required to answer the questions.
1705.03551#7
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
8
Finally, we present baseline experiments on the TriviaQA dataset, including a linear classifier inspired by work on CNN/Daily Mail and MCTest (Chen et al., 2016; Richardson et al., 2013) and a state-of-the-art neural network baseline (Seo et al., 2017). The neural model performs best, but only achieves 40% for TriviaQA in comparison to 68% # 2 Overview Problem Formulation We frame reading comprehension as the problem of answering a question q given the textual evidence provided by document set D. We assume access to a dataset of tuples {(q_i, a_i, D_i) | i = 1 . . . n} where a_i is a text string that defines the correct answer to question q_i. Following recent formulations (Rajpurkar et al., 2016), we further assume that a_i appears as a substring of some document in the set D_i.2 However, we differ by setting D_i as a set of documents, where previous work assumed a single document (Hermann et al., 2015) or even just a short paragraph (Rajpurkar et al., 2016).
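To make the formulation concrete, the following minimal Python sketch checks the substring assumption for a single (question, answer, document-set) tuple; the field names and the lowercasing step are illustrative assumptions, not the released data format.

```python
# Minimal sketch of the (question, answer, document-set) formulation.
# Field names and the normalisation are illustrative assumptions.

def answer_in_documents(answer: str, documents: list[str]) -> bool:
    """Distant-supervision check: does the answer string appear as a
    substring of at least one evidence document?"""
    needle = answer.lower()
    return any(needle in doc.lower() for doc in documents)

example = {
    "question": "Who wrote the novel The Eagle Has Landed?",
    "answer": "Jack Higgins",
    "documents": [
        "The Eagle Has Landed is a book by British writer Jack Higgins.",
    ],
}

assert answer_in_documents(example["answer"], example["documents"])
```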
1705.03551#8
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
9
Data and Distant Supervision Our evidence documents are automatically gathered from either Wikipedia or more general Web search results (details in Section 3). Because we gather evidence using an automated process, the documents are not guaranteed to contain all facts needed to answer the question. Therefore, they are best seen as a source of distant supervision, based on the assumption that the presence of the answer string in an evidence document implies that the document does answer the question.3 Section 4 shows that this assumption is valid over 75% of the time, making evidence documents a strong source of distant supervision for training machine reading systems. In particular, we consider two types of distant supervision, depending on the source of our documents. For web search results, we expect the documents that contain the correct answer a to be highly redundant, and therefore let each question-answer-document tuple be an independent data point (|D_i| = 1 for all i, and q_i = q_j for many i, j pairs). However, in Wikipedia we generally expect most facts to be stated only once, so we instead pool all of the evidence documents and never repeat the same question in the dataset (|D_i| = 1.8 on average and q_i ≠ q_j for all i, j). In other words, each question (paired with the union of all of its evidence documents) is a single data point.
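The two ways of forming distant-supervision data points can be sketched as follows; the helper names are illustrative and not the authors' released preprocessing code.

```python
# Illustrative sketch of forming distant-supervision examples for the
# two domains described above.

def web_examples(question, answer, web_docs):
    # Web: each question-document pair is an independent data point,
    # so the same question may appear many times.
    return [(question, answer, [doc]) for doc in web_docs]

def wiki_example(question, answer, wiki_docs):
    # Wikipedia: pool all evidence pages into a single data point,
    # so each question appears exactly once.
    return (question, answer, list(wiki_docs))
```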
1705.03551#9
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
10
These are far from the only assumptions that could be made in this distant supervision setup. For example, our data would also support multi-instance learning, which makes the at-least-once assumption, from relation extraction (Riedel et al., 2010; Hoffmann et al., 2011), or many other possibilities. However, the experiments in Section 6 show that these assumptions do present a strong signal for learning; we believe the data will fuel significant future study.
2 The data we will present in Section 3 would further support a task formulation where some documents D do not have the correct answer and the model must learn when to abstain. We leave this to future work.
3 An example context for the first question in Figure 1 where such an assumption fails would be the following evidence string: The Guns of Navarone is a 1961 British-American epic adventure war film directed by J. Lee Thompson.
| Total number of QA pairs | 95,956 |
| Number of unique answers | 40,478 |
| Number of evidence documents | 662,659 |
| Avg. question length (words) | 14 |
| Avg. document length (words) | 2,895 |
Table 2: TriviaQA: Dataset statistics.
1705.03551#10
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
11
# 3 Dataset Collection We collected a large dataset to support the reading comprehension task described above. First we gathered question-answer pairs from 14 trivia and quiz-league websites. We removed questions with fewer than four tokens, since these were generally either too simple or too vague. We then collected textual evidence to answer questions using two sources: documents from Web search results and Wikipedia articles for entities in the question. To collect the former, we posed each question4 as a search query to the Bing Web search API, and collected the top 50 search result URLs. To exclude the trivia websites, we removed from the results all pages from the trivia websites we scraped and any page whose URL included the keywords trivia, question, or answer. We then crawled the top 10 search result Web pages and pruned PDF and other ill-formatted documents. The search output includes a diverse set of documents such as blog articles, news articles, and encyclopedic entries.
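A rough sketch of the search-result filtering described above is given below; the placeholder site list and the exact URL-matching rules are assumptions for illustration only.

```python
# Sketch of the URL filtering applied to Bing search results: drop pages
# from the scraped trivia sites and any URL containing the listed keywords.
# TRIVIA_SITES is a placeholder; the real list covers the 14 scraped sites.

TRIVIA_SITES = {"example-trivia-site.com"}  # hypothetical placeholder
BANNED_KEYWORDS = ("trivia", "question", "answer")

def keep_url(url: str) -> bool:
    host = url.split("/")[2] if "://" in url else url
    if host in TRIVIA_SITES:
        return False
    return not any(kw in url.lower() for kw in BANNED_KEYWORDS)

urls = [
    "https://en.wikipedia.org/wiki/The_Guns_of_Navarone_(film)",
    "https://example-trivia-site.com/q/12345",
    "https://somesite.com/movie-trivia-answers",
]
filtered = [u for u in urls if keep_url(u)]  # keeps only the Wikipedia URL
```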
1705.03551#11
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
12
Wikipedia pages for entities mentioned in the question often provide useful information. We therefore collected an additional set of evidence documents by applying TAGME, an off-the-shelf entity linker (Ferragina and Scaiella, 2010), to find Wikipedia entities mentioned in the question, and added the corresponding pages as evidence documents. Finally, to support learning from distant supervision, we further filtered the evidence documents to exclude those missing the correct answer string and formed evidence document sets as described in Section 2. This left us with 95K question-answer pairs organized into (1) 650K training examples for the Web search results, each contain-
4 Note that we did not use the answer as a part of the search query to avoid biasing the results.
| Property | Example annotation | Statistics |
| --- | --- | --- |
| Avg. entities / question | Which politician won the Nobel Peace Prize in 2009? | 1.77 per question |
| Fine grained answer type | What fragrant essential oil is obtained from Damask Rose? | 73.5% of questions |
| Coarse grained answer type | Who won the Nobel Peace Prize in 2009? | 15.5% of questions |
| Time frame | What was photographed for the first time in October 1959? | 34% of questions |
| Comparisons | What is the appropriate name of the largest type of frog? | 9% of questions |
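The final evidence-filtering step (dropping documents that do not contain the answer string) can be sketched as follows; the function name and the simple lowercase substring match are illustrative assumptions.

```python
# Sketch of the distant-supervision filter: keep only evidence documents
# that contain the correct answer string.

def filter_evidence(answer: str, documents: list[str]) -> list[str]:
    needle = answer.lower()
    return [doc for doc in documents if needle in doc.lower()]
```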
1705.03551#12
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
13
Table 3: Properties of questions on 200 annotated examples show that a majority of TriviaQA questions contain multiple entities. The boldfaced words hint at the presence of the corresponding property.
Figure 2: Distribution of hierarchical WordNet synsets for entities appearing in the answer. The arc length is proportional to the number of questions containing that category.
| Type | Percentage |
| --- | --- |
| Numerical | 4.17 |
| Free text | 2.98 |
| Wikipedia title | 92.85 |
| Wikipedia title: Person | 32 |
| Wikipedia title: Location | 23 |
| Wikipedia title: Organization | 5 |
| Wikipedia title: Misc. | 40 |
Table 4: Distribution of answer types on 200 annotated examples.
1705.03551#13
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
14
For qualitative analysis, we sampled 200 question-answer pairs and manually analysed their properties. About 73.5% of these questions contain phrases that describe a fine grained category to which the answer belongs, while 15.5% hint at a coarse grained category (one of person, organization, location, and miscellaneous). Questions often involve reasoning over time frames, as well as making comparisons. A summary of the analysis is presented in Table 3. each containing a single (combined) evidence document, and (2) 78K examples for the Wikipedia reading comprehension domain, containing on average 1.8 evidence documents per example. Table 2 contains the dataset statistics. While not the focus of this paper, we have also released the full unfiltered dataset, which contains 110,495 QA pairs and 740K evidence documents, to support research in allied problems such as open domain and IR-style question answering.
1705.03551#14
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
15
Answers in TriviaQA belong to a diverse set of types. 92.85% of the answers are titles in Wikipedia,5 4.17% are numerical expressions (e.g., 9 kilometres), while the rest are open-ended noun and verb phrases. A coarse grained type analysis of answers that are Wikipedia entities is presented in Table 4. It should be noted that not all Wikipedia titles are named entities; many are common phrases such as barber or soup. Figure 2 shows diverse topics indicated by WordNet synsets of answer entities. # 4 Dataset Analysis A quantitative and qualitative analysis of TriviaQA shows it contains complex questions about a diverse set of entities, which are answerable using the evidence documents. Question and answer analysis TriviaQA questions, authored by trivia enthusiasts, cover various topics of people's interest. The average question length is 14 tokens, indicating that many questions are highly compositional. For qualitative analysis, we sampled 200 question-answer pairs and manually analysed their properties. Evidence analysis A qualitative analysis of TriviaQA shows that the evidence contains answers for 79.7% and 75.4% of questions from the Wikipedia and Web domains respectively. To analyse the quality of evidence and evaluate baselines, we asked a human annotator to answer 986 and 1345 (dev and test set) questions from the Wikipedia and Web domains respectively. Trivia
1705.03551#15
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
17
Reasoning: Lexical variation (synonym). Major correspondences between the question and the answer sentence are synonyms.
Frequency: 41% in Wiki documents, 39% in web documents.
Examples:
Q: What is solid CO2 commonly called? S: The frozen solid form of CO2, known as dry ice ...
Q: Who wrote the novel The Eagle Has Landed? S: The Eagle Has Landed is a book by British writer Jack Higgins.
Reasoning: Lexical variation and world knowledge. Major correspondences between the question and the document require common sense or external knowledge.
Frequency: 17% in Wiki documents, 17% in web documents.
Examples:
Q: What is the first name of Madame Bovary in Flaubert’s 1856 novel? S: Madame Bovary (1856) is the French writer Gustave Flaubert’s debut novel. The story focuses on a doctor’s wife, Emma Bovary.
Q: Who was the female member of the 1980’s pop music duo, Eurythmics? S: Eurythmics were a British music duo consisting of members Annie Lennox and David A. Stewart.
Reasoning: Syntactic variation. After the question is paraphrased into declarative form, its syntactic dependency structure does not match that of
1705.03551#17
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
18
Reasoning: Syntactic variation. After the question is paraphrased into declarative form, its syntactic dependency structure does not match that of the answer sentence.
Frequency: 69% in Wiki documents, 65% in web documents.
Examples:
Q: In which country did the Battle of El Alamein take place? S: The 1942 Battle of El Alamein in Egypt was actually two pivotal battles of World War II.
Q: Whom was Ronald Reagan referring to when he uttered the famous phrase evil empire in a 1983 speech? S: The phrase evil empire was first applied to the Soviet Union in 1983 by U.S. President Ronald Reagan.
Reasoning: Multiple sentences. Requires reasoning over multiple sentences.
Frequency: 40% in Wiki documents, 35% in web documents.
Examples:
Q: Name the Greek Mythological hero who killed the gorgon Medusa. S: Perseus asks god to aid him. So the goddess Athena and Hermes helps him out to kill Medusa.
Q: Who starred in and directed the 1993 film A Bronx Tale? S: Robert De Niro To Make His Broadway Directorial Debut With A Bronx Tale: The Musical. The actor starred and directed the 1993 film.
Reasoning: Lists, Table. Answer
1705.03551#18
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
20
Table 5: Analysis of reasoning used to answer TriviaQA questions shows that a high proportion of evidence sentence(s) exhibit syntactic and lexical variation with respect to questions. Answers are indicated by boldfaced text. questions contain multiple clues about the answer(s), not all of which are referenced in the documents. The annotator was asked to answer a question if the minimal set of facts (ignoring temporal references like this year) required to answer the question are present in the document, and abstain otherwise. For example, it is possible to answer the question, Who became president of the Mormons in 1844, organised settlement of the Mormons in Utah 1847 and founded Salt Lake City? using only the fact that Salt Lake City was founded by Brigham Young. We found that the accuracy (evaluated using the original answers) for the Wikipedia and Web domains was 79.6 and 75.3 respectively. We use the correctly answered questions (and documents) as verified sets for evaluation (Section 6).
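A minimal sketch of how the verified evaluation subsets described above could be assembled is given below; the data layout and the exact-match comparison with the annotator's answer are assumptions, not the authors' released code.

```python
# Sketch of building the human-verified evaluation subset: keep a question
# (and its documents) only if the annotator's answer matches the original
# gold answer. Names and the string comparison are illustrative.

def verified_subset(examples, annotator_answers):
    """examples: list of (question_id, question, documents, gold_answer);
    annotator_answers: dict mapping question_id -> annotator's answer (or None)."""
    verified = []
    for qid, question, documents, gold in examples:
        predicted = annotator_answers.get(qid)
        if predicted is not None and predicted.strip().lower() == gold.strip().lower():
            verified.append((qid, question, documents, gold))
    return verified
```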
1705.03551#20
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
21
Challenging problem A comparison of evidence with respect to the questions shows that a high proportion of questions require reasoning over multiple sentences. To compare our dataset against previous datasets, we classified 100 question-evidence pairs each from Wikipedia and the Web according to the form of reasoning required to answer them. We focus the analysis on Wikipedia since the analysis on Web documents is similar. Categories are not mutually exclusive: a single example can fall into multiple categories. A summary of the analysis is presented in Table 5. On comparing evidence sentences with their corresponding questions, we found that 69% of the questions had a different syntactic structure while 41% were lexically different. For 40% of the questions, we found that the information required to answer them was scattered over multiple sentences. Compared to SQuAD, over three times as many questions in TriviaQA require reasoning over multiple sentences. Moreover, 17% of the examples required some form of world knowledge. Question-evidence pairs in TriviaQA display more lexical and syntactic variance than those in SQuAD. This supports our earlier assertion that decoupling question generation from evidence collection results in a more challenging problem. # 5 Baseline methods
1705.03551#21
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
22
# 5 Baseline methods To quantify the difficulty level of the dataset for current methods, we present results on neural and other models. We used a random entity baseline and a simple classifier inspired by previous work (Wang et al., 2015; Chen et al., 2016), and compare these to BiDAF (Seo et al., 2017), one of the best performing models for the SQuAD dataset. # 5.1 Random entity baseline We developed the random entity baseline for the Wikipedia domain since the provided documents can be directly mapped to candidate answers. In this heuristic approach, we first construct a candidate answer set using the entities associated with the provided Wikipedia pages for a given question (on average 1.8 per question). We then randomly pick a candidate that does not occur in the question. If no such candidate exists, we pick any random candidate from the candidate set. # 5.2 Entity classifier We also frame the task as a ranking problem over candidate answers in the documents. More formally, given a question q_i, an answer a^+_i, and an evidence document D_i, we want to learn a scoring function score, such that score(a^+_i | q_i, D_i) > score(a^-_i | q_i, D_i)
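The random entity baseline of Section 5.1 can be sketched as follows; candidate entities are assumed to be the titles of the Wikipedia pages linked to the question, and the substring test for "occurs in the question" is an illustrative simplification.

```python
# Minimal sketch of the random entity baseline: prefer a candidate entity
# that does not already appear in the question text.
import random

def random_entity_baseline(question: str, candidate_entities: list[str]) -> str:
    not_in_question = [e for e in candidate_entities
                       if e.lower() not in question.lower()]
    pool = not_in_question if not_in_question else candidate_entities
    return random.choice(pool)
```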
1705.03551#22
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
23
score(a^+_i | q_i, D_i) > score(a^-_i | q_i, D_i), where a^-_i is any candidate other than the answer. The function score is learnt using LambdaMART (Wu et al., 2010),6 a boosted-tree-based ranking algorithm. This is similar to previous entity-centric classifiers for QA (Chen et al., 2016; Wang et al., 2015), and uses context and Wikipedia catalog based features. To construct the candidate answer set, we consider sentences that contain at least one word in common with the question. We then add every n-gram (n ∈ [1, 5]) that occurs in these sentences and is a title of some Wikipedia article.7
6 We use the RankLib implementation https://sourceforge.net/p/lemur/wiki/RankLib/
# 5.3 Neural model
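A sketch of the candidate-generation step for the entity classifier is shown below; whitespace tokenisation and the `wikipedia_titles` lookup set are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of candidate generation: take sentences sharing at least one word
# with the question, and keep every n-gram (n in [1, 5]) that is the title
# of some Wikipedia article.

def generate_candidates(question: str, sentences: list[str],
                        wikipedia_titles: set[str]) -> set[str]:
    q_words = set(question.lower().split())
    candidates = set()
    for sent in sentences:
        tokens = sent.split()
        if not q_words & {t.lower() for t in tokens}:
            continue  # sentence shares no word with the question
        for n in range(1, 6):
            for i in range(len(tokens) - n + 1):
                ngram = " ".join(tokens[i:i + n])
                if ngram in wikipedia_titles:
                    candidates.add(ngram)
    return candidates
```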
1705.03551#23
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
24
# 5.3 Neural model Recurrent neural network models (RNNs) (Hermann et al., 2015; Chen et al., 2016) have been very effective for reading comprehension. For our task, we modified the BiDAF model (Seo et al., 2017), which takes a sequence of context words as input and outputs the start and end positions of the predicted answer in the context. The model utilizes an RNN at the character level, token level, and phrase level to encode context and question, and uses an attention mechanism between question and context. Because its questions are authored independently of the evidence documents, TriviaQA does not contain the exact spans of the answers. We approximate the answer span by finding the first match of the answer string in the evidence document. Developed for a dataset where the evidence document is a single paragraph (average 122 words), the BiDAF model does not scale to long documents. To overcome this, we truncate the evidence document to the first 800 words.8
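The span-approximation and truncation used for the BiDAF baseline can be sketched as follows; whitespace tokenisation and token-level matching are simplifying assumptions.

```python
# Sketch of the BiDAF preprocessing described above: clip the document to
# its first 800 words and take the first occurrence of the answer string
# as the approximate gold span (inclusive token indices).

def make_span_example(document: str, answer: str, max_words: int = 800):
    tokens = document.split()[:max_words]
    answer_tokens = answer.split()
    for start in range(len(tokens) - len(answer_tokens) + 1):
        window = tokens[start:start + len(answer_tokens)]
        if [t.lower() for t in window] == [t.lower() for t in answer_tokens]:
            return tokens, (start, start + len(answer_tokens) - 1)
    return tokens, None  # answer not found in the clipped document
```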
1705.03551#24
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
25
When the data contains more than one evidence document, as in our Wikipedia domain, we predict for each document separately and aggregate the predictions by taking a sum of confidence scores. More specifically, when the model outputs a candidate answer A_i from n documents D_{i,1}, ..., D_{i,n} with confidences c_{i,1}, ..., c_{i,n}, the score of A_i is given by score(A_i) = Σ_k c_{i,k}. We select the candidate answer with the highest score. # 6 Experiments An evaluation of our baselines shows that both of our tasks are challenging, and that the TriviaQA dataset supports significant future work.
7 Using a named entity recognition system to generate candidate entities is not feasible as answers can be common nouns or phrases.
8 We found that splitting documents into smaller sub-documents degrades performance since a majority of sub-documents do not contain the answer.
| Split | Wikipedia Questions | Wikipedia Documents | Web Questions | Web Documents | Wikipedia verified Questions | Wikipedia verified Documents | Web verified Questions | Web verified Documents |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Train | 61,888 | 110,648 | 76,496 | 528,979 | - | - | - | - |
| Dev | 7,993 | 14,229 | 9,951 | 68,621 | 297 | 305 | 322 | 325 |
| Test | 7,701 | 13,661 | 9,509 | 65,059 | 584 | 592 | 733 | 769 |
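The cross-document aggregation score(A_i) = Σ_k c_{i,k} corresponds directly to the following sketch; the input format and names are illustrative.

```python
# Sketch of the aggregation step: each evidence document yields a candidate
# answer with a confidence; a candidate's final score is the sum of its
# confidences, and the highest-scoring candidate is returned.
from collections import defaultdict

def aggregate_predictions(per_doc_predictions):
    """per_doc_predictions: list of (candidate_answer, confidence) pairs,
    one per evidence document."""
    scores = defaultdict(float)
    for answer, confidence in per_doc_predictions:
        scores[answer] += confidence
    return max(scores, key=scores.get)

# e.g. aggregate_predictions([("Egypt", 0.7), ("Libya", 0.4), ("Egypt", 0.5)])
# returns "Egypt" (total score 1.2).
```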
1705.03551#25
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
26
Table 6: Data statistics for each task setup. The Wikipedia domain is evaluated over questions while the web domain is evaluated over documents. # 6.1 Evaluation Metrics We use the same evaluation metrics as SQuAD: exact match (EM) and F1 over words in the answer(s). For questions that have Numerical and FreeForm answers, we use a single given answer as ground truth. For questions that have Wikipedia entities as answers, we use Wikipedia aliases as valid answers along with the given answer. Since Wikipedia and the web are vastly different in terms of style and content, we report performance on each source separately. When using Wikipedia, we evaluate at the question level since facts needed to answer a question are generally stated only once. On the other hand, due to high information redundancy in web documents (around 6 documents per question), we report document-level accuracy and F1 when evaluating on web documents. Lastly, in addition to distant supervision, we also report evaluation on the clean dev and test question collections verified by a human annotator (Section 4). # 6.2 Experimental Setup
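The exact match and word-level F1 metrics, maximised over the given answer and its Wikipedia aliases, can be sketched as follows; the text normalisation here is a simplified assumption relative to the official SQuAD evaluation script.

```python
# Sketch of EM and word-level F1, taking the maximum over all gold answers
# (the given answer plus any Wikipedia aliases).
from collections import Counter

def normalize(text: str) -> list[str]:
    return text.lower().split()  # simplified normalisation

def f1(prediction: str, gold: str) -> float:
    p, g = normalize(prediction), normalize(gold)
    common = Counter(p) & Counter(g)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)

def score(prediction: str, gold_answers: list[str]) -> tuple[float, float]:
    em = max(float(normalize(prediction) == normalize(g)) for g in gold_answers)
    return em, max(f1(prediction, g) for g in gold_answers)
```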
1705.03551#26
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
27
# 6.2 Experimental Setup We randomly partition QA pairs in the dataset into train (80%), development (10%), and test (10%) sets. In addition to distant supervision evaluation, we also evaluate baselines on the verified subsets (see Section 4) of the dev and test partitions. Table 6 contains the number of questions and documents for each task. We trained the entity classifier on a random sample of 50,000 questions from the training set. For training BiDAF on the web domain, we first randomly sampled 80,000 documents. For both domains, we used only those (training) documents where the answer appears in the first 400 tokens to keep training time manageable. Designing scalable techniques that can use the entirety of the data is an interesting direction for future work. # 6.3 Results The performance of the proposed models is summarized in Table 7. The poor performance of the random entity baseline shows that the task is not already solved by information retrieval. For both Wikipedia and web documents, BiDAF (40%) outperforms the classifier (23%). The oracle score is the upper bound on the exact match accuracy.9 All models lag significantly behind the human baseline of 79.7% on the Wikipedia domain and 75.4% on the web domain.
1705.03551#27
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
28
We analyse the performance of BiDAF on the development set, using Wikipedia as the evidence source, by question length and answer type. The accuracy of the system steadily decreased as the length of the questions increased, from 50% for questions with 5 or fewer words to 32% for those with 20 or more words. This suggests that longer compositional questions are harder for current methods. # 6.4 Error analysis Our qualitative error analysis reveals that compositionality in questions, lexical variation, and the low signal-to-noise ratio in (full) documents are still challenges for current methods. We randomly sampled 100 incorrect BiDAF predictions from the development set and used Wikipedia evidence documents for manual analysis. We found that 19 examples lacked evidence in any of the provided documents, 3 had incorrect ground truth, and 3 were valid answers that were not included in the answer key. Furthermore, 12 predictions were partially correct (Napoleonic vs Napoleonic Wars). This seems to be consistent with the human performance of 79.7%. For the rest, we classified each example into one or more of the categories listed in Table 8. Distractor entities refers to the presence of entities similar to the ground truth. E.g., for the question, Rebecca Front plays Detective Chief Superintendent Innocent in which TV series?, the evidence describes all roles played by Rebecca Front.
1705.03551#28
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
29
The first two rows suggest that long and noisy documents make the question answering task more difficult, as compared for example to the short passages in SQuAD. Furthermore, a high proportion of errors are caused by paraphrasing, and the answer is sometimes stated indirectly.
9 A question q is considered answerable for the oracle score if the correct answer is found in the evidence D or, in the case of the classifier, is part of the candidate set. Since we truncate documents, the upper bound is not 100%.
1705.03551#29
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
30
                          Distant Supervision                                 Verified
                   Dev                     Test                    Dev                     Test
Domain  Method     EM     F1     Oracle    EM     F1     Oracle    EM     F1     Oracle    EM     F1     Oracle
Wiki    Random     12.72  22.91  16.30     12.74  22.35  16.28     14.81  23.31  19.53     15.41  25.44  19.19
        Classifier 23.42  27.68  71.41     22.45  26.52  71.67     24.91  29.43  80.13     27.23  31.37  77.74
        BiDAF      40.26  45.74  82.55     40.32  45.91  82.82     47.47  53.70  90.23     44.86  50.71  86.81
Web     Classifier 24.64  29.08  66.78     24.00  28.38  66.35     27.38  31.91  77.23     30.17  34.67  76.72
        BiDAF      41.08  47.40  82.93     40.74  47.05  82.95     51.38  55.47  90.46     49.54  55.80  89.99

Table 7: Performance of all systems on TriviaQA using distantly supervised evaluation. The best performing system is indicated in bold.
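To make the EM and F1 columns above concrete, the sketch below shows how a single prediction is typically scored against the set of acceptable answer aliases for a question, keeping the best match. It is a minimal illustration assuming SQuAD-style answer normalization (lowercasing, stripping punctuation and articles); the function names and example aliases are illustrative, not the authors' released evaluation code.

```python
from collections import Counter
import re
import string

def normalize(text):
    """Lowercase, strip punctuation and articles, collapse whitespace (SQuAD-style)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    return float(normalize(prediction) == normalize(gold))

def f1(prediction, gold):
    pred_toks, gold_toks = normalize(prediction).split(), normalize(gold).split()
    common = Counter(pred_toks) & Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

def score(prediction, gold_aliases):
    """Score a prediction against every gold alias, keeping the best match."""
    return (max(exact_match(prediction, g) for g in gold_aliases),
            max(f1(prediction, g) for g in gold_aliases))

# A partially correct prediction (as in the error analysis) gets partial F1 credit.
print(score("Napoleonic", ["Napoleonic Wars", "The Napoleonic Wars"]))  # (0.0, ~0.67)
```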
1705.03551#30
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
31
Category                               Proportion
Insufficient evidence                  19
Prediction from incorrect document(s)   7
Answer not in clipped document         15
Paraphrasing                           29
Distractor entities                    11
Reasoning over multiple sentences      18

Table 8: Qualitative error analysis of BiDAF on Wikipedia evidence documents.

example, the evidence for the question What was Truman Capote’s last name before he was adopted by his stepfather? consists of the following text: Truman García Capote, born Truman Streckfus Persons, was an American ... In 1933, he moved to New York City to live with his mother and her second husband, Joseph Capote, who adopted him as his stepson and renamed him Truman García Capote.
1705.03551#31
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
32
Datasets with natural language questions include MCTest (Richardson et al., 2013), SQuAD (Rajpurkar et al., 2016), and NewsQA (Trischler et al., 2016). MCTest is limited in scale with only 2640 multiple choice questions. SQuAD contains 100K crowdsourced questions and answers paired with short Wikipedia passages. NewsQA uses crowdsourcing to create questions solely from news article summaries in order to control potential bias. The crucial difference between SQuAD/NewsQA and TriviaQA is that TriviaQA questions have not been crowdsourced from pre-selected passages. Additionally, our evidence set consists of web documents, while SQuAD and NewsQA are limited to Wikipedia and news articles respectively. Other recently released datasets include (Lai et al., 2017). # 7.2 Open domain question answering # 7 Related work Recent interest in question answering has resulted in the creation of several datasets. However, they are either limited in scale or suffer from biases stemming from their construction process. We group existing datasets according to their associated tasks, and compare them against TriviaQA. The analysis is summarized in Table 1. # 7.1 Reading comprehension
1705.03551#32
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
33
# 7.1 Reading comprehension Reading comprehension tasks aim to test the ability of a system to understand a document using questions based upon its contents. Researchers have constructed cloze-style datasets (Hill et al., 2015; Hermann et al., 2015; Paperno et al., 2016; Onishi et al., 2016), where the task is to predict missing words, often entities, in a document. Cloze-style datasets, while easier to construct automatically at large scale, do not contain natural language questions. The recently released MS Marco dataset (Nguyen et al., 2016) also contains independently authored questions and documents drawn from the search results. However, the questions in the dataset are derived from search logs and the answers are crowdsourced. On the other hand, trivia enthusiasts provided both questions and answers for our dataset. Knowledge base question answering involves converting natural language questions to logical forms that can be executed over a KB. Proposed datasets (Cai and Yates, 2013; Berant et al., 2013; Bordes et al., 2015) are either limited in scale or in the complexity of questions, and can only retrieve facts covered by the KB.
1705.03551#33
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
34
A standard task for open-domain IR-style QA is the annual TREC competition (Voorhees and Tice, 2000), which contains questions from various domains but is limited in size. Many advances from the TREC competitions were used in the IBM Watson system for Jeopardy! (Ferrucci et al., 2010). Other datasets include SearchQA (Dunn et al., 2017), where Jeopardy! questions are paired with search engine snippets, the WikiQA dataset (Yang et al., 2015) for answer sentence selection, and the Chinese language WebQA dataset (Li et al., 2016), which focuses on the task of answer phrase extraction. TriviaQA contains examples that could be used for both stages of the pipeline, although our focus in this paper is instead on using the data for reading comprehension where the answer is always present.
1705.03551#34
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
35
Other recent approaches attempt to combine structured high-precision KBs with semi-structured information sources like OpenIE triples (Fader et al., 2014), HTML tables (Pasupat and Liang, 2015), and large (and noisy) corpora (Sawant and Chakrabarti, 2013; Joshi et al., 2014; Xu et al., 2015). TriviaQA, which has Wikipedia entities as answers, makes it possible to leverage structured KBs like Freebase, which we leave to future work. Furthermore, about 7% of the TriviaQA questions have answers in HTML tables and lists, which could be used to augment these existing resources.
1705.03551#35
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
36
Trivia questions from quiz bowl have been previously used in other question answering tasks (Boyd-Graber et al., 2012). Quiz bowl questions are paragraph length and pyramidal.10 A number of different aspects of this problem have been carefully studied, typically using classifiers over a pre-defined set of answers (Iyyer et al., 2014) and studying incremental answering to answer as quickly as possible (Boyd-Graber et al., 2012) or using reinforcement learning to model opponent behavior (He et al., 2016). These competitive challenges are not present in our single-sentence question setting. Developing joint models for multi-sentence reasoning for questions and answer documents is an important area for future work. # 8 Conclusion and Future Work
1705.03551#36
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
37
# 8 Conclusion and Future Work We present TriviaQA, a new dataset of 650K question-document-evidence triples. To our knowledge, TriviaQA is the first dataset where questions are authored by trivia enthusiasts, independently of the evidence documents. The evidence documents come from two domains – Web search results and Wikipedia pages – with highly differing levels of information redundancy. Results from current state-of-the-art baselines indicate that TriviaQA is a challenging testbed that deserves significant future study. While not the focus of this paper, TriviaQA also provides a benchmark for a variety of other tasks such as IR-style question answering, QA over structured KBs, and joint modeling of KBs and text, with much more data than previously available. 10 Pyramidal questions consist of a series of clues about the answer arranged in order from most to least difficult. # Acknowledgments
1705.03551#37
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
38
# Acknowledgments This work was supported by DARPA contract FA8750-13-2-0019, the WRF/Cable Professorship, gifts from Google and Tencent, and an Allen Distinguished Investigator Award. The authors would like to thank Minjoon Seo for the BiDAF code, and Noah Smith, Srinivasan Iyer, Mark Yatskar, Nicholas FitzGerald, Antoine Bosselut, Dallas Card, and anonymous reviewers for helpful comments. # References Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, 18-21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1533–1544. http://aclweb.org/anthology/D/D13/D13-1160.pdf. Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. CoRR abs/1506.02075. https://arxiv.org/abs/1506.02075.
1705.03551#38
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
39
Jordan Boyd-Graber, Brianna Satinoff, He He, and Hal Daumé III. 2012. Besting the quiz master: Crowdsourcing incremental classification games. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics, Jeju Island, Korea, pages 1290–1301. http://www.aclweb.org/anthology/D12-1118. Qingqing Cai and Alexander Yates. 2013. Large-scale semantic parsing via schema matching and lexicon extension. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Sofia, Bulgaria, pages 423–433. http://www.aclweb.org/anthology/P13-1042. Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the CNN/Daily Mail reading comprehension task. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 2358–2367. http://www.aclweb.org/anthology/P16-1223.
1705.03551#39
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
40
Matthew Dunn, Levent Sagun, Mike Higgins, Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. SearchQA: A new Q&A dataset augmented with context from a search engine. CoRR abs/1704.05179. https://arxiv.org/abs/1704.05179. Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2014. Open question answering over curated and extracted knowledge bases. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, New York, NY, USA, KDD '14, pages 1156–1165. https://doi.org/10.1145/2623330.2623677. Paolo Ferragina and Ugo Scaiella. 2010. TAGME: On-the-fly annotation of short text fragments (by Wikipedia entities). In Proceedings of the 19th ACM International Conference on Information and Knowledge Management. ACM, New York, NY, USA, CIKM '10, pages 1625–1628. https://doi.org/10.1145/1871437.1871689.
1705.03551#40
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
41
David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A. Kalyanpur, Adam Lally, J. William Murdock, Eric Nyberg, John Prager, Nico Schlaefer, and Chris Welty. 2010. Building Watson: An overview of the DeepQA project. AI Magazine 31(3):59–79. He He, Jordan Boyd-Graber, Kevin Kwok, and Hal Daumé III. 2016. Opponent modeling in deep reinforcement learning. In Maria Florina Balcan and Kilian Q. Weinberger, editors, Proceedings of The 33rd International Conference on Machine Learning. PMLR, New York, New York, USA, volume 48 of Proceedings of Machine Learning Research, pages 1804–1813. http://proceedings.mlr.press/v48/he16.html. Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems. http://arxiv.org/abs/1506.03340.
1705.03551#41
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
42
Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The Goldilocks principle: Reading children's books with explicit memory representations. CoRR abs/1511.02301. https://arxiv.org/abs/1511.02301. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S. Weld. 2011. Knowledge-based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Portland, Oregon, USA, pages 541–550. http://www.aclweb.org/anthology/P11-1055. Mohit Iyyer, Jordan Boyd-Graber, Leonardo Claudino, Richard Socher, and Hal Daumé III. 2014. A neural network for factoid question answering over paragraphs. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, pages 633–644. http://www.aclweb.org/anthology/D14-1070.
1705.03551#42
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
43
Mandar Joshi, Uma Sawant, and Soumen Chakrabarti. 2014. Knowledge graph and corpus driven segmentation and answer inference for telegraphic entity-seeking queries. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, pages 1104–1114. http://www.aclweb.org/anthology/D14-1117. Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale reading comprehension dataset from examinations. CoRR abs/1704.04683. https://arxiv.org/abs/1704.04683. Peng Li, Wei Li, Zhengyan He, Xuguang Wang, Ying Cao, Jie Zhou, and Wei Xu. 2016. Dataset and neural recurrent sequence labeling model for open-domain factoid question answering. CoRR abs/1607.06275. https://arxiv.org/abs/1607.06275.
1705.03551#43
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
44
Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. In Workshop in Advances in Neural Information Processing Systems. https://arxiv.org/pdf/1611.09268.pdf. Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. 2016. Who did what: A large-scale person-centered cloze dataset. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 2230–2235. https://aclweb.org/anthology/D16-1241.
1705.03551#44
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
45
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Ngoc Quan Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernandez. 2016. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1525–1534. http://www.aclweb.org/anthology/P16-1144. Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 1470–1480. http://aclweb.org/anthology/P/P15/P15-1142.pdf.
1705.03551#45
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
46
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 2383–2392. https://aclweb.org/anthology/D16-1264. Matthew Richardson, Christopher J.C. Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Seattle, Washington, USA, pages 193–203. http://www.aclweb.org/anthology/D13-1020. Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Proceedings of the 2010 European Conference on Machine Learning and Knowledge Discovery in Databases: Part III. Springer-Verlag, Berlin, Heidelberg, ECML PKDD'10, pages 148–163. http://dl.acm.org/citation.cfm?id=1889788.1889799.
1705.03551#46
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
47
Uma Sawant and Soumen Chakrabarti. 2013. Learning joint query interpretation and response ranking. In Proceedings of the 22nd International Conference on World Wide Web. ACM, New York, NY, USA, WWW '13, pages 1099–1110. https://doi.org/10.1145/2488388.2488484. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In Proceedings of the International Conference on Learning Representations (ICLR). https://arxiv.org/abs/1611.01603. Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2016. NewsQA: A machine comprehension dataset. CoRR abs/1611.09830. Ellen M. Voorhees and Dawn M. Tice. 2000. Building a question answering test collection. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, New York, NY, USA, SIGIR '00, pages 200–207. https://doi.org/10.1145/345508.345577.
1705.03551#47
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
48
Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. 2015. Machine comprehension with syntax, frames, and semantics. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). Association for Computational Linguistics, Beijing, China, pages 700–706. http://www.aclweb.org/anthology/P15-2115. Qiang Wu, Christopher J. Burges, Krysta M. Svore, and Jianfeng Gao. 2010. Adapting boosting for information retrieval measures. Inf. Retr. 13(3):254–270. https://doi.org/10.1007/s10791-009-9112-1. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the International Conference on Machine Learning. https://arxiv.org/abs/1502.03044.
1705.03551#48
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03551
49
Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. WikiQA: A challenge dataset for open-domain question answering. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 2013–2018. http://aclweb.org/anthology/D15-1237. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California, pages 1480–1489. http://www.aclweb.org/anthology/N16-1174.
1705.03551#49
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
http://arxiv.org/pdf/1705.03551
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
cs.CL
Added references, fixed typos, minor baseline update
null
cs.CL
20170509
20170513
[]
1705.03122
0
# Convolutional Sequence to Sequence Learning Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin (Facebook AI Research) # Abstract The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks.1 Compared to recurrent models, computations over all elements can be fully parallelized during training to better exploit the GPU hardware and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU. # 1. Introduction
1705.03122#0
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
1
# 1. Introduction Sequence to sequence learning has been successful in many tasks such as machine translation, speech recognition (Sutskever et al., 2014; Chorowski et al., 2015) and text summarization (Rush et al., 2015; Nallapati et al., 2016; Shen et al., 2016) amongst others. The dominant approach to date encodes the input sequence with a series of bi-directional recurrent neural networks (RNN) and generates a variable length output with another set of decoder RNNs, both of which interface via a soft-attention mechanism (Bahdanau et al., 2014; Luong et al., 2015). In machine translation, this architecture has been demonstrated to outperform traditional phrase-based models by large margins (Sennrich et al., 2016b; Zhou et al., 2016; Wu et al., 2016; §2). 1 The source code and models are available at https://github.com/facebookresearch/fairseq.
1705.03122#1
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
2
Convolutional neural networks are less common for sequence modeling, despite several advantages (Waibel et al., 1989; LeCun & Bengio, 1995). Compared to recurrent layers, convolutions create representations for fixed size contexts, however, the effective context size of the network can easily be made larger by stacking several layers on top of each other. This allows us to precisely control the maximum length of dependencies to be modeled. Convolutional networks do not depend on the computations of the previous time step and therefore allow parallelization over every element in a sequence. This contrasts with RNNs which maintain a hidden state of the entire past that prevents parallel computation within a sequence.
1705.03122#2
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
3
Multi-layer convolutional neural networks create hierarchical representations over the input sequence in which nearby input elements interact at lower layers while distant elements interact at higher layers. Hierarchical structure provides a shorter path to capture long-range dependencies compared to the chain structure modeled by recurrent networks, e.g. we can obtain a feature representation capturing relationships within a window of n words by applying only O(n/k) convolutional operations for kernels of width k, compared to a linear number O(n) for recurrent neural networks. Inputs to a convolutional network are fed through a constant number of kernels and non-linearities, whereas recurrent networks apply up to n operations and non-linearities to the first word and only a single set of operations to the last word. Fixing the number of non-linearities applied to the inputs also eases learning.
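The O(n/k) versus O(n) contrast above can be checked with a few lines of arithmetic. The sketch below is a minimal illustration, not code from the paper: it counts how many stacked, non-dilated convolution layers of kernel width k are needed before one output position covers a window of n input tokens, assuming each layer widens the receptive field by k - 1 positions.

```python
def conv_layers_to_cover(n, k):
    """Number of stacked (non-dilated) conv layers of kernel width k needed
    so that one output position sees a window of at least n input tokens.
    Each additional layer grows the receptive field by k - 1 tokens."""
    layers, receptive_field = 0, 1
    while receptive_field < n:
        layers += 1
        receptive_field += k - 1
    return layers

# A window of n = 25 tokens with kernel width k = 5 needs 6 layers
# (receptive field 1 + 6 * 4 = 25): roughly O(n / k) stacked operations,
# versus n = 25 sequential steps for an RNN reading the same window.
print(conv_layers_to_cover(25, 5))  # -> 6
```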
1705.03122#3
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
4
Recent work has applied convolutional neural networks to sequence modeling such as Bradbury et al. (2016) who introduce recurrent pooling between a succession of convolutional layers or Kalchbrenner et al. (2016) who tackle neural translation without attention. However, none of these approaches has demonstrated improvements over state-of-the-art results on large benchmark datasets. Gated convolutions have been previously explored for machine translation by Meng et al. (2015) but their evaluation was restricted to a small dataset and the model was used in tandem with a traditional count-based model. Architectures which are partially convolutional have shown strong performance on larger tasks but their decoder is still recurrent (Gehring et al., 2016). state h_i and the last prediction y_i; the result is normalized to be a distribution over input elements. In this paper we propose an architecture for sequence to sequence modeling that is entirely convolutional. Our model is equipped with gated linear units (Dauphin et al., 2016) and residual connections (He et al., 2015a). We also use attention in every decoder layer and demonstrate that each attention layer only adds a negligible amount of overhead. The combination of these choices enables us to tackle large scale problems (§3).
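Since this section leans on gated linear units (Dauphin et al., 2016), a small sketch may help. The NumPy version below follows the usual formulation GLU([A; B]) = A * sigmoid(B), where a convolution emits 2d channels per position and the second half gates the first; the array shapes and function names are assumptions made for the example, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def glu(conv_output):
    """Gated linear unit (Dauphin et al., 2016): the convolution produces
    2*d channels per position; one half holds the candidate values A, the
    other half the gates B, and the output is A * sigmoid(B)."""
    a, b = np.split(conv_output, 2, axis=-1)
    return a * sigmoid(b)

# Toy example: 7 positions, a convolution emitting 2 * 4 = 8 channels each.
conv_output = np.random.randn(7, 8)
hidden = glu(conv_output)   # shape (7, 4)
print(hidden.shape)
```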
1705.03122#4
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
5
We evaluate our approach on several large datasets for machine translation as well as summarization and compare to the current best architectures reported in the literature. On WMT'16 English-Romanian translation we achieve a new state of the art, outperforming the previous best result by 1.9 BLEU. On WMT'14 English-German we outperform the strong LSTM setup of Wu et al. (2016) by 0.5 BLEU and on WMT'14 English-French we outperform the likelihood trained system of Wu et al. (2016) by 1.6 BLEU. Furthermore, our model can translate unseen sentences at an order of magnitude faster speed than Wu et al. (2016) on GPU and CPU hardware (§4, §5).
1705.03122#5
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
6
Popular choices for recurrent networks in encoder-decoder models are long short-term memory networks (LSTM; Hochreiter & Schmidhuber, 1997) and gated recurrent units (GRU; Cho et al., 2014). Both extend Elman RNNs (Elman, 1990) with a gating mechanism that allows the memorization of information from previous time steps in order to model long-term dependencies. Most recent approaches also rely on bi-directional encoders to build representations of both past and future contexts (Bahdanau et al., 2014; Zhou et al., 2016; Wu et al., 2016). Models with many layers often rely on shortcut or residual connections (He et al., 2015a; Zhou et al., 2016; Wu et al., 2016).

# 3. A Convolutional Architecture

Next we introduce a fully convolutional architecture for sequence to sequence modeling. Instead of relying on RNNs to compute intermediate encoder states z and decoder states h we use convolutional neural networks (CNN).

# 3.1. Position Embeddings

# 2. Recurrent Sequence to Sequence Learning
1705.03122#6
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
7
# 3.1. Position Embeddings

# 2. Recurrent Sequence to Sequence Learning

Sequence to sequence modeling has been synonymous with recurrent neural network based encoder-decoder architectures (Sutskever et al., 2014; Bahdanau et al., 2014). The encoder RNN processes an input sequence x = (x_1, ..., x_m) of m elements and returns state representations z = (z_1, ..., z_m). The decoder RNN takes z and generates the output sequence y = (y_1, ..., y_n) left to right, one element at a time. To generate output y_{i+1}, the decoder computes a new hidden state h_{i+1} based on the previous state h_i, an embedding g_i of the previous target language word y_i, as well as a conditional input c_i derived from the encoder output z. Based on this generic formulation, various encoder-decoder architectures have been proposed, which differ mainly in the conditional input and the type of RNN.
1705.03122#7
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
8
First, we embed input elements x = (x_1, ..., x_m) in distributional space as w = (w_1, ..., w_m), where w_j ∈ R^f is a column in an embedding matrix D ∈ R^{V×f}. We also equip our model with a sense of order by embedding the absolute position of input elements p = (p_1, ..., p_m) where p_j ∈ R^f. Both are combined to obtain input element representations e = (w_1 + p_1, ..., w_m + p_m). We proceed similarly for output elements that were already generated by the decoder network to yield output element representations that are being fed back into the decoder network g = (g_1, ..., g_n). Position embeddings are useful in our architecture since they give our model a sense of which portion of the sequence in the input or output it is currently dealing with (§5.4).

# 3.2. Convolutional Block Structure
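To make the combination of word and absolute-position embeddings concrete, here is a minimal PyTorch sketch; the class name, vocabulary size, and dimensions are illustrative assumptions, not part of the paper's released code.

```python
import torch
import torch.nn as nn

class PositionalWordEmbedding(nn.Module):
    def __init__(self, vocab_size, max_positions, dim):
        super().__init__()
        self.word = nn.Embedding(vocab_size, dim)        # rows of the embedding matrix D
        self.position = nn.Embedding(max_positions, dim)  # absolute position embeddings p_j

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer ids
        positions = torch.arange(tokens.size(1), device=tokens.device)
        positions = positions.unsqueeze(0).expand_as(tokens)
        # e_j = w_j + p_j
        return self.word(tokens) + self.position(positions)

# usage: a batch of 2 sequences of length 7 mapped to 512-dimensional representations
emb = PositionalWordEmbedding(vocab_size=40000, max_positions=1024, dim=512)
e = emb(torch.randint(0, 40000, (2, 7)))   # shape (2, 7, 512)
```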
1705.03122#8
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
9
# 3.2. Convolutional Block Structure

Models without attention consider only the final encoder state z_m by setting c_i = z_m for all i (Cho et al., 2014), or simply initialize the first decoder state with z_m (Sutskever et al., 2014), in which case c_i is not used. Architectures with attention (Bahdanau et al., 2014; Luong et al., 2015) compute c_i as a weighted sum of (z_1, ..., z_m) at each time step. The weights of the sum are referred to as attention scores and allow the network to focus on different parts of the input sequence as it generates the output sequences. Attention scores are computed by essentially comparing each encoder state z_j to a combination of the previous decoder
1705.03122#9
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
10
Both encoder and decoder networks share a simple block structure that computes intermediate states based on a fixed number of input elements. We denote the output of the l-th block as h^l = (h^l_1, ..., h^l_n) for the decoder network, and z^l = (z^l_1, ..., z^l_m) for the encoder network; we refer to blocks and layers interchangeably. Each block contains a one dimensional convolution followed by a non-linearity. For a decoder network with a single block and kernel width k, each resulting state h^l_i contains information over k input elements. Stacking several blocks on top of each other increases the number of input elements represented in a state. For instance, stacking 6 blocks with k = 5 results in an input field of 25 elements, i.e. each output depends on 25 inputs. Non-linearities allow the networks to exploit the full input field, or to focus on fewer elements if needed.
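As a quick sanity check on the receptive-field arithmetic in the example above, a small helper (the function name is hypothetical) confirms that 6 stacked blocks with kernel width 5 cover 25 input elements.

```python
def receptive_field(num_blocks: int, kernel_width: int) -> int:
    """Number of input elements covered by one output state of a stack of blocks."""
    return num_blocks * (kernel_width - 1) + 1

assert receptive_field(6, 5) == 25  # matches the example in the text
```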
1705.03122#10
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
11
inputs. Non-linearities allow the networks to exploit the full input field, or to focus on fewer elements if needed. Each convolution kernel is parameterized as W ∈ R^{2d×kd}, b_w ∈ R^{2d} and takes as input X ∈ R^{k×d}, which is a concatenation of k input elements embedded in d dimensions, and maps them to a single output element Y ∈ R^{2d} that has twice the dimensionality of the input elements; subsequent layers operate over the k output elements of the previous layer. We choose gated linear units (GLU; Dauphin et al., 2016) as non-linearity which implement a simple gating mechanism over the output of the convolution Y = [A B] ∈ R^{2d}:

ν([A B]) = A ⊗ σ(B)

where A, B ∈ R^d are the inputs to the non-linearity, ⊗ is the point-wise multiplication and the output ν([A B]) ∈ R^d is half the size of Y. The gates σ(B) control which inputs A of the current context are relevant. A similar non-linearity has been introduced in Oord et al. (2016b) who apply tanh to A, but Dauphin et al. (2016) show that GLUs perform better in the context of language modelling.
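A minimal sketch of a convolutional block with the GLU non-linearity ν([A B]) = A ⊗ σ(B), assuming PyTorch; the class name and shapes are illustrative and this is not the authors' Torch implementation.

```python
import torch
import torch.nn as nn

class GLUConv1d(nn.Module):
    def __init__(self, dim, kernel_width):
        super().__init__()
        # maps k input elements of size d to one output of size 2d (i.e. Y = [A B])
        self.conv = nn.Conv1d(dim, 2 * dim, kernel_width)

    def forward(self, x):
        # x: (batch, dim, length)
        a, b = self.conv(x).chunk(2, dim=1)   # split Y into A and B along channels
        return a * torch.sigmoid(b)           # gates sigma(B) select relevant inputs A

# usage: length shrinks by kernel_width - 1 without padding
y = GLUConv1d(dim=512, kernel_width=3)(torch.randn(2, 512, 10))  # shape (2, 512, 8)
# the gating itself is also available as torch.nn.functional.glu(self.conv(x), dim=1)
```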
1705.03122#11
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
12
To enable deep convolutional networks, we add residual connections from the input of each convolution to the output of the block (He et al., 2015a):

h^l_i = v(W^l [h^{l-1}_{i-k/2}, ..., h^{l-1}_{i+k/2}] + b^l_w) + h^{l-1}_i

[Figure 1 appears here; its caption is given below.]

For encoder networks we ensure that the output of the convolutional layers matches the input length by padding the input at each layer. However, for decoder networks we have to take care that no future information is available to the decoder (Oord et al., 2016a). Specifically, we pad the input by k − 1 elements on both the left and right side by zero vectors, and then remove k elements from the end of the convolution output.
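One common way to enforce the same constraint (no decoder output may see future target elements) is to pad only on the left by k − 1 zeros; the sketch below is illustrative under that assumption rather than a literal transcription of the pad-and-trim scheme described above, and the function name is hypothetical.

```python
import torch
import torch.nn.functional as F

def causal_conv1d(x, weight, bias, kernel_width):
    # x: (batch, channels, length); weight: (out_channels, channels, kernel_width)
    x = F.pad(x, (kernel_width - 1, 0))   # zero-pad the past only, never the future
    return F.conv1d(x, weight, bias)      # output length equals input length
```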
1705.03122#12
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
13
Figure 1. Illustration of batching during training. The English source sentence is encoded (top) and we compute all attention values for the four German target words (center) simultaneously. Our attentions are just dot products between decoder context representations (bottom left) and encoder representations. We add the conditional inputs computed by the attention (center right) to the decoder states which then predict the target words (bottom right). The sigmoid and multiplicative boxes illustrate Gated Linear Units.

target element g_i:

We also add linear mappings to project between the embedding size f and the convolution outputs that are of size 2d. We apply such a transform to w when feeding embeddings to the encoder network, to the encoder outputs z^u_j, to the final layer of the decoder just before the softmax h^L, and to all decoder layers h^l before computing attention scores (1).

d^l_i = W^l_d h^l_i + b^l_d + g_i

For decoder layer l the attention a^l_{ij} of state i and source element j is computed as a dot-product between the decoder state summary d^l_i and each output z^u_j of the last encoder block u:

Finally, we compute a distribution over the T possible next target elements y_{i+1} by transforming the top decoder output h^L via a linear layer with weights W_o and bias b_o:
1705.03122#13
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
14
p(y_{i+1} | y_1, ..., y_i, x) = softmax(W_o h^L_i + b_o) ∈ R^T

a^l_{ij} = exp(d^l_i · z^u_j) / Σ_{t=1}^{m} exp(d^l_i · z^u_t)    (1)

The conditional input c^l_i to the current decoder layer is a weighted sum of the encoder outputs as well as the input element embeddings e_j (Figure 1, center right):

# 3.3. Multi-step Attention

c^l_i = Σ_{j=1}^{m} a^l_{ij} (z^u_j + e_j)    (2)

We introduce a separate attention mechanism for each decoder layer. To compute the attention, we combine the current decoder state h^l_i with an embedding of the previous This is slightly different to recurrent approaches which compute both the attention and the weighted sum over z^u_j only. We found adding e_j to be beneficial and it resembles key-value memory networks where the keys are the z^u_j and the values are the z^u_j + e_j (Miller et al., 2016). Encoder outputs z^u_j represent potentially large input contexts and e_j provides point information about a specific input element that is useful when making a prediction. Once c^l_i has been computed, it is simply added to the output of the corresponding decoder layer h^l_i.
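A minimal sketch of the attention step of one decoder layer following equations (1) and (2): scores are dot products between decoder state summaries d and the last encoder outputs z, and the conditional input is a weighted sum over z + e. PyTorch is assumed; the function name and the explicit weight arguments are illustrative.

```python
import torch

def layer_attention(h, g, z, e, W_d, b_d):
    # h: (batch, tgt_len, dim) decoder states of this layer
    # g: (batch, tgt_len, dim) embeddings of the previous target elements
    # z: (batch, src_len, dim) outputs of the last encoder block
    # e: (batch, src_len, dim) input element embeddings
    d = h @ W_d.T + b_d + g                 # decoder state summary d^l_i
    scores = d @ z.transpose(1, 2)          # dot products, shape (batch, tgt_len, src_len)
    a = torch.softmax(scores, dim=-1)       # attention over source positions, eq. (1)
    c = a @ (z + e)                         # conditional input c^l_i, eq. (2)
    return h + c                            # c is added to the decoder layer output

# expected parameter shapes: W_d (dim, dim), b_d (dim,)
```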
1705.03122#14
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
15
This can be seen as attention with multiple 'hops' (Sukhbaatar et al., 2015) compared to single step attention (Bahdanau et al., 2014; Luong et al., 2015; Zhou et al., 2016; Wu et al., 2016). In particular, the attention of the first layer determines a useful source context which is then fed to the second layer that takes this information into account when computing attention etc. The decoder also has immediate access to the attention history of the k − 1 previous time steps because the conditional inputs c^{l-1}_{i-k}, ..., c^{l-1}_i are part of h^{l-1}_{i-k}, ..., h^{l-1}_i which are input to h^l_i. This makes it easier for the model to take into account which previous inputs have been attended to already compared to recurrent nets where this information is in the recurrent state and needs to survive several non-linearities. Overall, our attention mechanism considers which words we previously attended to (Yang et al., 2016) and performs multiple attention 'hops' per time step. In Appendix §C, we plot attention scores for a deep decoder and show that at different layers, different portions of the source are attended to.

of attention mechanisms we use; we exclude source word embeddings. We found this to stabilize learning since the encoder received too much gradient otherwise.

# 3.5. Initialization
1705.03122#15
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
16
of attention mechanisms we use; we exclude source word embeddings. We found this to stabilize learning since the encoder received too much gradient otherwise.

# 3.5. Initialization

Normalizing activations when adding the output of different layers, e.g. residual connections, requires careful weight initialization. The motivation for our initialization is the same as for the normalization: maintain the variance of activations throughout the forward and backward passes. All embeddings are initialized from a normal distribution with mean 0 and standard deviation 0.1. For layers whose output is not directly fed to a gated linear unit, we initialize weights from N(0, √(1/n_l)) where n_l is the number of input connections to each neuron. This ensures that the variance of a normally distributed input is retained.
1705.03122#16
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
17
For layers which are followed by a GLU activation, we propose a weight initialization scheme by adapting the derivations in (He et al., 2015b; Glorot & Bengio, 2010; Appendix A). If the GLU inputs are distributed with mean 0 and have sufficiently small variance, then we can approximate the output variance with 1/4 of the input variance (Appendix A.1). Hence, we initialize the weights so that the input to the GLU activations has 4 times the variance of the layer input. This is achieved by drawing their initial values from N(0, √(4/n_l)). Biases are uniformly set to zero when the network is constructed.

Our convolutional architecture also allows us to batch the attention computation across all elements of a sequence compared to RNNs (Figure 1, middle). We batch the computations of each decoder layer individually.

# 3.4. Normalization Strategy
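A sketch of this initialization rule in PyTorch: standard deviation √(4/n_l) when the layer output feeds a GLU, √(1/n_l) otherwise, with n_l read off the weight shape. The helper name is hypothetical and this is only an illustration of the rule, not the authors' code.

```python
import torch.nn as nn

def init_layer(layer: nn.Linear, feeds_glu: bool) -> None:
    fan_in = layer.weight.size(1)                        # input connections n_l per neuron
    std = (4.0 / fan_in) ** 0.5 if feeds_glu else (1.0 / fan_in) ** 0.5
    nn.init.normal_(layer.weight, mean=0.0, std=std)     # N(0, sqrt(4/n_l)) or N(0, sqrt(1/n_l))
    if layer.bias is not None:
        nn.init.zeros_(layer.bias)                       # biases start at zero
```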
1705.03122#17
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
18
# 3.4. Normalization Strategy

We stabilize learning through careful weight initialization (§3.5) and by scaling parts of the network to ensure that the variance throughout the network does not change dramatically. In particular, we scale the output of residual blocks as well as the attention to preserve the variance of activations. We multiply the sum of the input and output of a residual block by √0.5 to halve the variance of the sum. This assumes that both summands have the same variance, which is not always true but effective in practice.

We apply dropout to the input of some layers so that inputs are retained with a probability of p. This can be seen as multiplication with a Bernoulli random variable taking value 1/p with probability p and 0 otherwise (Srivastava et al., 2014). The application of dropout will then cause the variance to be scaled by 1/p. We aim to restore the incoming variance by initializing the respective layers with larger weights. Specifically, we use N(0, √(4p/n_l)) for layers whose output is subject to a GLU and N(0, √(p/n_l)) otherwise (Appendix A.3).

# 4. Experimental Setup

# 4.1. Datasets

We consider three major WMT translation tasks as well as a text summarization task.
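The two variance-preserving scalings described above (and the attention scaling discussed in the next chunk) are simple to state in code; both function names are hypothetical and this is a sketch, not the authors' implementation.

```python
import math
import torch

def scaled_residual(block_input: torch.Tensor, block_output: torch.Tensor) -> torch.Tensor:
    # multiply the sum by sqrt(0.5) to halve its variance
    return (block_input + block_output) * math.sqrt(0.5)

def scaled_attention_output(weighted_sum: torch.Tensor, m: int) -> torch.Tensor:
    # the conditional input is a sum of m vectors; scale by m * sqrt(1/m)
    # to restore its original magnitude under the uniform-attention assumption
    return weighted_sum * (m * math.sqrt(1.0 / m))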
1705.03122#18
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
19
# 4. Experimental Setup

# 4.1. Datasets

We consider three major WMT translation tasks as well as a text summarization task.

The conditional input c^l generated by the attention is a weighted sum of m vectors (2) and we counteract a change in variance through scaling by m√(1/m); we multiply by m to scale up the inputs to their original size, assuming the attention scores are uniformly distributed. This is generally not the case but we found it to work well in practice.

For convolutional decoders with multiple attention, we scale the gradients for the encoder layers by the number

WMT'16 English-Romanian. We use the same data and pre-processing as Sennrich et al. (2016b) but remove sentences with more than 175 words. This results in 2.8M sentence pairs for training and we evaluate on newstest2016.²

²We followed the pre-processing of https://github.com/rsennrich/wmt16-scripts/blob/80e21le5/sample/preprocess.sh and added the back-translated data from http://data.statmt.org/rsennrich/wmt1l6_
1705.03122#19
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
20
We experiment with word-based models using a source vocabulary of 200K types and a target vocabulary of 80K types. We also consider a joint source and target byte-pair encoding (BPE) with 40K types (Sennrich et al., 2016a;b).

WMT'14 English-German. We use the same setup as Luong et al. (2015) which comprises 4.5M sentence pairs for training and we test on newstest2014.³ As vocabulary we use 40K sub-word types based on BPE.

WMT'14 English-French. We use the full training set of 36M sentence pairs, and remove sentences longer than 175 words as well as pairs with a source/target length ratio exceeding 1.5. This results in 35.5M sentence pairs for training. Results are reported on newstest2014. We use a source and target vocabulary with 40K BPE types.

still fit in GPU memory. If the threshold is exceeded, we simply split the batch until the threshold is met and process the parts separately. Gradients are normalized by the number of non-padding tokens per mini-batch. We also use weight normalization for all layers except for lookup tables (Salimans & Kingma, 2016).
1705.03122#20
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
21
Besides dropout on the embeddings and the decoder output, we also apply dropout to the input of the convolutional blocks (Srivastava et al., 2014). All models are implemented in Torch (Collobert et al., 2011) and trained on a single Nvidia M40 GPU except for WMT'14 English-French for which we use a multi-GPU setup on a single machine. We train on up to eight GPUs synchronously by maintaining copies of the model on each card and split the batch so that each worker computes 1/8-th of the gradients; at the end we sum the gradients via Nvidia NCCL.

In all setups a small subset of the training data serves as validation set (about 0.5-1% for each dataset) for early stopping and learning rate annealing.

# 4.3. Evaluation
1705.03122#21
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
22
# 4.3. Evaluation

Abstractive summarization. We train on the Gigaword corpus (Graff et al., 2003) and pre-process it identically to Rush et al. (2015), resulting in 3.8M training examples and 190K for validation. We evaluate on the DUC-2004 test data comprising 500 article-title pairs (Over et al., 2007) and report three variants of recall-based ROUGE (Lin, 2004), namely ROUGE-1 (unigrams), ROUGE-2 (bigrams), and ROUGE-L (longest-common substring). We also evaluate on a Gigaword test set of 2000 pairs which is identical to the one used by Rush et al. (2015) and we report F1 ROUGE similar to prior work. Similar to Shen et al. (2016) we use a source and target vocabulary of 30K words and require outputs to be at least 14 words long.
1705.03122#22
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
23
We report average results over three runs of each model, where each differs only in the initial random seed. Translations are generated by a beam search and we normalize log-likelihood scores by sentence length. We use a beam of width 5. We divide the log-likelihoods of the final hypothesis in beam search by their length |y|. For WMT'14 English-German we tune a length normalization constant on a separate development set (newstest2015) and we normalize log-likelihoods by |y|^α (Wu et al., 2016). On other datasets we did not find any benefit with length normalization.

# 4.2. Model Parameters and Optimization

We use 512 hidden units for both encoders and decoders, unless otherwise stated. All embeddings, including the output produced by the decoder before the final linear layer, have dimensionality 512; we use the same dimensionalities for linear layers mapping between the hidden and embedding sizes (§3.2).
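The length normalization used to rank beam hypotheses amounts to the following scoring rule; this is a sketch, where alpha = 1 corresponds to plain division by |y| and a tuned alpha is used for WMT'14 English-German.

```python
def normalized_score(log_likelihood: float, length: int, alpha: float = 1.0) -> float:
    # divide the hypothesis log-likelihood by |y|**alpha before ranking beam candidates
    return log_likelihood / (length ** alpha)
```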
1705.03122#23
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
24
We train our convolutional models with Nesterov's accelerated gradient method (Sutskever et al., 2013) using a momentum value of 0.99 and renormalize gradients if their norm exceeds 0.1 (Pascanu et al., 2013). We use a learning rate of 0.25 and once the validation perplexity stops improving, we reduce the learning rate by an order of magnitude after each epoch until it falls below 10^-4.

For word-based models, we perform unknown word replacement based on attention scores after generation (Jean et al., 2015). Unknown words are replaced by looking up the source word with the maximum attention score in a pre-computed dictionary. If the dictionary contains no translation, then we simply copy the source word. Dictionaries were extracted from the word aligned training data that we obtained with fast_align (Dyer et al., 2013). Each source word is mapped to the target word it is most frequently aligned to. In our multi-step attention (§3.3) we simply average the attention scores over all layers. Finally, we compute case-sensitive tokenized BLEU, except for WMT'16 English-Romanian where we use detokenized BLEU to be comparable with Sennrich et al. (2016b).⁴
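A sketch of this optimization recipe using standard PyTorch primitives; the helper names are hypothetical and the annealing logic is simplified to the rule described above (reduce the learning rate by 10x per epoch once validation perplexity stops improving, down to 1e-4).

```python
import torch

def make_optimizer(model):
    # Nesterov accelerated gradient with momentum 0.99 and learning rate 0.25
    return torch.optim.SGD(model.parameters(), lr=0.25, momentum=0.99, nesterov=True)

def training_step(model, loss, optimizer, max_norm=0.1):
    optimizer.zero_grad()
    loss.backward()
    # renormalize gradients if their norm exceeds 0.1
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
    optimizer.step()

def anneal(optimizer, factor=0.1, floor=1e-4):
    # called once per epoch after validation perplexity stops improving
    for group in optimizer.param_groups:
        if group['lr'] > floor:
            group['lr'] *= factor
```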
1705.03122#24
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
25
⁴https://github.com/moses-smt/mosesdecoder/blob/617e8c8/scripts/generic/{multi-bleu.perl,mteval-v13a.pl}

Unless otherwise stated, we use mini-batches of 64 sentences. We restrict the maximum number of words in a mini-batch to make sure that batches with long sentences

backtranslations/en-ro.

³http://nlp.stanford.edu/projects/nmt

# 5. Results

# 5.1. Recurrent vs. Convolutional Models

We first evaluate our convolutional model on three translation tasks. On WMT'16 English-Romanian translation we compare to Sennrich et al. (2016b) which is the winning entry on this language pair at WMT'16 (Bojar et al., 2016). Their model implements the attention-based sequence to sequence architecture of Bahdanau et al. (2014) and uses GRU cells both in the encoder and decoder. We test both word-based and BPE vocabularies (§4).
1705.03122#25
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
26
Table 1 shows that our fully convolutional sequence to sequence model (ConvS2S) outperforms the WMT'16 winning entry for English-Romanian by 1.9 BLEU with a BPE encoding and by 1.3 BLEU with a word factored vocabulary. This instance of our architecture has 20 layers in the encoder and 20 layers in the decoder, both using kernels of width 3 and hidden size 512 throughout. Training took between 6 and 7.5 days on a single GPU.

On WMT'14 English to German translation we compare to the following prior work: Luong et al. (2015) is based on a four layer LSTM attention model; ByteNet (Kalchbrenner et al., 2016) propose a convolutional model based on characters without attention, with 30 layers in the encoder and 30 layers in the decoder; GNMT (Wu et al., 2016) represents the state of the art on this dataset and uses eight encoder LSTMs as well as eight decoder LSTMs; we quote their result for a word-based model, such as ours, as well as a word-piece model (Schuster & Nakajima, 2012).⁵
1705.03122#26
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
27
WMT'16 English-Romanian                       BLEU
Sennrich et al. (2016b) GRU (BPE 90K)         28.1
ConvS2S (Word 80K)                            29.45
ConvS2S (BPE 40K)                             30.02

WMT'14 English-German                         BLEU
Luong et al. (2015) LSTM (Word 50K)           20.9
Kalchbrenner et al. (2016) ByteNet (Char)     23.75
Wu et al. (2016) GNMT (Word 80K)              23.12
Wu et al. (2016) GNMT (Word pieces)           24.61
ConvS2S (BPE 40K)                             25.16

WMT'14 English-French                         BLEU
Wu et al. (2016) GNMT (Word 80K)              37.90
Wu et al. (2016) GNMT (Word pieces)           38.95
Wu et al. (2016) GNMT (Word pieces) + RL      39.92
ConvS2S (BPE 40K)                             40.51

Table 1. Accuracy on WMT tasks compared to previous work. ConvS2S and GNMT results are averaged over several runs.

BLEU. Reinforcement learning is equally applicable to our architecture and we believe that it would further improve our results.
1705.03122#27
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
28
BLEU. Reinforcement learning is equally applicable to our architecture and we believe that it would further improve our results.

The results (Table 1) show that our convolutional model outperforms GNMT by 0.5 BLEU. Our encoder has 15 layers and the decoder has 15 layers, both with 512 hidden units in the first ten layers and 768 units in the subsequent three layers, all using kernel width 3. The final two layers have 2048 units which are just linear mappings with a single input. We trained this model on a single GPU over a period of 18.5 days with a batch size of 48. LSTM sparse mixtures have shown strong accuracy at 26.03 BLEU for a single run (Shazeer et al., 2016) which compares to 25.39 BLEU for our best run. This mixture sums the output of four experts, not unlike an ensemble which sums the output of multiple networks. ConvS2S also benefits from ensembling (§5.2), therefore mixtures are a promising direction.
1705.03122#28
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
29
The ConvS2S model for this experiment uses 15 layers in the encoder and 15 layers in the decoder, both with 512 hidden units in the first five layers, 768 units in the subsequent four layers, and 1024 units in the next 3 layers, all using kernel width 3; the final two layers have 2048 units and 4096 units each but they are linear mappings with kernel width 1. This model has an effective context size of only 25 words, beyond which it cannot access any information on the target size. Our results are based on training with 8 GPUs for about 37 days and batch size 32 on each worker.⁶ The same configuration as for WMT'14 English-German achieves 39.41 BLEU in two weeks on this dataset in an eight GPU setup.

Finally, we train on the much larger WMT'14 English-French task where we compare to the state of the art result of GNMT (Wu et al., 2016). Our model is trained with a simple token-level likelihood objective and we improve over GNMT in the same setting by 1.6 BLEU on average. We also outperform their reinforcement (RL) models by 0.5
1705.03122#29
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]
1705.03122
30
Zhou et al. (2016) report a non-averaged result of 39.2 BLEU. More recently, Ha et al. (2016) showed that one can generate weights with one LSTM for another LSTM. This approach achieves 40.03 BLEU but the result is not averaged. Shazeer et al. (2016) compares at 40.56 BLEU to our best single run of 40.70 BLEU.

⁵We did not use the exact same vocabulary size because word pieces and BPE estimate the vocabulary differently.

⁶This is half of the GPU time consumed by a basic model of Wu et al. (2016) who use 96 GPUs for 6 days. We expect the time to train our model to decrease substantially in a multi-machine setup.

WMT'14 English-German          BLEU
Wu et al. (2016) GNMT          26.20
Wu et al. (2016) GNMT+RL       26.30
ConvS2S                        26.43

WMT'14 English-French          BLEU
Zhou et al. (2016)             40.4
Wu et al. (2016) GNMT          40.35
Wu et al. (2016) GNMT+RL       41.16
ConvS2S                        41.44
ConvS2S (10 models)            41.62
1705.03122#30
Convolutional Sequence to Sequence Learning
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
http://arxiv.org/pdf/1705.03122
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
cs.CL
null
null
cs.CL
20170508
20170725
[ { "id": "1611.01576" }, { "id": "1610.00072" }, { "id": "1609.09106" }, { "id": "1612.08083" }, { "id": "1607.06450" } ]