Dataset columns: id (string, 12–15 chars), title (string, 8–162 chars), content (string, 1–17.6k chars), prechunk_id (string, 0–15 chars), postchunk_id (string, 0–15 chars), arxiv_id (string, 10 chars), references (list, length 1).
1611.09823#18
Dialogue Learning With Human-In-The-Loop
4.2.3 FORWARD PREDICTION (FP)

FP (Weston, 2016) handles the situation where a numerical reward for a bot's answer is not available, meaning that there are no +1 or 0 labels available after a student's utterance. Instead, the model assumes the teacher gives textual feedback t to the bot's answer, taking the form of a dialogue utterance, and the model tries to predict this feedback instead. Suppose that x denotes the teacher's question and C = c_1, c_2, ..., c_N denotes the dialogue history as before. In FP, the model first maps the teacher's initial question x and dialogue history C to a vector representation u using a memory network with multiple hops. Then the model performs another hop of attention over all possible student answers in A, with an additional part that incorporates the information of which candidate (i.e., a) was actually selected in the dialogue:

$$p_{\hat{a}} = \mathrm{softmax}(u^\top y_{\hat{a}}), \qquad o = \sum_{\hat{a} \in A} p_{\hat{a}} \left( y_{\hat{a}} + \beta \cdot \mathbb{1}[\hat{a} = a] \right) \qquad (4)$$

where $y_{\hat{a}}$ denotes the vector representation of the student's answer candidate $\hat{a}$, and β is a (learned) d-dimensional vector that signifies the actual action a chosen by the student. o is then combined with u to predict the teacher's feedback t using a softmax:

$$u_1 = o + u, \qquad t = \mathrm{softmax}(u_1^\top x_{r_1}, u_1^\top x_{r_2}, \ldots, u_1^\top x_{r_N}) \qquad (5)$$

where $x_{r_i}$ denotes the embedding of the i-th candidate response. In the online setting, the teacher gives textual feedback, and the learner needs to update its model using that feedback. It was shown in Weston (2016) that in an off-line setting this procedure can work either on its own or in conjunction with a method that also uses numerical rewards, for improved performance. In the online setting, we consider two simple extensions:

• ε-greedy exploration: with probability ε the student gives a random answer, and with probability 1 − ε it gives the answer to which its model assigns the largest probability. This method enables the model to explore the space of actions and to potentially discover correct answers.
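To make the computation concrete, here is a minimal NumPy sketch of one FP hop from Equations (4)–(5) together with the ε-greedy answer selection described above. It is an illustrative reading of the equations, not the authors' released MemNN code; all variable names and shapes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def fp_hop(u, Y, selected_idx, beta, R):
    """One forward-prediction hop.
    u: (d,) context vector from the memory network;
    Y: (|A|, d) embeddings of the answer candidates;
    selected_idx: index of the answer the student actually gave;
    beta: (d,) learned vector marking the chosen action;
    R: (N, d) embeddings of the candidate feedback responses."""
    p = softmax(Y @ u)                 # p_a = softmax(u^T y_a), Eq. (4)
    marked = Y.copy()
    marked[selected_idx] += beta       # add beta * 1[a_hat == a]
    o = p @ marked                     # attention-weighted sum over candidates
    u1 = o + u                         # Eq. (5)
    return softmax(R @ u1)             # predicted distribution over feedback t

def epsilon_greedy(answer_scores, epsilon):
    """Pick a random answer with probability epsilon, else the model's argmax."""
    if rng.random() < epsilon:
        return int(rng.integers(len(answer_scores)))
    return int(np.argmax(answer_scores))
```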
1611.09823#17
1611.09823#19
1611.09823
[ "1511.06931" ]
1611.09823#19
Dialogue Learning With Human-In-The-Loop
• data balancing: cluster the set of teacher responses t and then balance training across the clusters equally.2 This is a type of experience replay (Mnih et al., 2013), but sampling with an evened distribution. Balancing stops part of the distribution from dominating the learning. For example, if the model is not exposed to sufficient positive and negative feedback, and one class overly dominates, the learning process degenerates to a model that always predicts the same output regardless of its input.

# 5 EXPERIMENTS

Experiments are first conducted using our simulator, and then using Amazon Mechanical Turk with real human subjects taking the role of the teacher.3

5.1 SIMULATOR

Online Experiments In our first experiments, we considered both the bAbI and WikiMovies tasks and varied the batch size, the random exploration rate ε, and the type of model. Figures 3 and 4 show results (Task 6) on bAbI and WikiMovies. Other tasks yield similar conclusions and are reported in the appendix. Overall, we obtain the following conclusions:
1611.09823#18
1611.09823#20
1611.09823
[ "1511.06931" ]
1611.09823#20
Dialogue Learning With Human-In-The-Loop
• In general, RBI and FP do work in a reinforcement learning setting, but they can perform better with random exploration.

• In particular, RBI can fail without exploration. RBI needs random noise for exploring labels; otherwise it can get stuck predicting a subset of labels and fail.

2 In the simulated data, because the responses are templates, this can be implemented by first randomly sampling a response, and then randomly sampling a story with that response; we keep the history of all stories seen, from which we sample. For real data, slightly more sophisticated clustering should be used.

3 Code and data are available at https://github.com/facebook/MemNN/tree/master/HITL.
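Footnote 2 above describes how the balancing is implemented for the templated simulator data. A minimal sketch of such a sampler (the class and its interface are hypothetical, not the released code) might look like this:

```python
import random
from collections import defaultdict

class BalancedReplay:
    """Experience replay that samples teacher-response clusters evenly.
    For templated simulator data each distinct response string is its own
    cluster; for real data a clustering step would supply the key instead."""

    def __init__(self):
        self.by_response = defaultdict(list)   # response template -> stories seen so far

    def add(self, story, response):
        self.by_response[response].append(story)

    def sample(self):
        # First sample a response uniformly, then a story that received that response.
        response = random.choice(list(self.by_response))
        story = random.choice(self.by_response[response])
        return story, response
```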
1611.09823#19
1611.09823#21
1611.09823
[ "1511.06931" ]
1611.09823#21
Dialogue Learning With Human-In-The-Loop
[Figure 3 plots omitted. Panels include "Random Exploration for FP", "Random Exploration for FP with Balancing", "Comparing RBI, FP and REINFORCE", "RBI (eps=0.6) Varying Batch Size", and "FP (eps=0.6) Varying Batch Size"; x-axis: Epoch, y-axis: Accuracy.]

Figure 3: Training epoch vs. test accuracy for bAbI (Task 6), varying exploration ε and batch size. Random exploration is important for both reward-based (RBI) and forward prediction (FP). Performance is largely independent of batch size, and RBI performs similarly to REINFORCE. Note that supervised learning (rather than reinforcement learning) with gold standard labels achieves 100% accuracy on this task.

• REINFORCE obtains similar performance to RBI with optimal ε.
1611.09823#20
1611.09823#22
1611.09823
[ "1511.06931" ]
1611.09823#22
Dialogue Learning With Human-In-The-Loop
• FP with balancing or with exploration via ε both outperform FP alone.

• For both RBI and FP, performance is largely independent of online batch size.

Dataset Batch Size Experiments Given that larger online batch sizes appear to work well, and that this could be important in a real-world data collection setup where the same model is deployed to gather a large amount of feedback from humans, we conducted further experiments where the batch size is exactly equal to the dataset size and, for each batch, training is completed to convergence.
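The dataset-batch protocol just described (train to convergence on all data collected so far, redeploy the policy, collect a new batch of the same size, repeat) reduces to a short loop. The sketch below uses hypothetical model/teacher interfaces, not the released code:

```python
def dataset_batch_training(model, teacher, initial_data, n_iterations=6):
    """Iterated batch learning: each iteration trains to convergence on the
    accumulated data, then deploys the updated policy to collect a new
    dataset of (dialogue, answer, feedback/reward) episodes."""
    data = list(initial_data)
    for it in range(1, n_iterations + 1):
        model.fit(data)                        # train to convergence on current data
        new_episodes = teacher.collect(model)  # deploy policy, gather a dataset-sized batch
        data.extend(new_episodes)
        print(f"iteration {it}: test accuracy = {model.evaluate():.3f}")
    return model
```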
1611.09823#21
1611.09823#23
1611.09823
[ "1511.06931" ]
1611.09823#23
Dialogue Learning With Human-In-The-Loop
[Figure 4 plots omitted. Panels include "Random Exploration for RBI", "Random Exploration for FP", "RBI (eps=0.5) Varying Batch Size", and "Comparing RBI, FP and REINFORCE"; x-axis: Epoch, y-axis: Accuracy; batch-size legend: 32, 320, 3200, 32000, full dataset.]

Figure 4:
1611.09823#22
1611.09823#24
1611.09823
[ "1511.06931" ]
1611.09823#24
Dialogue Learning With Human-In-The-Loop
WikiMovies: Training epoch vs. test accuracy on Task 6, varying (top left panel) the exploration rate ε while setting the batch size to 32 for RBI, (top right panel) the same for FP, (bottom left) the batch size for RBI, and (bottom right) comparing RBI, REINFORCE and FP with ε = 0.5. The model is robust to the choice of batch size. RBI and REINFORCE perform comparably. Note that supervised learning (rather than reinforcement learning) with gold standard labels achieves 80% accuracy on this task.

After the model has been trained on the dataset, it is deployed to collect a new dataset of questions and answers, and the process is repeated. Table 1 reports test accuracy at each iteration of training, using bAbI Task 6 as the case study (see the appendix for results on other tasks). The following conclusions can be made for this setting:
1611.09823#23
1611.09823#25
1611.09823
[ "1511.06931" ]
1611.09823#25
Dialogue Learning With Human-In-The-Loop
• RBI improves in performance as we iterate. Unlike in the online case, RBI does not need random exploration. We believe this is because the first batch, which is collected with a randomly initialized model, contains enough variety of examples with positive rewards that the model does not get stuck predicting a subset of labels.

• FP is not stable in this setting. This is because once the model gets very good at making predictions (at the third iteration), it is no longer exposed to a sufficient number of negative responses. From that point on, learning degenerates and performance drops as the model always predicts the same responses. At the next iteration it recovers, since it then has a more balanced training set, but it subsequently collapses again, in an oscillating pattern.

• FP does work if extended with balancing, or with random exploration using a sufficiently large ε.
1611.09823#24
1611.09823#26
1611.09823
[ "1511.06931" ]
1611.09823#26
Dialogue Learning With Human-In-The-Loop
• RBI+FP also works well and helps with the instability of FP, alleviating the need for random exploration and data balancing.

Overall, our simulation results indicate that while a bot can be effectively trained fully online from bot-teacher interactions, collecting real dialogue data in batches (which is easier to collect and iterate experiments over) is also a viable approach. We hence pursue the latter approach in our next set of experiments.

| Model | Iter. 1 | Iter. 2 | Iter. 3 | Iter. 4 | Iter. 5 | Iter. 6 |
|---|---|---|---|---|---|---|
| Imitation Learning | 0.24 | 0.23 | 0.23 | 0.22 | 0.23 | 0.23 |
| Reward Based Imitation (RBI) | 0.74 | 0.87 | 0.90 | **0.96** | **0.96** | **0.98** |
| Forward Pred. (FP) | **0.99** | **0.96** | **1.00** | 0.30 | **1.00** | 0.29 |
| RBI+FP | **0.99** | **0.96** | **0.97** | 0.95 | 0.94 | **0.97** |
| FP (balanced) | **0.99** | **0.97** | **0.97** | **0.97** | **0.97** | **0.97** |
| FP (rand. exploration ε = 0.25) | **0.96** | 0.88 | 0.94 | 0.26 | 0.64 | **0.99** |
| FP (rand. exploration ε = 0.5) | **0.98** | **0.98** | **0.99** | **0.98** | 0.95 | **0.99** |

Table 1: Test accuracy of various models per iteration in the dataset batch size case (using batch size equal to the size of the full training set) for bAbI, Task 6. Results > 0.95 are in bold.

Relation to experiments in Weston (2016) As described in detail in Section 2, the datasets we use in our experiments were introduced in (Weston et al., 2015).
1611.09823#25
1611.09823#27
1611.09823
[ "1511.06931" ]
1611.09823#27
Dialogue Learning With Human-In-The-Loop
However, that work involved constructing pre-built fixed policies (and hence, datasets), rather than training the learner via reinforcement/interactive learning using a simulator, as in our work. They achieved this by choosing an omniscient (but deliberately imperfect) labeler that gets a fraction π_acc of examples always correct (the paper looked at values of 1%, 10% and 50%). In a realistic setting one does not have access to an omniscient labeler; one has to learn a policy completely from scratch, online, starting with a random policy, as we do here. Nevertheless, it is possible to compare our learnt policies to those results because we use the same train/valid/test splits.

The clearest comparison is via Table 1, where the policy is learnt using batch iterations of the dataset, updating the policy on each iteration. Weston et al. (2015) can be viewed as training only one iteration, with a pre-built policy, as explained above, where 59%, 81% and 99% accuracy was obtained for RBI for π_acc of 1%, 10% and 50% respectively.4 While a π_acc of 50% is good enough to solve the task, lower values are not. In this work a random policy begins with 74% accuracy on the first iteration, but importantly, on each iteration the policy is updated and improves, with values of 87% and 90% on iterations 2 and 3 respectively, and 98% on iteration 6. This is a key differentiator to the work of Weston et al. (2015), where such improvement was not shown. We show that such online learning works for both reward-based numerical feedback and for forward prediction methods using textual feedback (as long as balancing or random exploration is performed sufficiently). The final performance outperforms most values of π_acc from Weston et al. (2015) unless π_acc is so large that the task is already solved. This is a key contribution of our work.

Similar conclusions can be made for Figures 3 and 4. Despite our initial random policy starting at close to 0% accuracy, if random exploration
1611.09823#26
1611.09823#28
1611.09823
[ "1511.06931" ]
1611.09823#28
Dialogue Learning With Human-In-The-Loop
ε > 0.2 is employed, then after a number of epochs the performance is better than most values of π_acc from Weston et al. (2015); e.g., compare the accuracies given in the previous paragraph (59%, 81% and 99%) to Figure 3, top left.

5.2 HUMAN FEEDBACK

We employed Turkers to both ask questions and then give textual feedback on the bot's answers, as described in Section 3.2. Our experimental protocol was as follows.
1611.09823#27
1611.09823#29
1611.09823
[ "1511.06931" ]
1611.09823#29
Dialogue Learning With Human-In-The-Loop
We first trained a MemN2N using supervised (i.e., imitation) learning on a training set of 1000 questions produced by Turkers, using the known correct answers provided by the original dataset (and no textual feedback). Next, using the trained policy, we collected textual feedback for the bot's responses to an additional 10,000 questions. Examples from the collected dataset are given in Figure 2. Given this dataset, we compare various models: RBI, FP and FP+RBI. As we know the correct answers to the additional questions, we can assign a positive reward to questions the bot got correct. We hence measure the impact of the sparseness of this reward signal, where a fraction r of the additional examples have rewards. The models are tested on a test set of ~8,000 questions (produced by Turkers), and hyperparameters are tuned on a similarly sized validation set. Note this is a harder task than the WikiMovies task in the simulator due to the use of natural language from Turkers, hence lower test performance is expected.

4 Note, this is not the same as a randomly initialized neural network policy, because due to the synthetic construction with an omniscient labeler the labels will be balanced. In our work, we learn the policy from randomly initialized weights which are updated as we learn the policy.
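A minimal sketch of how the RBI and FP objectives might be combined when only a fraction r of the collected examples carry a binary reward. The loss functions and model interface are hypothetical placeholders, not the authors' implementation:

```python
def rbi_fp_update(model, batch):
    """RBI imitates only answers that received a positive reward (available
    on a fraction r of examples); FP predicts the teacher's textual feedback
    on every example, so it is unaffected by reward sparsity."""
    total_loss = 0.0
    for ex in batch:
        if ex.has_reward and ex.reward == 1:
            # supervised (imitation) loss on the rewarded, correct answer
            total_loss += model.answer_nll(ex.context, ex.answer)
        # forward-prediction loss on the textual feedback, no reward needed
        total_loss += model.feedback_nll(ex.context, ex.answer, ex.teacher_feedback)
    model.step(total_loss)
```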
1611.09823#28
1611.09823#30
1611.09823
[ "1511.06931" ]
1611.09823#30
Dialogue Learning With Human-In-The-Loop
Results are given in Table 2. They indicate that both RBI and FP are useful. When rewards are sparse, FP still works via the textual feedback, while RBI can only use the initial 1000 examples when r = 0. As FP does not use numerical rewards at all, it is invariant to the parameter r. The combination of FP and RBI outperforms either alone.

| Model | r = 0 | r = 0.1 | r = 0.5 | r = 1 |
|---|---|---|---|---|
| Reward Based Imitation (RBI) | 0.333 | 0.340 | 0.365 | 0.375 |
| Forward Prediction (FP) | 0.358 | 0.358 | 0.358 | 0.358 |
| RBI+FP | 0.431 | 0.438 | 0.443 | 0.441 |
1611.09823#29
1611.09823#31
1611.09823
[ "1511.06931" ]
1611.09823#31
Dialogue Learning With Human-In-The-Loop
Table 2: Incorporating Feedback From Humans via Mechanical Turk. Textual feedback is provided for 10,000 model predictions (from a model trained with 1k labeled training examples), plus additional sparse binary rewards (a fraction r of examples have rewards). Forward Prediction and Reward-based Imitation are both useful, with their combination performing best.

We also conducted additional experiments comparing with (i) synthetic feedback and (ii) the fully supervised case, which are given in Appendix C.1. They show that the results with human feedback are competitive with these approaches.

# 6 CONCLUSION

We studied dialogue learning of end-to-end models using textual feedback and numerical rewards. Both fully online and iterative batch settings are viable approaches to policy learning, as long as possible instabilities in the learning algorithms are taken into account.
1611.09823#30
1611.09823#32
1611.09823
[ "1511.06931" ]
1611.09823#32
Dialogue Learning With Human-In-The-Loop
Secondly, we showed for the first time that the recently introduced FP method can work in both an online setting and on real human feedback. Overall, our results indicate that it is feasible to build a practical pipeline that starts with a model trained on an initial fixed dataset, which then learns from interactions with humans in a (semi-)online fashion to improve itself. Future research should work towards doing this in a never-ending learning setup.

# REFERENCES

Mohammad Amin Bassiri.
1611.09823#31
1611.09823#33
1611.09823
[ "1511.06931" ]
1611.09823#33
Dialogue Learning With Human-In-The-Loop
Interactional feedback and the impact of attitude and motivation on noticing L2 form. English Language and Literature Studies, 1(2):61, 2011.

Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075, 2015.

Léon Bottou, Jonas Peters, Joaquin Quiñonero-Candela, Denis X. Charles, D. Max Chickering, Elon Portugaly, Dipankar Ray, Patrice Simard, and Ed Snelson.
1611.09823#32
1611.09823#34
1611.09823
[ "1511.06931" ]
1611.09823#34
Dialogue Learning With Human-In-The-Loop
Counterfactual reasoning and learning systems: The example of computational advertising. Journal of Machine Learning Research, 14:3207–3260, 2013.

Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander Miller, Arthur Szlam, and Jason Weston. Evaluating prerequisite qualities for learning end-to-end dialog systems. arXiv preprint arXiv:1511.06931, 2015.
1611.09823#33
1611.09823#35
1611.09823
[ "1511.06931" ]
1611.09823#35
Dialogue Learning With Human-In-The-Loop
Milica Gašić, Catherine Breslin, Matthew Henderson, Dongho Kim, Martin Szummer, Blaise Thomson, Pirros Tsiakoulis, and Steve Young. POMDP-based dialogue manager adaptation to extended domains. In Proceedings of SIGDIAL, 2013.

Milica Gašić, Dongho Kim, Pirros Tsiakoulis, Catherine Breslin, Matthew Henderson, Martin Szummer, Blaise Thomson, and Steve Young.
1611.09823#34
1611.09823#36
1611.09823
[ "1511.06931" ]
1611.09823#36
Dialogue Learning With Human-In-The-Loop
Incremental on-line adaptation of POMDP-based dialogue managers to extended domains. In Proceedings of InterSpeech, 2014.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pp. 1693–1701, 2015.
1611.09823#35
1611.09823#37
1611.09823
[ "1511.06931" ]
1611.09823#37
Dialogue Learning With Human-In-The-Loop
Esther Levin, Roberto Pieraccini, and Wieland Eckert. Learning dialogue strategies within the Markov decision process framework. In Automatic Speech Recognition and Understanding, 1997. Proceedings., 1997 IEEE Workshop on, pp. 72–79. IEEE, 1997.

Esther Levin, Roberto Pieraccini, and Wieland Eckert. A stochastic model of human-machine interaction for learning dialog strategies. IEEE Transactions on Speech and Audio Processing, 8(1):11–
1611.09823#36
1611.09823#38
1611.09823
[ "1511.06931" ]
1611.09823#38
Dialogue Learning With Human-In-The-Loop
23, 2000.

Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. Key-value memory networks for directly reading documents. arXiv preprint arXiv:1606.03126, 2016.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.

Roberto Pieraccini, David Suendermann, Krishna Dayanidhi, and Jackson Liscombe. Are we there yet? Research in commercial spoken dialog systems. In International Conference on Text, Speech and Dialogue, pp. 3–13. Springer, 2009.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016.
1611.09823#37
1611.09823#39
1611.09823
[ "1511.06931" ]
1611.09823#39
Dialogue Learning With Human-In-The-Loop
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732, 2015.

Jost Schatzmann, Karl Weilhammer, Matt Stuttle, and Steve Young. A survey of statistical user simulation techniques for reinforcement-learning of dialogue management strategies. The Knowledge Engineering Review, 21(02):97–126, 2006.

Satinder Singh, Michael Kearns, Diane J Litman, Marilyn A Walker, et al.
1611.09823#38
1611.09823#40
1611.09823
[ "1511.06931" ]
1611.09823#40
Dialogue Learning With Human-In-The-Loop
Empirical evaluation of a reinforcement learning spoken dialogue system. In AAAI/IAAI, pp. 645–651, 2000.

Satinder Singh, Diane Litman, Michael Kearns, and Marilyn Walker. Optimizing dialogue management with reinforcement learning: Experiments with the NJFun system. Journal of Artificial Intelligence Research, 16:105–133, 2002.

Pei-Hao Su, Milica Gasic, Nikola Mrksic, Lina Rojas-Barahona, Stefan Ultes, David Vandyke, Tsung-Hsien Wen, and Steve Young. Continuously learning neural dialogue management. arXiv preprint arXiv:1606.02689, 2016.

Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, pp. 2440–2448, 2015.
1611.09823#39
1611.09823#41
1611.09823
[ "1511.06931" ]
1611.09823#41
Dialogue Learning With Human-In-The-Loop
Marilyn A. Walker. An application of reinforcement learning to dialogue strategy selection in a spoken dialogue system for email. Journal of Artificial Intelligence Research, 12:387–416, 2000.

Marilyn A Walker, Rashmi Prasad, and Amanda Stent. A trainable generator for recommendations in multimodal dialog. In INTERSPEECH, 2003.

Margaret G Werts, Mark Wolery, Ariane Holcombe, and David L Gast. Instructive feedback: Review of parameters and effects. Journal of Behavioral Education, 5(1):55–75, 1995.
1611.09823#40
1611.09823#42
1611.09823
[ "1511.06931" ]
1611.09823#42
Dialogue Learning With Human-In-The-Loop
Jason Weston. Dialog-based language learning. arXiv preprint arXiv:1604.06045, 2016.

Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. Towards AI-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015.

Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.

Steve Young, Milica Gašić, Simon Keizer, François Mairesse, Jost Schatzmann, Blaise Thomson, and Kai Yu.
1611.09823#41
1611.09823#43
1611.09823
[ "1511.06931" ]
1611.09823#43
Dialogue Learning With Human-In-The-Loop
The hidden information state model: A practical framework for POMDP-based spoken dialogue management. Computer Speech & Language, 24(2):150–174, 2010.

Steve Young, Milica Gašić, Blaise Thomson, and Jason D Williams. POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160–1179, 2013.
1611.09823#42
1611.09823#44
1611.09823
[ "1511.06931" ]
1611.09823#44
Dialogue Learning With Human-In-The-Loop
Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural Turing machines. arXiv preprint arXiv:1505.00521, 362, 2015.

# A FURTHER SIMULATOR TASK DETAILS

The tasks in Weston (2016) were specifically:

- Task 1: The teacher tells the student exactly what they should have said (supervised baseline).
- Task 2: The teacher replies with positive textual feedback and reward, or negative textual feedback.
- Task 3: The teacher gives textual feedback containing the answer when the bot is wrong.
- Task 4: The teacher provides a hint by providing the class of the correct answer, e.g.,
1611.09823#43
1611.09823#45
1611.09823
[ "1511.06931" ]
1611.09823#45
Dialogue Learning With Human-In-The-Loop
"No it's a movie" for the question "which movie did Forrest Gump star in?".
- Task 5: The teacher provides a reason why the student's answer is wrong by pointing out the relevant supporting fact from the knowledge base.
- Task 6: The teacher gives positive reward only 50% of the time.
- Task 7: Rewards are missing and the teacher only gives natural language feedback.
- Task 8: Combines Tasks 1 and 2 to see whether a learner can learn successfully from both forms of supervision at once.
- Task 9: The bot asks questions of the teacher about what it has done wrong.
- Task 10: The bot will receive a hint rather than the correct answer after asking for help.

We refer the readers to Weston (2016) for more detailed descriptions and the motivation behind these tasks. The difference in our system is that the model can be trained on-the-fly via the simulator: after receiving feedback and/or rewards, the model can update itself and apply its learning to the next episode. We present results on Tasks 2, 3 and 4 in this appendix.
1611.09823#44
1611.09823#46
1611.09823
[ "1511.06931" ]
1611.09823#46
Dialogue Learning With Human-In-The-Loop
# B INSTRUCTIONS GIVEN TO TURKERS

These are the instructions given for the textual feedback Mechanical Turk task (we also constructed a separate task to collect the initial questions, not described here):

Title: Write brief responses to given dialogue exchanges (about 15 min)

Description: Write a brief response to a student's answer to a teacher's question, providing feedback to the student on their answer.

Instructions: Each task consists of the following triplets:
1. a question by the teacher
2. the correct answer(s) to the question (separated by
1611.09823#45
1611.09823#47
1611.09823
[ "1511.06931" ]
1611.09823#47
Dialogue Learning With Human-In-The-Loop
"OR")
3. a proposed answer in reply to the question from the student

Consider the scenario where you are the teacher and have already asked the question, and received the reply from the student. Please compose a brief response giving feedback to the student about their answer. The correct answers are provided so that you know whether the student was correct or not.

For example, given 1) question: "what is a color in the united states flag?"; 2) correct answer: "white, blue, red"; 3) student reply: "red", your response could be something like "that's right!"; for 3) reply: "green", you might say "no that's not right" or "nope, a correct answer is actually white".

Please vary responses and try to minimize spelling mistakes. If the same responses are copied/pasted or overused, we'll reject the HIT. Avoid naming the student or addressing "the class" directly.
1611.09823#46
1611.09823#48
1611.09823
[ "1511.06931" ]
1611.09823#48
Dialogue Learning With Human-In-The-Loop
We will consider bonuses for higher quality responses during review.

(correct) T: Which movie did Tom Hanks star in?  S: Forrest Gump
(incorrect) T: Which movie did Tom Hanks star in?  S: Brad Pitt.

Task 1: Imitating an Expert Student
(correct) S: Forrest Gump  T: (no response)
(incorrect) S: Forrest Gump  T: (no response)

Task 2: Positive and Negative Feedback
(correct) T: Yes, that's right! (+)
(incorrect) T: No, that's incorrect!

Task 3: Answers Supplied by Teacher
(correct) T: Yes, that is correct. (+)
(incorrect) T: No, the answer is Forrest Gump!

Task 4: Hints Supplied by Teacher
(correct) T: Correct! (+)
(incorrect) T: No, it's a movie!
1611.09823#47
1611.09823#49
1611.09823
[ "1511.06931" ]
1611.09823#49
Dialogue Learning With Human-In-The-Loop
Task 5: Supporting Facts Supplied by Teacher
(correct) T: That's right. (+)
(incorrect) T: No, because Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise!

Task 6: Partial Feedback
(correct) if random(0,1) < 0.5 then T: That's correct. (+) else T: That's correct.
(incorrect) T: Sorry, wrong.

Task 7: No Feedback
(correct) T: Yes.
(incorrect) T:
1611.09823#48
1611.09823#50
1611.09823
[ "1511.06931" ]
1611.09823#50
Dialogue Learning With Human-In-The-Loop
No.

Task 8: Imitation and Feedback Mixture
(correct) if random(0,1) < 0.5 then T: Yes, that's right! (+) else S: Forrest Gump
(incorrect) if random(0,1) < 0.5 then T: Wrong. else T: (no response)

Task 9: Asking For Corrections
(correct) T: Correct! (+)
(incorrect) T: No, that's wrong.  S: Can you help me?  T: Forrest Gump!

Task 10: Asking For Supporting Facts
(correct) T: Yes, that's right! (+)
(incorrect) T: Sorry, that's not it.  S: Can you help me?  T:
1611.09823#49
1611.09823#51
1611.09823
[ "1511.06931" ]
1611.09823#51
Dialogue Learning With Human-In-The-Loop
A relevant fact is that Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise!

Figure 5: The ten tasks our simulator implements, which evaluate different forms of teacher response and binary feedback. In each case the same example from WikiMovies is given for simplicity, where the student answered correctly for all tasks (left) or incorrectly (right). Red text denotes responses by the bot, with S denoting the bot. Blue text is spoken by the teacher, with T denoting the teacher's response. For imitation learning the teacher provides the response the student should say, denoted with S in Tasks 1 and 8. A (+) denotes a positive reward.

# C ADDITIONAL EXPERIMENTS

| Model | Iter. 1 | Iter. 2 | Iter. 3 | Iter. 4 | Iter. 5 | Iter. 6 |
|---|---|---|---|---|---|---|
| Imitation Learning | 0.24 | 0.23 | 0.23 | 0.23 | 0.25 | 0.25 |
| Reward Based Imitation (RBI) | 0.95 | **0.99** | **0.99** | **0.99** | **1.00** | **1.00** |
| Forward Pred. (FP) | **1.00** | 0.19 | 0.86 | 0.30 | **0.99** | 0.22 |
| RBI+FP | **0.99** | **0.99** | **0.99** | **0.99** | **0.99** | **0.99** |
| FP (balanced) | **0.99** | **0.97** | **0.98** | **0.98** | **0.96** | **0.97** |
| FP (rand. exploration ε = 0.25) | **0.99** | 0.91 | 0.93 | 0.88 | 0.94 | 0.94 |
| FP (rand. exploration ε = 0.5) | **0.98** | 0.93 | **0.97** | **0.96** | 0.95 | **0.97** |

Table 3: Test accuracy of various models in the dataset batch size case (using batch size equal to the size of the full training set) for bAbI, Task 3. Results > 0.95 are in bold.
1611.09823#50
1611.09823#52
1611.09823
[ "1511.06931" ]
1611.09823#52
Dialogue Learning With Human-In-The-Loop
[Figure 6 plots omitted. Panels include "Random Exploration for RBI", "Random Exploration for FP", "RBI (eps=0.6) Varying Batch Size", and "Comparing RBI, FP and REINFORCE"; x-axis: Epoch.]
1611.09823#51
1611.09823#53
1611.09823
[ "1511.06931" ]
1611.09823#53
Dialogue Learning With Human-In-The-Loop
[Figure 6 plots omitted, continued: panel "FP (eps=0.6) Varying Batch Size"; legends: REINFORCE / RBI / FP and batch sizes 20, 80, 320, 1000.]

Figure 6:
1611.09823#52
1611.09823#54
1611.09823
[ "1511.06931" ]
1611.09823#54
Dialogue Learning With Human-In-The-Loop
Training epoch vs. test accuracy for bAbI (Task 2), varying exploration ε and batch size.

[Figure 7 plots omitted. Panels include "Random Exploration for RBI", "Random Exploration for FP", "Random Exploration for FP with Balancing", "Comparing RBI, FP and REINFORCE", "RBI (eps=0.6) Varying Batch Size", and "FP (eps=0.6) Varying Batch Size"; x-axis: Epoch, y-axis: Accuracy.]

Figure 7: Training epoch vs. test accuracy for bAbI (Task 3), varying exploration ε and batch size. Random exploration is important for both reward-based (RBI) and forward prediction (FP).
1611.09823#53
1611.09823#55
1611.09823
[ "1511.06931" ]
1611.09823#55
Dialogue Learning With Human-In-The-Loop
[Figure 8 plots omitted. Panels include "Random Exploration for RBI", "Random Exploration for FP", "Random Exploration for FP with Balancing", "Comparing RBI, FP and REINFORCE", and "FP (eps=0.6) Varying Batch Size"; x-axis: Epoch, y-axis: Accuracy.]

Figure 8: Training epoch vs. test accuracy for bAbI (Task 4), varying exploration ε and batch size. Random exploration is important for both reward-based (RBI) and forward prediction (FP).
1611.09823#54
1611.09823#56
1611.09823
[ "1511.06931" ]
1611.09823#56
Dialogue Learning With Human-In-The-Loop
[Figure 9 plots omitted. Panels include "Random Exploration for RBI", "Random Exploration for FP", "RBI (eps=0.5) Varying Batch Size", and "Comparing RBI, FP and REINFORCE"; x-axis: Epoch, y-axis: Accuracy; batch-size legend: 32, 320, 3200, 32000, full dataset.]

Figure 9:
1611.09823#55
1611.09823#57
1611.09823
[ "1511.06931" ]
1611.09823#57
Dialogue Learning With Human-In-The-Loop
WikiMovies: Training epoch vs. test accuracy on Task 2, varying (top left panel) the exploration rate ε while setting the batch size to 32 for RBI, (top right panel) the same for FP, (bottom left) the batch size for RBI, and (bottom right) comparing RBI, REINFORCE and FP with ε = 0.5. The model is robust to the choice of batch size. RBI and REINFORCE perform comparably.

[Figure 10 plots omitted. Panels include "Random Exploration for RBI", "Random Exploration for FP", "RBI (eps=0.5) Varying Batch Size", and "Comparing RBI, FP and REINFORCE"; x-axis: Epoch, y-axis: Accuracy.]

Figure 10:
1611.09823#56
1611.09823#58
1611.09823
[ "1511.06931" ]
1611.09823#58
Dialogue Learning With Human-In-The-Loop
WikiMovies: Training epoch vs. test accuracy on Task 3, varying (top left panel) the exploration rate ε while setting the batch size to 32 for RBI, (top right panel) the same for FP, (bottom left) the batch size for RBI, and (bottom right) comparing RBI, REINFORCE and FP with ε = 0.5. The model is robust to the choice of batch size. RBI and REINFORCE perform comparably.

[Figure 11 plots omitted. Panels include "Random Exploration for RBI", "Random Exploration for FP", "RBI (eps=0.5) Varying Batch Size", and "Comparing RBI, FP and REINFORCE"; x-axis: Epoch, y-axis: Accuracy.]

Figure 11:
1611.09823#57
1611.09823#59
1611.09823
[ "1511.06931" ]
1611.09823#59
Dialogue Learning With Human-In-The-Loop
WikiMovies: Training epoch vs. test accuracy on Task 4, varying (top left panel) the exploration rate ε while setting the batch size to 32 for RBI, (top right panel) the same for FP, (bottom left) the batch size for RBI, and (bottom right) comparing RBI, REINFORCE and FP with ε = 0.5. The model is robust to the choice of batch size. RBI and REINFORCE perform comparably.

[Figure 12 plots omitted. Panels: "FP (eps=0.5) Varying Batch Size" for four tasks; x-axis: Epoch, y-axis: Accuracy; batch-size legend: 32, 320, 3200, 32000, full dataset.]

Figure 12:
1611.09823#58
1611.09823#60
1611.09823
[ "1511.06931" ]
1611.09823#60
Dialogue Learning With Human-In-The-Loop
WikiMovies: Training epoch vs. test accuracy with varying batch size for FP on Task 2 (top left panel), 3 (top right panel), 4 (bottom left panel) and 6 (bottom right panel), setting ε = 0.5. The model is robust to the choice of batch size.

C.1 ADDITIONAL EXPERIMENTS FOR MECHANICAL TURK SETUP

In the experiment in Section 5.2 we conducted experiments with real human feedback. Here, we compare this to a form of synthetic feedback, mostly as a sanity check, but also to see how much improvement we can get if the signal is simpler and cleaner (as it is synthetic). We hence constructed synthetic feedback for the 10,000 responses, using either Task 2 (positive or negative feedback), Task 3 (answers provided by teacher) or a mix (Task 2+3) where we use one or the other for each example (50% chance of each). The latter gives the synthetic data a mixed set of responses, which more closely mimics the real data case. The results are given in Table 4. The RBI+FP combination is better using the synthetic data than the real data with Task 2+3 or Task 3, which is to be expected, but the real data is competitive, despite the diffi-
1611.09823#59
1611.09823#61
1611.09823
[ "1511.06931" ]
1611.09823#61
Dialogue Learning With Human-In-The-Loop
culty of dealing with its lexical and semantic variability. The real data is better than using Task 2 synthetic data.

For comparison purposes, we also ran a supervised (imitation learning) MemN2N on different-sized training sets of Turker-authored questions with gold annotated labels (so there are no numerical rewards or textual feedback; this is a pure supervised setting). The results are given in Table 5. They indicate that RBI+FP, and even FP alone, get close to the performance of fully supervised learning.

| Model | r = 0 | r = 0.1 | r = 0.5 | r = 1 |
|---|---|---|---|---|
| Reward Based Imitation (RBI) | 0.333 | 0.340 | 0.365 | 0.375 |
| Forward Prediction (FP) [real] | 0.358 | 0.358 | 0.358 | 0.358 |
| RBI+FP [real] | 0.431 | 0.438 | 0.443 | 0.441 |
| Forward Prediction (FP) [synthetic Task 2] | 0.188 | 0.188 | 0.188 | 0.188 |
| Forward Prediction (FP) [synthetic Task 2+3] | 0.328 | 0.328 | 0.328 | 0.328 |
| Forward Prediction (FP) [synthetic Task 3] | 0.361 | 0.361 | 0.361 | 0.361 |
| RBI+FP [synthetic Task 2] | 0.382 | 0.383 | 0.407 | 0.408 |
| RBI+FP [synthetic Task 2+3] | 0.459 | 0.465 | 0.464 | 0.478 |
| RBI+FP [synthetic Task 3] | 0.473 | 0.486 | 0.490 | 0.494 |

Table 4: Incorporating Feedback From Humans via Mechanical Turk: comparing real human feedback to synthetic feedback. Textual feedback is provided for 10,000 model predictions (from a model trained with 1k labeled training examples), and additional sparse binary rewards (fraction r of examples have rewards). We compare real feedback (rows 2 and 3) to synthetic feedback when using FP or RBI+FP (rows 4 and 5).

| Train data size | 1k | 5k | 10k | 20k | 60k |
|---|---|---|---|---|---|
| Supervised MemN2N | 0.333 | 0.429 | 0.476 | 0.526 | 0.599 |
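The synthetic feedback used in Section C.1 above (Task 2-style positive/negative feedback, Task 3-style answer-supplying feedback, or a 50/50 per-example mix) can be generated mechanically once the correct answers are known. A hedged sketch follows; the template strings mirror the Task 2 and Task 3 examples in Figure 5, but the exact wording used to build the synthetic set is an assumption:

```python
import random

def synthetic_feedback(predicted, correct_answer, mode="task2+3"):
    """Construct teacher feedback for a model prediction.
    mode: "task2" (positive/negative), "task3" (answer supplied when wrong),
    or "task2+3" (choose one of the two uniformly per example)."""
    if mode == "task2+3":
        mode = random.choice(["task2", "task3"])
    is_correct = (predicted == correct_answer)
    if mode == "task2":
        return "Yes, that's right!" if is_correct else "No, that's incorrect!"
    # task3: supply the answer when the student is wrong
    return "Yes, that is correct." if is_correct else f"No, the answer is {correct_answer}!"
```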
1611.09823#60
1611.09823#62
1611.09823
[ "1511.06931" ]
1611.09823#62
Dialogue Learning With Human-In-The-Loop
Table 5: Fully Supervised (Imitation Learning) Results on Human Questions

|  | r = 0 | r = 0.1 | r = 0.5 | r = 1 |
|---|---|---|---|---|
| ε = 0 | 0.499 | 0.502 | 0.501 | 0.502 |
| ε = 0.1 | 0.494 | 0.496 | 0.501 | 0.502 |
| ε = 0.25 | 0.493 | 0.495 | 0.496 | 0.499 |
| ε = 0.5 | 0.501 | 0.499 | 0.501 | 0.504 |
| ε = 1 | 0.497 | 0.497 | 0.498 | 0.497 |

Table 6: Second Iteration of Feedback. Using synthetic textual feedback of synthetic Task 2+3 with the RBI+FP method, an additional iteration of data collection of 10k examples, varying the sparse binary reward fraction r and exploration ε. The performance of the first-iteration model was 0.478.

C.2 SECOND ITERATION OF FEEDBACK

We conducted experiments with an additional iteration of data collection for the case of binary rewards and textual feedback using the synthetic Task 2+3 mix. We selected the best model from the previous training, using RBI+FP with r = 1, which previously gave a test accuracy of 0.478 (see Table 4). Using that model as a predictor, we collected an additional 10,000 training examples.
1611.09823#61
1611.09823#63
1611.09823
[ "1511.06931" ]
1611.09823#63
Dialogue Learning With Human-In-The-Loop
We then continue to train our model using the original 1k+10k training set, plus the additional 10k. As before, we report the test accuracy varying r on the additionally collected set. We also report the performance from varying ε, the proportion of random exploration of predictions on the new set. The results are reported in Table 6. Overall, performance is improved in the second iteration, with slightly better performance for large r and ε = 0.5. However, the improvement is mostly invariant to those parameters, likely because FP takes advantage of feedback from incorrect predictions in any case.
1611.09823#62
1611.09823#64
1611.09823
[ "1511.06931" ]
1611.09823#64
Dialogue Learning With Human-In-The-Loop
23
1611.09823#63
1611.09823
[ "1511.06931" ]
1611.09830#0
NewsQA: A Machine Comprehension Dataset
arXiv:1611.09830v3 [cs.CL] 7 Feb 2017

# NEWSQA: A MACHINE COMPREHENSION DATASET

Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman
{adam.trischler, tong.wang, eric.yuan, justin.harris, alessandro.sordoni, phil.bachman, k.suleman}@maluuba.com

# Maluuba Research
Montréal, Québec, Canada

# ABSTRACT

We present NewsQA, a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs. Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting of spans of text from the corresponding articles. We collect this dataset through a four-stage process designed to solicit exploratory questions that require reasoning. A thorough analysis confirms that NewsQA demands abilities beyond simple word matching and recognizing textual entailment. We measure human performance on the dataset and compare it to several strong neural models. The performance gap between humans and machines (0.198 in F1) indicates that significant progress can be made on NewsQA through future research. The dataset is freely available at https://datasets.maluuba.com/NewsQA.
1611.09830#1
1611.09830
[ "1606.02245" ]
1611.09830#1
NewsQA: A Machine Comprehension Dataset
# INTRODUCTION

Almost all human knowledge is recorded in the medium of text. As such, comprehension of written language by machines, at a near-human level, would enable a broad class of artificial intelligence applications. In human students we evaluate reading comprehension by posing questions based on a text passage and then assessing a student's answers. Such comprehension tests are appealing because they are objectively gradable and may measure a range of important abilities, from basic understanding to causal reasoning to inference (Richardson et al., 2013). To teach literacy to machines, the research community has taken a similar approach with machine comprehension (MC).

Recent years have seen the release of a host of MC datasets. Generally, these consist of (document, question, answer) triples to be used in a supervised learning framework. Existing datasets vary in size, difficulty, and collection methodology; however, as pointed out by Rajpurkar et al. (2016), most suffer from one of two shortcomings: those that are designed explicitly to test comprehension (Richardson et al., 2013) are too small for training data-intensive deep learning models, while those that are sufficiently large for deep learning (Hermann et al., 2015; Hill et al., 2016; Bajgar et al., 2016) are generated synthetically, yielding questions that are not posed in natural language and that may not test comprehension directly (Chen et al., 2016). More recently, Rajpurkar et al. (2016) sought to overcome these deficiencies with their crowdsourced dataset, SQuAD.

Here we present a challenging new large-scale dataset for machine comprehension: NewsQA. NewsQA contains 119,633 natural language questions posed by crowdworkers on 12,744 news articles from CNN. Answers to these questions consist of spans of text within the corresponding article, highlighted also by crowdworkers. To build NewsQA we utilized a four-stage collection process designed to encourage exploratory, curiosity-based questions that reflect human information seeking. CNN articles were chosen as the source material because they have been used in the past (Hermann et al., 2015) and, in our view, machine comprehension systems are particularly suited to high-volume, rapidly changing information sources like news.
1611.09830#0
1611.09830#2
1611.09830
[ "1606.02245" ]
1611.09830#2
NewsQA: A Machine Comprehension Dataset
* These three authors contributed equally.

As Trischler et al. (2016a), Chen et al. (2016), and others have argued, it is important for datasets to be sufficiently challenging to teach models the abilities we wish them to learn. Thus, in line with Richardson et al. (2013), our goal with NewsQA was to construct a corpus of questions that necessitates reasoning-like behaviors, for example, synthesis of information across different parts of an article. We designed our collection methodology explicitly to capture such questions.

The challenging characteristics of NewsQA that distinguish it from most previous comprehension tasks are as follows:

1. Answers are spans of arbitrary length within an article, rather than single words or entities.
2. Some questions have no answer in the corresponding article (the null span).
3. There are no candidate answers from which to choose.
4. Our collection process encourages lexical and syntactic divergence between questions and answers.
5.
1611.09830#1
1611.09830#3
1611.09830
[ "1606.02245" ]
1611.09830#3
NewsQA: A Machine Comprehension Dataset
A significant proportion of questions requires reasoning beyond simple word- and context-matching (as shown in our analysis).

Some of these characteristics are present also in SQuAD, the MC dataset most similar to NewsQA. However, we demonstrate through several metrics that NewsQA offers a greater challenge to existing models. In this paper we describe the collection methodology for NewsQA, provide a variety of statistics to characterize it and contrast it with previous datasets, and assess its difficulty. In particular, we measure human performance and compare it to that of two strong neural-network baselines. Humans significantly outperform powerful question-answering models. This suggests there is room for improvement through further advances in machine comprehension research.

# 2 RELATED DATASETS

NewsQA follows in the tradition of several recent comprehension datasets.
1611.09830#2
1611.09830#4
1611.09830
[ "1606.02245" ]
1611.09830#4
NewsQA: A Machine Comprehension Dataset
These vary in size, difficulty, and collection methodology, and each has its own distinguishing characteristics. We agree with Bajgar et al. (2016), who have said "models could certainly benefit from as diverse a collection of datasets as possible." We discuss this collection below.

# 2.1 MCTEST

MCTest (Richardson et al., 2013) is a crowdsourced collection of 660 elementary-level children's stories with associated questions and answers. The stories are fictional, to ensure that the answer must be found in the text itself, and carefully limited to what a young child can understand. Each question comes with a set of 4 candidate answers that range from single words to full explanatory sentences. The questions are designed to require rudimentary reasoning and synthesis of information across sentences, making the dataset quite challenging. This is compounded by the dataset's size, which limits the training of expressive statistical models. Nevertheless, recent comprehension models have performed well on MCTest (Sachan et al., 2015; Wang et al., 2015), including a highly structured neural model (Trischler et al., 2016a). These models all rely on access to the small set of candidate answers, a crutch that NewsQA does not provide.

2.2 CNN/DAILY MAIL

The CNN/Daily Mail corpus (Hermann et al., 2015) consists of news articles scraped from those outlets with corresponding cloze-style questions. Cloze questions are constructed synthetically by deleting a single entity from abstractive summary points that accompany each article (written presumably by human authors). As such, determining the correct answer relies mostly on recognizing textual entailment between the article and the question. The named entities within an article are identified and anonymized in a preprocessing step and constitute the set of candidate answers; contrast this with NewsQA, in which answers often include longer phrases and no candidates are given. Because the cloze process is automatic, it is straightforward to collect a significant amount of data to support deep-learning approaches: CNN/Daily Mail contains about 1.4 million question-answer
1611.09830#3
1611.09830#5
1611.09830
[ "1606.02245" ]
1611.09830#5
NewsQA: A Machine Comprehension Dataset
However, Chen et al. (2016) demonstrated that the task requires only limited reasoning and, in fact, performance of the strongest models (Kadlec et al., 2016; Trischler et al., 2016b; Sordoni et al., 2016) nearly matches that of humans. 2.3 CHILDRENâ S BOOK TEST The Childrenâ s Book Test (CBT) (Hill et al., 2016) was collected using a process similar to that of CNN/Daily Mail. Text passages are 20-sentence excerpts from childrenâ s books available through Project Gutenberg; questions are generated by deleting a single word in the next (i.e., 21st) sentence. Consequently, CBT evaluates word prediction based on context. It is a comprehension task insofar as comprehension is likely necessary for this prediction, but comprehension may be insufï¬ cient and other mechanisms may be more important. 2.4 BOOKTEST Bajgar et al. (2016) convincingly argue that, because existing datasets are not large enough, we have yet to reach the full capacity of existing comprehension models. As a remedy they present BookTest. This is an extension to the named-entity and common-noun strata of CBT that increases their size by over 60 times. Bajgar et al. (2016) demonstrate that training on the augmented dataset yields a model (Kadlec et al., 2016) that matches human performance on CBT. This is impressive and suggests that much is to be gained from more data, but we repeat our concerns about the relevance of story prediction as a comprehension task.
1611.09830#4
1611.09830#6
1611.09830
[ "1606.02245" ]
1611.09830#6
NewsQA: A Machine Comprehension Dataset
We also wish to encourage more efficient learning from less data.

# 2.5 SQUAD

The comprehension dataset most closely related to NewsQA is SQuAD (Rajpurkar et al., 2016). It consists of natural language questions posed by crowdworkers on paragraphs from high-PageRank Wikipedia articles. As in NewsQA, each answer consists of a span of text from the related paragraph and no candidates are provided. Despite the effort of manual labelling, SQuAD's size is signifi-
1611.09830#5
1611.09830#7
1611.09830
[ "1606.02245" ]
1611.09830#7
NewsQA: A Machine Comprehension Dataset
cant and amenable to deep learning approaches: 107,785 question-answer pairs based on 536 articles. Although SQuAD is a more realistic and more challenging comprehension task than the other large-scale MC datasets, machine performance has rapidly improved towards that of humans in recent months. The SQuAD authors measured human accuracy at 0.905 in F1 (we measured human F1 at 0.807 using a different methodology); at the time of writing, the strongest published model to date achieves 0.778 F1 (Wang et al., 2016).
1611.09830#6
1611.09830#8
1611.09830
[ "1606.02245" ]
1611.09830#8
NewsQA: A Machine Comprehension Dataset
This suggests that new, more difficult alternatives like NewsQA could further push the development of more intelligent MC systems.

# 3 COLLECTION METHODOLOGY

We collected NewsQA through a four-stage process: article curation, question sourcing, answer sourcing, and validation. We also applied a post-processing step with answer agreement consolidation and span merging to enhance the usability of the dataset. These steps are detailed below.

3.1 ARTICLE CURATION

We retrieve articles from CNN using the script created by Hermann et al. (2015) for CNN/Daily Mail. From the returned set of 90,266 articles, we select 12,744 uniformly at random. These cover a wide range of topics that includes politics, economics, and current events. Articles are partitioned at random into a training set (90%), a development set (5%), and a test set (5%).
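A minimal sketch of such an article-level split (the seed and rounding here are illustrative assumptions; this is not the released split):

```python
import random

def split_articles(article_ids, seed=0):
    """Shuffle article ids and partition them 90/5/5 into train/dev/test.
    Splitting at the article level keeps all questions about one article
    inside a single split."""
    rng = random.Random(seed)
    ids = list(article_ids)
    rng.shuffle(ids)
    n_train = int(0.90 * len(ids))
    n_dev = int(0.05 * len(ids))
    return {
        "train": ids[:n_train],
        "dev": ids[n_train:n_train + n_dev],
        "test": ids[n_train + n_dev:],
    }
```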
1611.09830#7
1611.09830#9
1611.09830
[ "1606.02245" ]
1611.09830#9
NewsQA: A Machine Comprehension Dataset
3.2 QUESTION SOURCING

It was important to us to collect challenging questions that could not be answered using straightforward word- or context-matching. Like Richardson et al. (2013), we want to encourage reasoning in comprehension models. We are also interested in questions that, in some sense, model human curiosity and reflect actual human use-cases of information seeking. Along a similar line, we consider it an important (though as yet overlooked) capacity of a comprehension model to recognize when
1611.09830#8
1611.09830#10
1611.09830
[ "1606.02245" ]
1611.09830#10
NewsQA: A Machine Comprehension Dataset
given information is inadequate, so we are also interested in questions that may not have sufficient evidence in the text. Our question sourcing stage was designed to solicit questions of this nature, and deliberately separated from the answer sourcing stage for the same reason.

Questioners (a distinct set of crowdworkers) see only a news article's headline and its summary points (also available from CNN); they do not see the full article itself. They are asked to formulate a question from this incomplete information. This encourages curiosity about the contents of the full article and prevents questions that are simple reformulations of sentences in the text. It also increases the likelihood of questions whose answers do not exist in the text. We reject questions that have significant word overlap with the summary points, to ensure that crowdworkers do not treat the summaries as mini-articles, and further discouraged this in the instructions. During collection each Questioner is solicited for up to three questions about an article. They are provided with positive and negative examples to prompt and guide them (detailed instructions are shown in Figure 3).

# 3.3 ANSWER SOURCING

A second set of crowdworkers (Answerers) provide answers. Although this separation of question and answer increases the overall cognitive load, we hypothesized that unburdening Questioners in this way would encourage more complex questions. Answerers receive a full article along with a crowdsourced question and are tasked with determining the answer. They may also reject the question as nonsensical, or select the null answer if the article contains insufficient information. Answers are submitted by clicking on and highlighting words in the article, while instructions encourage the set of answer words to consist of a single continuous span (again, we give an example prompt in the Appendix). For each question we solicit answers from multiple crowdworkers (avg. 2.73) with the aim of achieving agreement between at least two Answerers.

3.4 VALIDATION

Crowdsourcing is a powerful tool but it is not without peril (collection glitches; uninterested or malicious workers). To obtain a dataset of the highest possible quality we use a validation process that mitigates some of these issues. In validation, a third set of crowdworkers sees the full article, a question, and the set of unique answers to that question. We task these workers with choosing the best answer from the candidate set or rejecting all answers.
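A hedged sketch of the summary-overlap rejection rule mentioned above. The paper only states that high-overlap questions were rejected; the tokenization and the threshold here are illustrative guesses:

```python
def too_much_overlap(question, summary_points, threshold=0.5):
    """Flag a candidate question whose tokens overlap heavily with the
    article's CNN summary points."""
    q_tokens = set(question.lower().split())
    summary_tokens = set(" ".join(summary_points).lower().split())
    if not q_tokens:
        return True
    overlap = len(q_tokens & summary_tokens) / len(q_tokens)
    return overlap >= threshold
```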
1611.09830#9
1611.09830#11
1611.09830
[ "1606.02245" ]
1611.09830#11
NewsQA: A Machine Comprehension Dataset
Each article-question pair is validated by an average of 2.48 crowdworkers. Validation was used on those questions without answer-agreement after the previous stage, amounting to 43.2% of all questions.

3.5 ANSWER MARKING AND CLEANUP

After validation, 86.0% of all questions in NewsQA have answers agreed upon by at least two separate crowdworkers, either at the initial answer sourcing stage or in the top-answer selection. This improves the dataset's quality. We choose to include the questions without agreed answers in the corpus also, but they are specially marked. Such questions could be treated as having the null answer and used to train models that are aware of poorly posed questions.
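The agreement consolidation described above can be sketched as follows; the spans are assumed to be (start, end) character offsets and the data structures are hypothetical:

```python
from collections import Counter

def consolidate_answers(answer_spans):
    """Mark a question as agreed if at least two Answerers chose the same
    span; otherwise send it to the validation round (and, failing that,
    mark it as having no agreed answer)."""
    if not answer_spans:
        return {"status": "no_answer"}
    span, count = Counter(answer_spans).most_common(1)[0]
    if count >= 2:
        return {"status": "agreed", "answer": span}
    return {"status": "needs_validation", "candidates": sorted(set(answer_spans))}
```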
1611.09830#10
1611.09830#12
1611.09830
[ "1606.02245" ]
1611.09830#12
NewsQA: A Machine Comprehension Dataset
As a final cleanup step we combine answer spans that are less than 3 words apart (punctuation is discounted). We find that 5.68% of answers consist of multiple spans, while 71.3% of multi-spans are within the 3-word threshold. Looking more closely at the data reveals that the multi-span answers often represent lists. These may present an interesting challenge for comprehension models moving forward.
# 4 DATA ANALYSIS
We provide a thorough analysis of NewsQA to demonstrate its challenge and its usefulness as a machine comprehension benchmark. The analysis focuses on the types of answers that appear in the dataset and the various forms of reasoning required to solve it.1
1. Additional statistics are available at https://datasets.maluuba.com/NewsQA/stats.
Table 1: The variety of answer types appearing in NewsQA, with proportion statistics and examples.
Answer type | Example | Proportion (%)
Date/Time | March 12, 2008 | 2.9
Numeric | 24.3 million | 9.8
Person | Ludwig van Beethoven | 14.8
Location | Torrance, California | 7.8
Other Entity | Pew Hispanic Center | 5.8
Common Noun Phr. | federal prosecutors | 22.2
Adjective Phr. | 5-hour | 1.9
Verb Phr. | suffered minor damage | 1.4
Clause Phr. | trampling on human rights | 18.3
Prepositional Phr. | in the attack | 3.8
Other | nearly half | 11.2
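Returning to the span-merging cleanup described above, the sketch below illustrates the rule; the token-level representation, gap counting, and treatment of punctuation are simplified assumptions rather than the authors' exact implementation.

```python
def merge_close_spans(spans, tokens, max_gap=2):
    """spans: sorted list of (start_tok, end_tok) answer spans (end inclusive).
    Merge spans separated by fewer than 3 non-punctuation tokens."""
    if not spans:
        return []
    merged = [spans[0]]
    for start, end in spans[1:]:
        prev_start, prev_end = merged[-1]
        gap_words = [t for t in tokens[prev_end + 1:start] if t.isalnum()]
        if len(gap_words) <= max_gap:      # "less than 3 words apart"
            merged[-1] = (prev_start, end)
        else:
            merged.append((start, end))
    return merged

tokens = "the Pew Hispanic Center , in Washington , D.C.".split()
print(merge_close_spans([(1, 3), (5, 8)], tokens))  # [(1, 8)] -> a single answer span
```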
1611.09830#11
1611.09830#13
1611.09830
[ "1606.02245" ]
1611.09830#13
NewsQA: A Machine Comprehension Dataset
4.1 ANSWER TYPES
Following Rajpurkar et al. (2016), we categorize answers based on their linguistic type (see Table 1). This categorization relies on Stanford CoreNLP to generate constituency parses, POS tags, and NER tags for answer spans (see Rajpurkar et al. (2016) for more details). From the table we see that the majority of answers (22.2%) are common noun phrases. Thereafter, answers are fairly evenly spread among the clause phrase (18.3%), person (14.8%), numeric (9.8%), and other (11.2%) types. Clearly, answers in NewsQA are linguistically diverse. The proportions in Table 1 only account for cases when an answer span exists. The complement of this set comprises questions with an agreed null answer (9.5% of the full corpus) and answers without agreement after validation (4.5% of the full corpus).
4.2 REASONING TYPES
The forms of reasoning required to solve NewsQA directly influence the abilities that models will learn from the dataset.
1611.09830#12
1611.09830#14
1611.09830
[ "1606.02245" ]
1611.09830#14
NewsQA: A Machine Comprehension Dataset
We stratified reasoning types using a variation on the taxonomy presented by Chen et al. (2016) in their analysis of the CNN/Daily Mail dataset. Types are as follows, in ascending order of difficulty:
1. Word Matching: Important words in the question exactly match words in the immediate context of an answer span, such that a keyword search algorithm could perform well on this subset.
2. Paraphrasing: A single sentence in the article entails or paraphrases the question. Paraphrase recognition may require synonymy and world knowledge.
1611.09830#13
1611.09830#15
1611.09830
[ "1606.02245" ]
1611.09830#15
NewsQA: A Machine Comprehension Dataset
3. Inference: The answer must be inferred from incomplete information in the article or by recognizing conceptual overlap. This typically draws on world knowledge.
4. Synthesis: The answer can only be inferred by synthesizing information distributed across multiple sentences.
5. Ambiguous/Insufficient: The question has no answer or no unique answer in the article.
For both NewsQA and SQuAD, we manually labelled 1,000 examples (drawn randomly from the respective development sets) according to these types and compiled the results in Table 2. Some examples fall into more than one category, in which case we defaulted to the more challenging type. We can see from the table that word matching, the easiest type, makes up the largest subset in both datasets (32.7% for NewsQA and 39.8% for SQuAD). Paraphrasing constitutes a larger proportion in SQuAD than in NewsQA (34.3% vs 27.0%), possibly a result of the explicit encouragement of lexical variety in SQuAD question sourcing. However, NewsQA significantly outnumbers SQuAD on the distribution of the more difficult forms of reasoning: synthesis and inference make up a combined 33.9% of the data, in contrast to 20.5% in SQuAD.
1611.09830#14
1611.09830#16
1611.09830
[ "1606.02245" ]
1611.09830#16
NewsQA: A Machine Comprehension Dataset
Table 2: Reasoning mechanisms needed to answer questions. For each we show an example question with the sentence that contains the answer span. Words relevant to the reasoning type are in bold. The corresponding proportion in the human-evaluated subset of both NewsQA and SQuAD (1,000 samples each) is also given.
Word Matching
  Q: When were the findings published?
  S: Both sets of research findings were published Thursday... (NewsQA 32.7%, SQuAD 39.8%)
Paraphrasing
  Q:
1611.09830#15
1611.09830#17
1611.09830
[ "1606.02245" ]
1611.09830#17
NewsQA: A Machine Comprehension Dataset
Who is the struggle between in Rwanda?
  S: The struggle pits ethnic Tutsis, supported by Rwanda, against ethnic Hutu, backed by Congo. (NewsQA 27.0%, SQuAD 34.3%)
Inference
  Q: Who drew inspiration from presidents?
  S: Rudy Ruiz says the lives of US presidents can make them positive role models for students. (NewsQA 13.2%, SQuAD 8.6%)
Synthesis
  Q: Where is Brittanee Drexel from?
  S: The mother of a 17-year-old Rochester, New York high school student ... says she did not give her daughter permission to go on the trip. Brittanee Marie Drexel's mom says... (NewsQA 20.7%, SQuAD 11.9%)
Ambiguous/Insufficient
  Q:
1611.09830#16
1611.09830#18
1611.09830
[ "1606.02245" ]
1611.09830#18
NewsQA: A Machine Comprehension Dataset
Whose mother is moving to the White House?
  S: ... Barack Obama's mother-in-law, Marian Robinson, will join the Obamas at the family's private quarters at 1600 Pennsylvania Avenue. [Michelle is never mentioned] (NewsQA 6.4%, SQuAD 5.4%)
# 5 BASELINE MODELS
We test the performance of three comprehension systems on NewsQA: human data analysts and two neural models. The first neural model is the match-LSTM (mLSTM) system of Wang & Jiang (2016b). The second is a model of our own design that is similar but computationally cheaper. We describe these models below but omit the personal details of our analysts. Implementation details of the models are described in Appendix A.
# 5.1 MATCH-LSTM
We selected the mLSTM model because it is straightforward to implement and offers strong, though not state-of-the-art, performance on the similar SQuAD dataset. There are three stages involved in the mLSTM. First, LSTM networks encode the document and question (represented by GloVe word embeddings (Pennington et al., 2014)) as sequences of hidden states. Second, an mLSTM network (Wang & Jiang, 2016a) compares the document encodings with the question encodings. This network processes the document sequentially and at each token uses an attention mechanism to obtain a weighted vector representation of the question; the weighted combination is concatenated with the encoding of the current token and fed into a standard LSTM. Finally, a Pointer Network uses the hidden states of the mLSTM to select the boundaries of the answer span. We refer the reader to Wang & Jiang (2016a;b) for full details.
5.2 THE BILINEAR ANNOTATION RE-ENCODING BOUNDARY (BARB) MODEL
The match-LSTM is computationally intensive since it computes an attention over the entire question at each document token in the recurrence. To facilitate faster experimentation with NewsQA we developed a lighter-weight model (BARB) that achieves similar results on SQuAD.2 Our model consists of four stages:
Encoding: All words in the document and question are mapped to real-valued vectors using the GloVe embeddings W ∈ R^{|V| × d}. This yields d_1, ..., d_n ∈ R^d and q_1, ..., q_m ∈ R^d. A bidirec-
2. With the confi
1611.09830#17
1611.09830#19
1611.09830
[ "1606.02245" ]
1611.09830#19
NewsQA: A Machine Comprehension Dataset
gurations for the results reported in Section 6.2, one epoch of training on NewsQA takes about 3.9k seconds for BARB and 8.1k seconds for mLSTM.
1611.09830#18
1611.09830#20
1611.09830
[ "1606.02245" ]
1611.09830#20
NewsQA: A Machine Comprehension Dataset
RC. We use this matrix as an annotation over the document word dimension. In contrast with the more typical multiplicative application of attention vectors, this annotation matrix is concatenated to the encoder RNN input in the re-encoding stage. Re-encoding For each document word, the input of the re-encoding RNN (another biGRU) consists of three components: the document encodings hi, the annotation vectors gi, and a binary feature qi indicating whether the document word appears in the question. The resulting vectors fi = [hi; gi; qi] are fed into the re-encoding RNN to produce D2-dimensional encodings ei for the boundary-pointing stage. Boundary pointing Finally, we search for the boundaries of the answer span using a convolutional network (in a process similar to edge detection). Encodings e; are arranged in matrix E â ¬ R?2*â . E is convolved with a bank of n¢ filters, Fi â ¬ R?2*â , where w is the filter width, k indexes the different filters, and ¢ indexes the layer of the convolutional network. Each layer has the same number of filters of the same dimensions. We add a bias term and apply a nonlinearity (ReLU) following each convolution, with the result an (ny x )-dimensional matrix Be. We use two convolutional layers in the boundary-pointing stage. Given B, and Bog, the answer spanâ s start- and end-location probabilities are computed using p(s) o exp (v7 Bi + bs) and p(e) x exp (v? Bz + be) , respectively. We also concatenate p(s) to the input of the second convolutional layer (along the n-dimension) so as to condition the end-boundary pointing on the start-boundary. Vectors vs, Ve â ¬ Râ S and scalars b,, be â ¬ R are trainable parameters. We also provide an intermediate level of â guidanceâ
1611.09830#19
1611.09830#21
1611.09830
[ "1606.02245" ]
1611.09830#21
NewsQA: A Machine Comprehension Dataset
to the annotation mechanism by first reducing the feature dimension C' in G with mean-pooling, then maximizing the softmax probabilities in the resulting (n-dimensional) vector corresponding to the answer word positions in each document. This auxiliary task is observed empirically to improve performance. # 6 EXPERIMENTS4 6.1 HUMAN EVALUATION We tested four English speakers on a total of 1,000 questions from the NewsQA development set. We used four performance measures: F1 and exact match (EM) scores (the same measures used by SQuAD), as well as BLEU and CIDEr5. BLEU is a precision-based metric popular in machine translation that uses a weighted average of variable length phrase matches (n-grams) against the reference sentence (Papineni et al., 2002). CIDEr was designed to correlate better with human judgements of sentence similarity, and uses tf-idf scores over n-grams (Vedantam et al., 2015). As given in Table 4, humans averaged 0.694 F1 on NewsQA. The human EM scores are relatively low at 0.465. These lower scores are a reï¬ ection of the fact that, particularly in a dataset as complex as NewsQA, there are multiple ways to select semantically equivalent answers, e.g., â 1996â versus â in 1996â .
1611.09830#20
1611.09830#22
1611.09830
[ "1606.02245" ]
1611.09830#22
NewsQA: A Machine Comprehension Dataset
Although these answers are equally correct they would be measured at 0.5 F1 and 0.0 EM. 3A bidirectional GRU concatenates the hidden states of two GRU networks running in opposite directions. Each of these has hidden size 1 2 D1. 4All experiments in this section use the subset of NewsQA dataset with answer agreements (92,549 samples for training, 5,166 for validation, and 5,126 for testing). We leave the challenge of identifying the unanswerable questions for future work. 5We use https://github.com/tylin/coco-caption to calculate these two scores. 7 Table 3:
1611.09830#21
1611.09830#23
1611.09830
[ "1606.02245" ]
1611.09830#23
NewsQA: A Machine Comprehension Dataset
Model performance on SQuAD and NewsQA datasets. Random are taken from Rajpurkar et al. (2016), and mLSTM from Wang & Jiang (2016b). SQuAD Exact Match F1 NewsQA Exact Match F1 Model Dev Test Dev Test Model Dev Test Dev Test Random 0.11 mLSTM 0.591 0.591 BARB 0.13 0.595 - 0.41 0.700 0.709 0.43 0.703 - Random 0.00 mLSTM 0.344 0.361 BARB 0.00 0.349 0.341 0.30 0.496 0.496 0.30 0.500 0.482 Table 4: Human performance on SQuAD and NewsQA datasets.
1611.09830#22
1611.09830#24
1611.09830
[ "1606.02245" ]
1611.09830#24
NewsQA: A Machine Comprehension Dataset
The ï¬ rst row is taken from Rajpurkar et al. (2016), and the last two rows correspond to machine performance (BARB) on the human- evaluated subsets. Dataset Exact Match F1 BLEU CIDEr SQuAD SQuAD (ours) NewsQA 0.803 0.650 0.465 0.905 0.807 0.694 - 0.625 0.560 - 3.998 3.596 SQuADBARB NewsQABARB 0.553 0.340 0.685 0.501 0.366 0.081 2.845 2.431 This suggests that simpler automatic metrics are not equal to the task of complex MC evaluation, a problem that has been noted in other domains (Liu et al., 2016). Therefore we also measure according to BLEU and CIDEr: humans score 0.560 and 3.596 on these metrics, respectively. The original SQuAD evaluation of human performance compares distinct answers given by crowd- workers according to EM and F1; for a closer comparison with NewsQA, we replicated our human test on the same number of validation data (1,000) with the same humans. We measured human answers against the second group of crowdsourced responses in SQuADâ s development set, yielding 0.807 F1, 0.625 BLEU, and 3.998 CIDEr. Note that the F1 score is close to the top single-model performance of 0.778 achieved in Wang et al. (2016).
1611.09830#23
1611.09830#25
1611.09830
[ "1606.02245" ]
1611.09830#25
NewsQA: A Machine Comprehension Dataset
We ï¬ nally compared human performance on the answers that had crowdworker agreement with and without validation, ï¬ nding a difference of only 1.4 percentage points F1. This suggests our validation stage yields good-quality answers. 6.2 MODEL PERFORMANCE Performance of the baseline models and humans is measured by EM and F1 with the ofï¬ cial evaluation script from SQuAD and listed in Table 4. We supplement these with BLEU and CIDEr measures on the 1,000 human-annotated dev questions. Unless otherwise stated, hyperparameters are determined by hyperopt (Appendix A). The gap between human and machine performance on NewsQA is a striking 0.198 points F1 â much larger than the gap on SQuAD (0.098) under the same human evaluation scheme. The gaps suggest a large margin for improvement with machine comprehension methods.
1611.09830#24
1611.09830#26
1611.09830
[ "1606.02245" ]
1611.09830#26
NewsQA: A Machine Comprehension Dataset
Figure 1 stratiï¬ es model (BARB) performance according to answer type (left) and reasoning type (right) as deï¬ ned in Sections 4.1 and 4.2, respectively. The answer-type stratiï¬ cation suggests that the model is better at pointing to named entities compared to other types of answers. The reasoning- type stratiï¬ cation, on the other hand, shows that questions requiring inference and synthesis are, not surprisingly, more difï¬
1611.09830#25
1611.09830#27
1611.09830
[ "1606.02245" ]
1611.09830#27
NewsQA: A Machine Comprehension Dataset
cult for the model. Consistent with observations in Table 4, stratiï¬ ed performance on NewsQA is signiï¬ cantly lower than on SQuAD. The difference is smallest on word matching and largest on synthesis. We postulate that the longer stories in NewsQA make synthesizing information from separate sentences more difï¬ cult, since the relevant sentences may be farther apart. This requires the model to track longer-term dependencies. It is also interesting to observe that on SQuAD, BARB outperforms human annotators in answering ambiguous questions or those with incomplete information.
1611.09830#26
1611.09830#28
1611.09830
[ "1606.02245" ]
1611.09830#28
NewsQA: A Machine Comprehension Dataset
8 Datenime Numeric Word Person Matching â Adjective Phrase Paraphrasing Location Propositional Phrase Inference â Common Noun Phrase thor Other entity Synthesis Clause Phrase Ambiguous! nsutfelent I NewsQA Verb Phrase = EM insufficient = SQUAD ° 02 oO 06 oe 0.000 0.180 0.300 0.450 0.600 0.750 Figure 1: Left: BARB performance (F1 and EM) stratiï¬
1611.09830#27
1611.09830#29
1611.09830
[ "1606.02245" ]
1611.09830#29
NewsQA: A Machine Comprehension Dataset
ed by answer type on the full development set of NewsQA. Right: BARB performance (F1) stratiï¬ ed by reasoning type on the human-assessed subset on both NewsQA and SQuAD. Error bars indicate performance differences between BARB and human annotators. # Table 5: Sentence-level accuracy on artiï¬ cially-lengthened SQuAD documents. SQuAD NewsQA # documents Avg # sentences isf 1 4.9 14.3 23.2 31.8 40.3 79.6 74.9 73.0 72.3 71.0 3 5 7 9 1 30.7 35.4 # 6.3 SENTENCE-LEVEL SCORING We propose a simple sentence-level subtask as an additional quantitative demonstration of the relative difï¬ culty of NewsQA. Given a document and a question, the goal is to ï¬ nd the sentence containing the answer span. We hypothesize that simple techniques like word-matching are inadequate to this task owing to the more involved reasoning required by NewsQA. We employ a technique that resembles inverse document frequency (idf ), which we call inverse sentence frequency (isf ). Given a sentence Si from an article and its corresponding question Q, the isf score is given by the sum of the idf scores of the words common to Si and Q (each sentence is treated as a document for the idf computation). The sentence with the highest isf is taken as the answer sentence Sâ , that is, Sâ = arg max isf (w). i wâ Siâ ©Q The isf method achieves an impressive 79.4% sentence-level accuracy on SQuADâ s development set but only 35.4% accuracy on NewsQAâ s development set, highlighting the comparative difï¬
1611.09830#28
1611.09830#30
1611.09830
[ "1606.02245" ]
1611.09830#30
NewsQA: A Machine Comprehension Dataset
culty of the latter. To eliminate the difference in article length as a possible cause of the performance gap, we also artiï¬ cially increased the article lengths in SQuAD by concatenating adjacent SQuAD articles from the same Wikipedia article. Accuracy decreases as expected with the increased SQuAD article length, yet remains signiï¬ cantly higher than on NewsQA with comparable or even greater article length (see Table 5). # 7 CONCLUSION We have introduced a challenging new comprehension dataset: NewsQA. We collected the 100,000+ examples of NewsQA using teams of crowdworkers, who variously read CNN articles or highlights, posed questions about them, and determined answers. Our methodology yields diverse answer types and a signiï¬ cant proportion of questions that require some reasoning ability to solve. This makes the corpus challenging, as conï¬ rmed by the large performance gap between humans and deep neural models (0.198 F1, 0.479 BLEU, 1.165 CIDEr). By its size and complexity, NewsQA makes a signiï¬ cant extension to the existing body of comprehension datasets. We hope that our corpus will spur further advances in machine comprehension and guide the development of literate artiï¬ cial intelligence.
1611.09830#29
1611.09830#31
1611.09830
[ "1606.02245" ]
1611.09830#31
NewsQA: A Machine Comprehension Dataset
9 0.800 # ACKNOWLEDGMENTS The authors would like to thank à aË glar Gülçehre, Sandeep Subramanian and Saizheng Zhang for helpful discussions. # REFERENCES Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015. Ondrej Bajgar, Rudolf Kadlec, and Jan Kleindienst.
1611.09830#30
1611.09830#32
1611.09830
[ "1606.02245" ]
1611.09830#32
NewsQA: A Machine Comprehension Dataset
Embracing data abundance: Booktest dataset for reading comprehension. arXiv preprint arXiv:1610.00956, 2016. J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde- Farley, and Y. Bengio. Theano: a CPU and GPU math expression compiler. In In Proc. of SciPy, 2010.
1611.09830#31
1611.09830#33
1611.09830
[ "1606.02245" ]
1611.09830#33
NewsQA: A Machine Comprehension Dataset
Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the cnn / daily mail reading comprehension task. In Association for Computational Linguistics (ACL), 2016. # François Chollet. keras. https://github.com/fchollet/keras, 2015. Xavier Glorot and Yoshua Bengio. Understanding the difï¬ culty of training deep feedforward neural networks. In Aistats, volume 9, pp. 249â 256, 2010.
1611.09830#32
1611.09830#34
1611.09830
[ "1606.02245" ]
1611.09830#34
NewsQA: A Machine Comprehension Dataset
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pp. 1684â 1692, 2015. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The goldilocks principle: Reading childrenâ s books with explicit memory representations. ICLR, 2016.
1611.09830#33
1611.09830#35
1611.09830
[ "1606.02245" ]
1611.09830#35
NewsQA: A Machine Comprehension Dataset
Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. Text understanding with the attention sum reader network. arXiv preprint arXiv:1603.01547, 2016. Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2015. Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau.
1611.09830#34
1611.09830#36
1611.09830
[ "1606.02245" ]
1611.09830#36
NewsQA: A Machine Comprehension Dataset
How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv preprint arXiv:1603.08023, 2016. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pp. 311â 318. Association for Computational Linguistics, 2002. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio.
1611.09830#35
1611.09830#37
1611.09830
[ "1606.02245" ]
1611.09830#37
NewsQA: A Machine Comprehension Dataset
On the difï¬ culty of training recurrent neural networks. ICML (3), 28:1310â 1318, 2013. Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In EMNLP, volume 14, pp. 1532â 43, 2014. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016. Matthew Richardson, Christopher JC Burges, and Erin Renshaw. Mctest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP, volume 1, pp. 2, 2013. Mrinmaya Sachan, Avinava Dubey, Eric P Xing, and Matthew Richardson. Learning answerentailing structures for machine comprehension. In Proceedings of ACL, 2015. Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.
1611.09830#36
1611.09830#38
1611.09830
[ "1606.02245" ]
1611.09830#38
NewsQA: A Machine Comprehension Dataset
10 Alessandro Sordoni, Philip Bachman, and Yoshua Bengio. Iterative alternating neural attention for machine reading. arXiv preprint arXiv:1606.02245, 2016. Adam Trischler, Zheng Ye, Xingdi Yuan, Jing He, Philip Bachman, and Kaheer Suleman. A parallel- hierarchical model for machine comprehension on sparse data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 2016a. Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehension with the epireader. In EMNLP, 2016b.
1611.09830#37
1611.09830#39
1611.09830
[ "1606.02245" ]
1611.09830#39
NewsQA: A Machine Comprehension Dataset
Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4566â 4575, 2015. Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. Machine comprehension with syntax, frames, and semantics. In Proceedings of ACL, Volume 2: Short Papers, pp. 700, 2015. Shuohang Wang and Jing Jiang.
1611.09830#38
1611.09830#40
1611.09830
[ "1606.02245" ]
1611.09830#40
NewsQA: A Machine Comprehension Dataset
Learning natural language inference with lstm. NAACL, 2016a. Shuohang Wang and Jing Jiang. Machine comprehension using match-lstm and answer pointer. arXiv preprint arXiv:1608.07905, 2016b. Zhiguo Wang, Haitao Mi, Wael Hamza, and Radu Florian. Multi-perspective context matching for machine comprehension. arXiv preprint arXiv:1612.04211, 2016.
1611.09830#39
1611.09830#41
1611.09830
[ "1606.02245" ]
1611.09830#41
NewsQA: A Machine Comprehension Dataset
11 APPENDICES # A IMPLEMENTATION DETAILS Both mLSTM and BARB are implemented with the Keras framework (Chollet, 2015) using the Theano (Bergstra et al., 2010) backend. Word embeddings are initialized using GloVe vectors (Pennington et al., 2014) pre-trained on the 840-billion Common Crawl corpus. The word embeddings are not updated during training. Embeddings for out-of-vocabulary words are initialized with zero. For both models, the training objective is to maximize the log likelihood of the boundary pointers. Optimization is performed using stochastic gradient descent (with a batch-size of 32) with the ADAM optimizer (Kingma & Ba, 2015). The initial learning rate is 0.003 for mLSTM and 0.0005 for BARB. The learning rate is decayed by a factor of 0.7 if validation loss does not decrease at the end of each epoch. Gradient clipping (Pascanu et al., 2013) is applied with a threshold of 5. Parameter tuning is performed on both models using hyperopt6. For each model, conï¬ gurations for the best observed performance are as follows: # mLSTM Both the pre-processing layer and the answer-pointing layer use bi-directional RNN with a hidden size of 192. These settings are consistent with those used by Wang & Jiang (2016b). Model parameters are initialized with either the normal distribution (N (0, 0.05)) or the orthogonal initialization (O, Saxe et al. 2013) in Keras. All weight matrices in the LSTMs are initialized with O. In the Match-LSTM layer, W q, W p, and W r are initialized with O, bp and w are initialized with N , and b is initialized as 1. In the answer-pointing layer, V and W a are initialized with O, ba and v are initialized with N , and c is initialized as 1.
1611.09830#40
1611.09830#42
1611.09830
[ "1606.02245" ]
1611.09830#42
NewsQA: A Machine Comprehension Dataset
# BARB For BARB, the following hyperparameters are used on both SQuAD and NewsQA: d = 300, D1 = 128, C = 64, D2 = 256, w = 3, and nf = 128. Weight matrices in the GRU, the bilinear models, as well as the boundary decoder (vs and ve) are initialized with O. The ï¬ lter weights in the boundary decoder are initialized with glorot_uniform (Glorot & Bengio 2010, default in Keras). The bilinear biases are initialized with N , and the boundary decoder biases are initialized with 0.
1611.09830#41
1611.09830#43
1611.09830
[ "1606.02245" ]
1611.09830#43
NewsQA: A Machine Comprehension Dataset
# B DATA COLLECTION USER INTERFACE Here we present the user interfaces used in question sourcing, answer sourcing, and question/answer validation. 6https://github.com/hyperopt/hyperopt 12 Highlights e Three women to jointly receive the 2011 Nobel Peace Prize ¢ Prize recognizes non-violent struggle of safety of women and women's rights. e Prize winners to be honored with a concert on Sunday hosted by Helen Mirren Qi: Who were the prize winners Q2: { What country were the prize winners from4 ] Q3: [ Write a question that relates to a highlight. } Qi: Who were the prize winners Q2: { What country were the prize winners from4 ] Q3: [ Write a question that relates to a highlight. } Question What is the age of Patrick McGoohan? © Click here if the question does not make sense or is not a question. (CNN) -- Emmy-winning Patrick McGoohan, the actor who created one of British television's most surreal thrillers, has died aged 8OJaccording to British media reports. Fans holding placards of Patrick McGoohan recreate a scene from â The Prisonerâ to celebrate the 40th anniversary of the show in 2007.
1611.09830#42
1611.09830#44
1611.09830
[ "1606.02245" ]
1611.09830#44
NewsQA: A Machine Comprehension Dataset
The Press Association, quoting his son-in-law Cleve Landsberg, reported he died in Los Angeles after a short illness. McGoohan, star of the 1960s show â The Danger Man, is best remembered for writing and starring in 'The Prisonerâ about a former spy locked away in an isolated village who tries to escape each episode. Question When was the lockdown initiated? Select the best answer: Tucson, Arizona, © 10:30am. -- liam, * Allanswers are very bad. * The question doesn't make sense.
1611.09830#43
1611.09830#45
1611.09830
[ "1606.02245" ]
1611.09830#45
NewsQA: A Machine Comprehension Dataset
Story (for your convenience) (CNN) -- U.S. Air Force officials called off their response late Friday afternoon at a Tucson, Arizona, base after reports that an armed man had entered an office building, the U.S. military branch said in a statement. Earlier in the day, a U.S. military official told CNN that a gunman was believed to be holed up in a building at the Davis-Monthan Air Force Base. This precipitated the Air Force Question What is the age of Patrick McGoohan? © Click here if the question does not make sense or is not a question. (CNN) -- Emmy-winning Patrick McGoohan, the actor who created one of British television's most surreal thrillers, has died aged 8OJaccording to British media reports. Fans holding placards of Patrick McGoohan recreate a scene from â The Prisonerâ to celebrate the 40th anniversary of the show in 2007.
1611.09830#44
1611.09830#46
1611.09830
[ "1606.02245" ]
1611.09830#46
NewsQA: A Machine Comprehension Dataset
The Press Association, quoting his son-in-law Cleve Landsberg, reported he died in Los Angeles after a short illness. McGoohan, star of the 1960s show â The Danger Man, is best remembered for writing and starring in 'The Prisonerâ about a former spy locked away in an isolated village who tries to escape each episode. Question When was the lockdown initiated? Select the best answer: Tucson, Arizona, © 10:30am. -- liam, * Allanswers are very bad. * The question doesn't make sense.
1611.09830#45
1611.09830#47
1611.09830
[ "1606.02245" ]
1611.09830#47
NewsQA: A Machine Comprehension Dataset
Story (for your convenience) (CNN) -- U.S. Air Force officials called off their response late Friday afternoon at a Tucson, Arizona, base after reports that an armed man had entered an office building, the U.S. military branch said in a statement. Earlier in the day, a U.S. military official told CNN that a gunman was believed to be holed up in a building at the Davis-Monthan Air Force Base. This precipitated the Air Force to call for a lock-down -- which began at 10:30 a.m. following the unconfirmed sighting of" such a man. No shots were ever fired and law enforcement teams are on site, said the official, who had direct knowledge of the situation from conversations with base officials but did not want to be identified. In fact, at 6 p.m., Col. John Cherrey -- who commands the Air Force's 355th Fighter Wing -- told reporters that no gunman or weapon was ever found. He added that the building, where the gunman was once thought to
1611.09830#46
1611.09830#48
1611.09830
[ "1606.02245" ]
1611.09830#48
NewsQA: A Machine Comprehension Dataset
Figure 2: Examples of user interfaces for question sourcing, answer sourcing, and validation.
Write Questions From A Summary
Instructions + Overview: Write questions about the highlights of a story. Steps: 1. Read the highlights. 2. Write questions about the highlights.
Example Highlights:
- Sarah Palin from Alaska meets with McCain
- Fareed Zakaria says John McCain did not put country first with his choice
- Zakaria: This is "hell of a time" for Palin to start thinking about national, global issues
Questions: The questions can refer directly to the highlights, for example:
- Where is Palin from?
- What did Fareed say about John McCain's choice?
- Who is thinking about global issues?
Questions must always be related to the highlights but their answers don't have to be in the highlights. You can assume that the highlights summarize a document which can answer other questions, for example:
- What was the meeting about?
- What was McCain's choice?
- What issues is Palin thinking about?
Other Rules:
- Do not re-use the same or very similar questions.
- Questions should be written to have short answers.
- Do not write "how" nor "why" type questions since their answers are not short. "How far/long/many/much" are okay.
1611.09830#47
1611.09830#49
1611.09830
[ "1606.02245" ]
1611.09830#49
NewsQA: A Machine Comprehension Dataset
Figure 3: Question sourcing instructions for the crowdworkers.
1611.09830#48
1611.09830
[ "1606.02245" ]
1611.09268#0
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
# MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, and Tong Wang
Microsoft AI & Research
# Abstract
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises 1,010,916 anonymized questions, sampled from Bing's search query logs, each with a human generated answer, and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages, extracted from 3,563,535 web documents retrieved by Bing, that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would, (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
1611.09268#1
1611.09268
[ "1810.12885" ]
1611.09268#1
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
# Introduction
Building intelligent agents with machine reading comprehension (MRC) or open-domain question answering (QA) capabilities using real world data is an important goal of artificial intelligence. Progress in developing these capabilities can be of significant consumer value if employed in automated assistants, e.g., Cortana [Cortana], Siri [Siri], Alexa [Amazon Alexa], or Google Assistant [Google Assistant], on mobile devices and smart speakers, such as Amazon Echo [Amazon Echo]. Many of these devices rely heavily on recent advances in speech recognition technology powered by neural models with deep architectures [Hinton et al., 2012, Dahl et al., 2012]. The rising popularity of spoken interfaces makes it more attractive for users to use natural language dialog for question-answering and information retrieval from the web as opposed to viewing traditional search result pages on a web browser [Gao et al., 2018]. Chatbots and other messenger based intelligent agents are also becoming popular in automating business processes, e.g., answering customer service requests. All of these scenarios can benefi
1611.09268#0
1611.09268#2
1611.09268
[ "1810.12885" ]
1611.09268#2
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
t from fundamental improvements in MRC models. However, MRC in the wild is extremely challenging. Successful MRC systems should be able to learn good representations from raw text, infer and reason over learned representations, and finally generate a summarized response that is correct in both form and content. The public availability of large datasets has been instrumental in many AI research breakthroughs [Wissner-Gross, 2016]. For example, ImageNet's [Deng et al., 2009] release of 1.5 million labeled examples with 1000 object categories led to the development of object classification models that perform better than humans on the ImageNet task [He et al., 2015]. Similarly, the large speech database collected over 20 years by DARPA enabled new breakthroughs in speech recognition performance from deep learning models Deng and Huang [2004]. Several MRC and QA datasets have also recently emerged. However, many of these existing datasets are not sufficiently large to train deep neural models with a large number of parameters. Large scale existing MRC datasets, when available, are often synthetic. Furthermore, a common characteristic, shared by many of these datasets, is that the questions are usually generated by crowd workers based on provided text spans or documents. In MS MARCO, in contrast, the questions correspond to actual search queries that users submitted to Bing, and therefore may be more representative of a
1611.09268#1
1611.09268#3
1611.09268
[ "1810.12885" ]