doi (string, len 10) | chunk-id (int64, 0–936) | chunk (string, len 401–2.02k) | id (string, len 12–14) | title (string, len 8–162) | summary (string, len 228–1.92k) | source (string, len 31) | authors (string, len 7–6.97k) | categories (string, len 5–107) | comment (string, len 4–398, nullable ⌀) | journal_ref (string, len 8–194, nullable ⌀) | primary_category (string, len 5–17) | published (string, len 8) | updated (string, len 8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1611.10012 | 55 | Table 5 summarizes the final selected model specifications as well as their individual performance on COCO as single models.4 Ensembling these five models using the procedure described in [13] (Appendix A) and using multi-crop inference then yielded our final model. Note that we do not use multiscale training, horizontal flipping, box refinement, box voting, or global context, which are sometimes used in the literature. Table 6 compares a single model's performance against two ways of ensembling, and shows that (1) encouraging diversity did help compared to a hand-selected ensemble, and (2) ensembling and multicrop were responsible for almost 7 points of improvement over a single model.
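As a rough illustration of the box-merging idea behind such an ensemble (a minimal sketch, not the exact procedure cited above), detections from all member models can be pooled per class and de-duplicated with non-maximum suppression; the function names and IoU threshold below are illustrative.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS. boxes: (N, 4) array of [x1, y1, x2, y2]; returns kept indices."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + areas - inter)
        order = rest[iou <= iou_thresh]   # drop boxes that overlap the kept one too much
    return keep

def ensemble_one_class(per_model_boxes, per_model_scores):
    """Pool one class's detections from several models, then suppress duplicates."""
    boxes = np.concatenate(per_model_boxes, axis=0)
    scores = np.concatenate(per_model_scores, axis=0)
    keep = nms(boxes, scores)
    return boxes[keep], scores[keep]
```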
# 4.3. Example detections | 1611.10012#55 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.10012 | 56 | # 4.3. Example detections
In Figures 12 to 17 we visualize detections on images from the COCO dataset, showing side-by-side comparisons of five of the detectors that lie on the "optimality frontier" of the speed-accuracy trade-off plot. To visualize, we select detections with score greater than a threshold and plot the top 20 detections in each image. We use a threshold of .5 for Faster R-CNN and R-FCN and .3 for SSD. These thresholds were hand-tuned for (subjective) visual attractiveness and not using rigorous criteria, so we caution viewers from reading too much into the tea leaves from these visualizations. This being said, we see that across our examples, all of the detectors perform reasonably well on large objects; SSD only shows its weakness on small objects, missing some of the smaller kites and people in the first image as well as the smaller cups and bottles on the dining table in
4Note that these numbers were computed on a held-out validation set and are not strictly comparable to the official COCO test-dev data results (though they are expected to be very close).
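A minimal sketch of the plotting filter described in the preceding paragraph (the 0.5 / 0.3 thresholds and the top-20 cap come from the text; the data structures and names are hypothetical):

```python
SCORE_THRESHOLDS = {"faster_rcnn": 0.5, "rfcn": 0.5, "ssd": 0.3}

def detections_to_plot(detections, model_family, max_boxes=20):
    """detections: iterable of (box, score, class_label) tuples for one image."""
    thresh = SCORE_THRESHOLDS[model_family]
    kept = [d for d in detections if d[1] > thresh]   # keep scores above the threshold
    kept.sort(key=lambda d: d[1], reverse=True)       # highest-scoring first
    return kept[:max_boxes]                           # plot at most the top 20
```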
13 | 1611.10012#56 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.10012 | 57 | 13
Figure 11: Overall COCO mAP (@[.5:.95]) for all experiments plotted against corresponding [email protected] and [email protected]. It is unsurprising that these numbers are correlated, but it is interesting that they are almost perfectly correlated, so for these models it is never the case that a model has strong performance at 50% IOU but weak performance at 75% IOU.
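For reference, the overall metric in this caption is just the mean of the per-IoU-threshold APs over IoU = 0.50, 0.55, ..., 0.95; a minimal sketch, assuming a hypothetical `ap_at_iou` callable that returns AP at a single threshold:

```python
import numpy as np

def coco_map(ap_at_iou):
    thresholds = np.linspace(0.5, 0.95, 10)   # 0.50, 0.55, ..., 0.95
    return float(np.mean([ap_at_iou(t) for t in thresholds]))
```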
|  | Ours | MSRA2015 | Trimps-Soushen |
|---|---|---|---|
| AP | 0.413 | 0.371 | 0.359 |
| [email protected] | 0.62 | 0.588 | 0.58 |
| [email protected] | 0.45 | 0.398 | 0.383 |
| APsmall | 0.231 | 0.173 | 0.158 |
| APmed | 0.436 | 0.415 | 0.407 |
| APlarge | 0.547 | 0.525 | 0.509 |
| AR@100 | 0.604 | 0.489 | 0.497 |
| ARsmall | 0.424 | 0.267 | 0.269 |
| ARmed | 0.641 | 0.552 | 0.557 |
| ARlarge | 0.748 | 0.679 | 0.683 |
| 1611.10012#57 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.10012 | 58 | Table 4: Performance on the 2016 COCO test-challenge dataset. AP and AR refer to (mean) average precision and average recall respectively. Our model achieves a relative improvement of nearly 60% on small objects recall over the previous state-of-the-art COCO detector.
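As a quick check of the "nearly 60%" claim against Table 4's small-object recall numbers: ARsmall rises from 0.267 (MSRA2015) to 0.424 (ours), and 0.424 / 0.267 ≈ 1.59, i.e. a relative improvement of roughly 59%.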
| AP | Feature Extractor | Output stride | Loss ratio |
|---|---|---|---|
| 32.93 | Resnet 101 | 8 | 3:1 |
| 33.3 | Resnet 101 | 8 | 1:1 |
| 34.75 | Inception Resnet (v2) | 16 | 1:1 |
| 35.0 | Inception Resnet (v2) | 16 | 2:1 |
| 35.64 | Inception Resnet (v2) | 8 | 1:1 |
Table 5: Summary of single models that were automatically selected to be part of the diverse ensemble. Loss ratio refers to the multipliers α, β for location and classification losses, respectively.
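A hedged sketch of what the loss-ratio column corresponds to: the training objective weights the localization and classification terms by the multipliers α and β (so a 3:1 ratio means α = 3, β = 1); the function below is illustrative, not the authors' code.

```python
def detection_loss(loc_loss, cls_loss, alpha=1.0, beta=1.0):
    """Weighted sum of localization and classification losses (loss ratio = alpha:beta)."""
    return alpha * loc_loss + beta * cls_loss
```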
| Model | AP | [email protected] | [email protected] | APsmall | APmed | APlarge |
|---|---|---|---|---|---|---|
| Faster RCNN with Inception Resnet (v2) | 0.347 | 0.555 | 0.367 | 0.135 | 0.381 | 0.52 |
| Hand selected Faster RCNN ensemble w/multicrop | 0.41 | 0.617 | 0.449 | 0.236 | 0.43 | 0.542 |
| Diverse Faster RCNN ensemble w/multicrop | 0.416 | 0.619 | 0.454 | 0.239 | 0.435 | 0.549 |
| 1611.10012#58 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.10012 | 59 | Table 6: Effects of ensembling and multicrop inference. Numbers reported on COCO test-dev dataset. Second row (hand selected ensemble) consists of 6 Faster RCNN models with 3 Resnet 101 (v1) and 3 Inception Resnet (v2) and the third row (diverse ensemble) is described in detail in Table 5.
the last image.
# 5. Conclusion
We have performed an experimental comparison of some of the main aspects that influence the speed and accuracy of modern object detectors. We hope this will help practitioners choose an appropriate method when deploying object detection in the real world. We have also identified some new techniques for improving speed without sacrificing much accuracy, such as using many fewer proposals than is usual for Faster R-CNN.
# Acknowledgements
We would like to thank the following people for their advice and support throughout this project: Tom Duerig, Dumitru Erhan, Jitendra Malik, George Papandreou, Dominik Roblek, Chuck Rosenberg, Nathan Silberman, Abhinav Srivastava, Rahul Sukthankar, Christian Szegedy, Jasper Uijlings, Jay Yagnik, Xiangxin Zhu.
# References | 1611.10012#59 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.10012 | 62 | (e) FRCNN+IncResnetV2, 300 proposals
Figure 12: Example detections from 5 different models.
Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks. arXiv preprint arXiv:1512.04143, 2015. 3, 5
[3] A. Canziani, A. Paszke, and E. Culurciello. An analysis of deep neural network models for practical applications. arXiv preprint arXiv:1605.07678, 2016. 1
[6] J. Dai, Y. Li, K. He, and J. Sun. R-fcn: Object detection via region-based fully convolutional networks. arXiv preprint arXiv:1605.06409, 2016. 1, 2, 3, 4, 5, 6
[7] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, A. Senior, P. Tucker, K. Yang, Q. V. Le, et al. Large scale distributed deep networks. In Advances in neural information processing systems, pages 1223–1231, 2012. 4, 5 | 1611.10012#62 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.10012 | 63 | [4] R. Collobert, K. Kavukcuoglu, and C. Farabet. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, number EPFL-CONF-192376, 2011. 4
[8] D. Erhan, C. Szegedy, A. Toshev, and D. Anguelov. Scalable object detection using deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2147–2154, 2014. 2
Instance-aware semantic segmentation via multi-task network cascades. arXiv preprint arXiv:1512.04412, 2015. 3, 5, 6
[9] C.-Y. Fu, W. Liu, A. Ranga, A. Tyagi, and A. C. Berg. Dssd: Deconvolutional single shot detector. arXiv preprint arXiv:1701.06659, 2017. 3
15 | 1611.10012#63 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.10012 | 65 | (e) FRCNN+IncResnetV2, 300 proposals
Figure 13: Example detections from 5 different models.
[10] R. Girshick. Fast r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, pages 1440–1448, 2015. 2, 3, 5
[11] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 580–587, 2014. 2, 5
I. Danihelka, A. Graves, D. Rezende, and D. Wierstra. Draw: A recurrent neural network for image generation. In Proceedings of The 32nd International Conference on Machine Learning, pages 1462–1471, 2015. 5
[13] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015. 3, 4, 5, 6, 7, 13 | 1611.10012#65 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.10012 | 68 | (e) FRCNN+IncResnetV2, 300 proposals
Figure 14: Example detections from 5 different models.
[15] P. J. Huber et al. Robust estimation of a location parameter. The Annals of Mathematical Statistics, 35(1):73–101, 1964. 5
[19] K.-H. Kim, S. Hong, B. Roh, Y. Cheon, and M. Park. Pvanet: Deep but lightweight neural networks for real-time object detection. arXiv preprint arXiv:1608.08021, 2016. 3
[16] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. 4, 6, 7
[17] M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial transformer networks. In Advances in Neural Information Processing Systems, pages 2017–2025, 2015. 5 | 1611.10012#68 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.10012 | 69 | [18] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM international conference on Multimedia, pages 675–678. ACM, 2014. 4
Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012. 2
[21] S. Lee, S. Purushwalkam, M. Cogswell, D. Crandall, and D. Batra. Why M heads are better than one: Training a diverse ensemble of deep networks. 19 Nov. 2015. 13
[22] Y. Li, H. Qi, J. Dai, X. Ji, and W. Yichen. Translation-aware fully convolutional instance segmentation. https://github.com/daijifeng001/TA-FCN, 2016. 3
17 | 1611.10012#69 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.10012 | 71 | (e) FRCNN+IncResnetV2, 300 proposals
Figure 15: Example detections from 5 different models.
[23] T. Y. Lin and P. Dollar. MS COCO API. https://github.com/pdollar/coco, 2016. 5
[24] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. arXiv preprint arXiv:1612.03144, 2016. 3
[25] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In ECCV, 1 May 2014. 1, 4
[26] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. Ssd: Single shot multibox detector. In European Conference on Computer Vision, pages 21–37. Springer, 2016. 1, 2, 3, 4, 5, 6, 7 | 1611.10012#71 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.10012 | 73 | (a) SSD+Mobilenet, lowres
(b) SSD+InceptionV2, lowres
(c) FRCNN+Resnet101, 100 proposals
(d) RFCN+Resnet101, 300 proposals
(e) FRCNN+IncResnetV2, 300 proposals
Figure 16: Example detections from 5 different models.
arXiv preprint arXiv:1609.05590, 2016. 3
[29] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi. You only look once: Unified, real-time object detection. arXiv preprint arXiv:1506.02640, 2015. 1, 3
[30] J. Redmon and A. Farhadi. Yolo9000: Better, faster, stronger. arXiv preprint arXiv:1612.08242, 2016. 3
[31] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91–99, 2015. 1, 2, 3, 4, 5, 6 | 1611.10012#73 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.10012 | 74 | [32] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015. 4
[33] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv preprint arXiv:1312.6229, 2013. 3
[34] A. Shrivastava and A. Gupta. Contextual priming and feedback for faster r-cnn. In European Conference on Computer Vision, pages 330–348. Springer, 2016. 3
[35] A. Shrivastava, A. Gupta, and R. Girshick. Training region-based object detectors with online hard example mining. arXiv preprint arXiv:1604.03540, 2016. 3
Tf-slim: A high-level library to define complex models in tensorflow.
19 | 1611.10012#74 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.10012 | 76 | (e) FRCNN+IncResnetV2, 300 proposals
Figure 17: Example detections from 5 different models.
https://research.googleblog.com/2016/08/tf-slim-high-level-library-to-define.html, 2016. [Online; accessed 6-November-2016]. 4
[40] C. Szegedy, S. Reed, D. Erhan, and D. Anguelov. Scalable, high-quality object detection. arXiv preprint arXiv:1412.1441, 2014. 1, 2, 3
[37] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 4, 6, 7
[41] C. Szegedy, A. Toshev, and D. Erhan. Deep neural networks for object detection. In Advances in Neural Information Processing Systems, pages 2553–2561, 2013. 2
Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261, 2016. 4, 6, 7 | 1611.10012#76 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.10012 | 77 | [42] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015. 4, 6
[39] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015. 3
[43] T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2), 2012. 5
20
[44] P. Viola and M. J. Jones. Robust real-time face detection. International journal of computer vision, 57(2):137–154, 2004. 2
[45] B. Yang, J. Yan, Z. Lei, and S. Z. Li. Craft objects from images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6043–6051, 2016. 3 | 1611.10012#77 | Speed/accuracy trade-offs for modern convolutional object detectors | The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | http://arxiv.org/pdf/1611.10012 | Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy | cs.CV | Accepted to CVPR 2017 | null | cs.CV | 20161130 | 20170425 | [
{
"id": "1512.00567"
},
{
"id": "1502.03167"
},
{
"id": "1612.03144"
},
{
"id": "1602.07261"
},
{
"id": "1506.02640"
},
{
"id": "1612.08242"
},
{
"id": "1608.08021"
},
{
"id": "1605.07678"
},
{
"id": "1604.02135"
},
{
"id": "1701.06659"
},
{
"id": "1605.06409"
},
{
"id": "1512.03385"
},
{
"id": "1704.04861"
},
{
"id": "1512.04143"
},
{
"id": "1609.05590"
},
{
"id": "1604.03540"
},
{
"id": "1702.04680"
},
{
"id": "1512.04412"
}
] |
1611.09823 | 0 | arXiv:1611.09823v3 [cs.AI] 13 Jan 2017
# Under review as a conference paper at ICLR 2017
# DIALOGUE LEARNING WITH HUMAN-IN-THE-LOOP
Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston Facebook AI Research, New York, USA {jiwel,ahm,spchopra,ranzato,jase}@fb.com
# ABSTRACT
An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.
# INTRODUCTION | 1611.09823#0 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09823 | 1 | # INTRODUCTION
A good conversational agent (which we sometimes refer to as a learner or bot1) should have the ability to learn from the online feedback from a teacher: adapting its model when making mistakes and reinforcing the model when the teacher's feedback is positive. This is particularly important in the situation where the bot is initially trained in a supervised way on a fixed synthetic, domain-specific or pre-built dataset before release, but will be exposed to a different environment after release (e.g., more diverse natural language utterance usage when talking with real humans, different distributions, special cases, etc.). Most recent research has focused on training a bot from fixed training sets of labeled data but seldom on how the bot can improve through online interaction with humans. Human (rather than machine) language learning happens during communication (Bassiri, 2011; Werts et al., 1995), and not from labeled datasets, hence making this an important subject to study. | 1611.09823#1 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 1 | # Maluuba Research Montréal, Québec, Canada
# ABSTRACT
We present NewsQA, a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs. Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting of spans of text from the corresponding articles. We collect this dataset through a four-stage process designed to solicit exploratory questions that require reasoning. A thorough analysis confirms that NewsQA demands abilities beyond simple word matching and recognizing textual entailment. We measure human performance on the dataset and compare it to several strong neural models. The performance gap between humans and machines (0.198 in F1) indicates that significant progress can be made on NewsQA through future research. The dataset is freely available at https://datasets.maluuba.com/NewsQA.
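For context, the F1 referred to above is the token-overlap F1 commonly used to score span answers; a generic sketch (not necessarily the exact NewsQA evaluation script):

```python
from collections import Counter

def span_f1(prediction, reference):
    """Token-level F1 between a predicted answer span and a reference span."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```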
# INTRODUCTION | 1611.09830#1 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 2 | In this work, we explore this direction by training a bot through interaction with teachers in an online fashion. The task is formalized under the general framework of reinforcement learning via the teacher's (dialogue partner's) feedback to the dialogue actions from the bot. The dialogue takes place in the context of question-answering tasks and the bot has to, given either a short story or a set of facts, answer a set of questions from the teacher. We consider two types of feedback: explicit numerical rewards as in conventional reinforcement learning, and textual feedback which is more natural in human dialogue, following (Weston, 2016). We consider two online training scenarios: (i) where the task is built with a dialogue simulator allowing for easy analysis and repeatability of experiments; and (ii) where the teachers are real humans using Amazon Mechanical Turk.
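As a rough illustration of the numerical-reward variant of this setting (a generic sketch, not the paper's actual models or training recipe), a REINFORCE-style update in which the teacher's feedback acts as the reward could look like this; the policy, candidate-answer scoring, and teacher callback are hypothetical stand-ins.

```python
import torch

def reinforce_step(policy, optimizer, question_ids, teacher_reward_fn):
    logits = policy(question_ids)                  # hypothetical: scores over candidate answers
    probs = torch.softmax(logits, dim=-1)
    idx = int(torch.multinomial(probs, num_samples=1).item())   # bot samples an answer
    reward = teacher_reward_fn(idx)                # e.g. +1 if the teacher approves, else 0
    loss = -torch.log(probs[idx]) * reward         # policy-gradient surrogate loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```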
We explore important issues involved in online learning such as how a bot can be most efficiently trained using a minimal amount of teacher's feedback, how a bot can harness different types of feedback signal, how to avoid pitfalls such as instability during online learning with different types of feedback via data balancing and exploration, and how to make learning with real humans feasible via data batching. Our findings indicate that it is feasible to build a pipeline that starts from a model trained with fixed data and then learns from interactions with humans to improve itself. | 1611.09823#2 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 2 | # INTRODUCTION
Almost all human knowledge is recorded in the medium of text. As such, comprehension of written language by machines, at a near-human level, would enable a broad class of artificial intelligence applications. In human students we evaluate reading comprehension by posing questions based on a text passage and then assessing a student's answers. Such comprehension tests are appealing because they are objectively gradable and may measure a range of important abilities, from basic understanding to causal reasoning to inference (Richardson et al., 2013). To teach literacy to machines, the research community has taken a similar approach with machine comprehension (MC). | 1611.09830#2 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 3 | 1In this paper, we refer to a learner (either a human or a bot/dialogue agent which is a machine learning algorithm) as the student, and their more knowledgeable dialogue partner as the teacher.
# 2 RELATED WORK
Reinforcement learning has been widely applied to dialogue, especially in slot filling to solve domain-specific tasks (Walker, 2000; Schatzmann et al., 2006; Singh et al., 2000; 2002). Efforts include Markov Decision Processes (MDPs) (Levin et al., 1997; 2000; Walker et al., 2003; Pieraccini et al., 2009), POMDP models (Young et al., 2010; 2013; Gašić et al., 2013; 2014) and policy learning (Su et al., 2016). Such a line of research focuses mainly on frames with slots to fill, where the bot will use reinforcement learning to model a state transition pattern, generating dialogue utterances to prompt the appropriate user responses to put in the desired slots. This goal is different from ours, where we study end-to-end learning systems and also consider non-reward based setups via textual feedback. | 1611.09823#3 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 3 | Recent years have seen the release of a host of MC datasets. Generally, these consist of (document, question, answer) triples to be used in a supervised learning framework. Existing datasets vary in size, difï¬culty, and collection methodology; however, as pointed out by Rajpurkar et al. (2016), most suffer from one of two shortcomings: those that are designed explicitly to test comprehension (Richardson et al., 2013) are too small for training data-intensive deep learning models, while those that are sufï¬ciently large for deep learning (Hermann et al., 2015; Hill et al., 2016; Bajgar et al., 2016) are generated synthetically, yielding questions that are not posed in natural language and that may not test comprehension directly (Chen et al., 2016). More recently, Rajpurkar et al. (2016) sought to overcome these deï¬ciencies with their crowdsourced dataset, SQuAD. | 1611.09830#3 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 4 | Our work is related to the line of research that focuses on supervised learning for question answering (QA) from dialogues (Dodge et al., 2015; Weston, 2016), either given a database of knowledge (Bordes et al., 2015; Miller et al., 2016) or short texts (Weston et al., 2015; Hermann et al., 2015; Rajpurkar et al., 2016). In our work, the discourse includes the statements made in the past, the question and answer, and crucially the response from the teacher. The latter is what makes the setting different from the standard QA setting, i.e. we use methods that leverage this response also, not just answering questions. Further, QA works only consider ï¬xed datasets with gold annotations, i.e. they do not consider a reinforcement learning setting. | 1611.09823#4 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 4 | Here we present a challenging new large-scale dataset for machine comprehension: NewsQA. NewsQA contains 119,633 natural language questions posed by crowdworkers on 12,744 news articles from CNN. Answers to these questions consist of spans of text within the corresponding article highlighted also by crowdworkers. To build NewsQA we utilized a four-stage collection process designed to encourage exploratory, curiosity-based questions that reflect human information seeking. CNN articles were chosen as the source material because they have been used in the past (Hermann et al., 2015) and, in our view, machine comprehension systems are particularly suited to high-volume, rapidly changing information sources like news.
*These three authors contributed equally.
As Trischler et al. (2016a), Chen et al. (2016), and others have argued, it is important for datasets to be sufficiently challenging to teach models the abilities we wish them to learn. Thus, in line with Richardson et al. (2013), our goal with NewsQA was to construct a corpus of questions that necessitates reasoning-like behaviors, for example, synthesis of information across different parts of an article. We designed our collection methodology explicitly to capture such questions.
The challenging characteristics of NewsQA that distinguish it from most previous comprehension tasks are as follows: | 1611.09830#4 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 5 | Our work is closely related to a recent work from Weston (2016) that learns through conducting conversations where supervision is given naturally in the response during the conversation. That work introduced the use of forward prediction that learns by predicting the teacherâs feedback, in addition to using reward-based learning of correct answers. However, two important issues were not addressed: (i) it did not use a reinforcement learning setting, but instead used pre-built datasets with ï¬xed policies given in advance; and (ii) experiments used only simulated and no real language data. Hence, models that can learn policies from real online communication were not investigated. To make the differences with our work clear, we will now detail these points further. | 1611.09823#5 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 5 | The challenging characteristics of NewsQA that distinguish it from most previous comprehension tasks are as follows:
1. Answers are spans of arbitrary length within an article, rather than single words or entities.
2. Some questions have no answer in the corresponding article (the null span).
3. There are no candidate answers from which to choose.
4. Our collection process encourages lexical and syntactic divergence between questions and answers.
5. A significant proportion of questions requires reasoning beyond simple word- and context-matching (as shown in our analysis).
Some of these characteristics are present also in SQuAD, the MC dataset most similar to NewsQA. However, we demonstrate through several metrics that NewsQA offers a greater challenge to existing models.
In this paper we describe the collection methodology for NewsQA, provide a variety of statistics to characterize it and contrast it with previous datasets, and assess its difï¬culty. In particular, we measure human performance and compare it to that of two strong neural-network baselines. Humans signiï¬cantly outperform powerful question-answering models. This suggests there is room for improvement through further advances in machine comprehension research.
# 2 RELATED DATASETS | 1611.09830#5 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 6 | The experiments in (Weston, 2016) involve constructing pre-built ï¬xed datasets, rather than training the learner within a simulator, as in our work. Pre-built datasets can only be made by ï¬xing a prior in advance. They achieve this by choosing an omniscient (but deliberately imperfect) labeler that gets Ïacc examples always correct (the paper looked at values 50%, 10% and 1%). Again, this was not learned, and was ï¬xed to generate the datasets. Note that the paper refers to these answers as coming from âthe learnerâ (which should be the model), but since the policy is ï¬xed it actually does not depend on the model. In a realistic setting one does not have access to an omniscient labeler, one has to learn a policy completely from scratch, online, starting with a random policy, so their setting was not practically viable. In our work, when policy training is viewed as batch learning over iterations of the dataset, updating the policy on each iteration, (Weston, 2016) can be viewed as training only one iteration, whereas we perform multiple iterations. This is explained further in Sections 4.2 and | 1611.09823#6 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 6 | # 2 RELATED DATASETS
NewsQA follows in the tradition of several recent comprehension datasets. These vary in size, difficulty, and collection methodology, and each has its own distinguishing characteristics. We agree with Bajgar et al. (2016) who have said "models could certainly benefit from as diverse a collection of datasets as possible." We discuss this collection below.
# 2.1 MCTEST
MCTest (Richardson et al., 2013) is a crowdsourced collection of 660 elementary-level childrenâs stories with associated questions and answers. The stories are ï¬ctional, to ensure that the answer must be found in the text itself, and carefully limited to what a young child can understand. Each question comes with a set of 4 candidate answers that range from single words to full explanatory sentences. The questions are designed to require rudimentary reasoning and synthesis of information across sentences, making the dataset quite challenging. This is compounded by the datasetâs size, which limits the training of expressive statistical models. Nevertheless, recent comprehension models have performed well on MCTest (Sachan et al., 2015; Wang et al., 2015), including a highly structured neural model (Trischler et al., 2016a). These models all rely on access to the small set of candidate answers, a crutch that NewsQA does not provide. | 1611.09830#6 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 7 | (Weston, 2016) can be viewed as training only one iteration, whereas we perform multiple iterations. This is explained further in Sections 4.2 and 5.1. We show in our experiments that performance improves over the iterations, i.e. it is better than the ï¬rst iteration. We show that such online learning works for both reward- based numerical feedback and for forward prediction methods using textual feedback (under certain conditions which are detailed). This is a key contribution of our work. | 1611.09823#7 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 7 | 2.2 CNN/DAILY MAIL
The CNN/Daily Mail corpus (Hermann et al., 2015) consists of news articles scraped from those outlets with corresponding cloze-style questions. Cloze questions are constructed synthetically by deleting a single entity from abstractive summary points that accompany each article (written presumably by human authors). As such, determining the correct answer relies mostly on recognizing textual entailment between the article and the question. The named entities within an article are identiï¬ed and anonymized in a preprocessing step and constitute the set of candidate answers; contrast this with NewsQA in which answers often include longer phrases and no candidates are given.
Because the cloze process is automatic, it is straightforward to collect a significant amount of data to support deep-learning approaches: CNN/Daily Mail contains about 1.4 million question-answer pairs. However, Chen et al. (2016) demonstrated that the task requires only limited reasoning and, in fact, performance of the strongest models (Kadlec et al., 2016; Trischler et al., 2016b; Sordoni et al., 2016) nearly matches that of humans.
2.3 CHILDRENâS BOOK TEST | 1611.09830#7 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 8 | Finally, (Weston, 2016) only conducted experiments on synthetic or templated language, and not real language, especially the feedback from the teacher was scripted. While we believe that synthetic datasets are very important for developing understanding (hence we develop a simulator and conduct experiments also with synthetic data), for a new method to gain traction it must be shown to work on real data. We hence employ Mechanical Turk to collect real language data for the questions and importantly for the teacher feedback and construct experiments in this real setting.
# 3 DATASET AND TASKS
We begin by describing the data setup we use. In our first set of experiments we build a simulator as a testbed for learning algorithms. In our second set of experiments we use Mechanical Turk to provide real human teachers giving feedback.
# 3.1 SIMULATOR | 1611.09823#8 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 8 | 2.3 CHILDREN'S BOOK TEST
The Children's Book Test (CBT) (Hill et al., 2016) was collected using a process similar to that of CNN/Daily Mail. Text passages are 20-sentence excerpts from children's books available through Project Gutenberg; questions are generated by deleting a single word in the next (i.e., 21st) sentence. Consequently, CBT evaluates word prediction based on context. It is a comprehension task insofar as comprehension is likely necessary for this prediction, but comprehension may be insufficient and other mechanisms may be more important.
2.4 BOOKTEST
Bajgar et al. (2016) convincingly argue that, because existing datasets are not large enough, we have yet to reach the full capacity of existing comprehension models. As a remedy they present BookTest. This is an extension to the named-entity and common-noun strata of CBT that increases their size by over 60 times. Bajgar et al. (2016) demonstrate that training on the augmented dataset yields a model (Kadlec et al., 2016) that matches human performance on CBT. This is impressive and suggests that much is to be gained from more data, but we repeat our concerns about the relevance of story prediction as a comprehension task. We also wish to encourage more efficient learning from less data.
# 2.5 SQUAD | 1611.09830#8 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 9 | # 3.1 SIMULATOR
The simulator adapts two existing fixed datasets to our online setting. Following Weston (2016), we use (i) the single supporting fact problem from the bAbI datasets (Weston et al., 2015) which consists of 1000 short stories from a simulated world interspersed with questions; and (ii) the WikiMovies dataset (Weston et al., 2015) which consists of roughly 100k (templated) questions over 75k entities based on questions with answers in the open movie database (OMDb). Each dialogue takes place between a teacher, scripted by the simulation, and a bot. The communication protocol is as follows: (1) the teacher first asks a question from the fixed set of questions existing in the dataset, (2) the bot answers the question, and finally (3) the teacher gives feedback on the bot's answer (a minimal code sketch of this loop follows this record). | 1611.09823#9 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
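The chunk above (1611.09823#9) describes the simulated teacher-bot protocol: the teacher asks a question from the dataset, the bot answers, and the teacher replies with feedback. Below is a minimal sketch of one such episode, using the "partial rewards" style of feedback described in the following chunks; the toy QA pairs, feedback templates, and 50% reward rate are illustrative stand-ins, not the authors' released simulator.

```python
import random

# Illustrative teacher-bot exchange: question -> bot answer -> teacher feedback.
# `qa_pairs` and the feedback templates are hypothetical placeholders.
qa_pairs = [
    {"question": "Where is Mary?", "answer": "kitchen"},
    {"question": "What films are about Hawaii?", "answer": "50 First Dates"},
]

POSITIVE_TEMPLATES = ["Yes, that's right!", "Correct!", "Yes!"]

def teacher_feedback(bot_answer, gold_answer, reward_rate=0.5):
    """Return (feedback_text, reward); reward is given only part of the time."""
    if bot_answer == gold_answer:
        reward = 1 if random.random() < reward_rate else 0
        return random.choice(POSITIVE_TEMPLATES), reward
    return f"No, the answer is {gold_answer}.", 0

def run_episode(policy):
    """One dialogue episode between the scripted teacher and the bot."""
    qa = random.choice(qa_pairs)
    bot_answer = policy(qa["question"])
    feedback, reward = teacher_feedback(bot_answer, qa["answer"])
    return qa["question"], bot_answer, feedback, reward

if __name__ == "__main__":
    random_policy = lambda q: random.choice(["kitchen", "bathroom", "50 First Dates", "Drama"])
    print(run_episode(random_policy))
```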
1611.09830 | 9 | # 2.5 SQUAD
The comprehension dataset most closely related to NewsQA is SQuAD (Rajpurkar et al., 2016). It consists of natural language questions posed by crowdworkers on paragraphs from high-PageRank Wikipedia articles. As in NewsQA, each answer consists of a span of text from the related paragraph and no candidates are provided. Despite the effort of manual labelling, SQuAD's size is significant and amenable to deep learning approaches: 107,785 question-answer pairs based on 536 articles.
Although SQuAD is a more realistic and more challenging comprehension task than the other large-scale MC datasets, machine performance has rapidly improved towards that of humans in recent months. The SQuAD authors measured human accuracy at 0.905 in F1 (we measured human F1 at 0.807 using a different methodology); at the time of writing, the strongest published model to date achieves 0.778 F1 (Wang et al., 2016). This suggests that new, more difficult alternatives like NewsQA could further push the development of more intelligent MC systems. (A sketch of span-level F1 scoring follows this record.)
# 3 COLLECTION METHODOLOGY
We collected NewsQA through a four-stage process: article curation, question sourcing, answer sourcing, and validation. We also applied a post-processing step with answer agreement consolidation and span merging to enhance the usability of the dataset. These steps are detailed below. | 1611.09830#9 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
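The preceding chunk (1611.09830#9) compares human and model performance in F1 over answer spans. The sketch below computes the token-overlap F1 commonly used for span-based QA; the exact normalization behind the quoted 0.905/0.807/0.778 figures is not given in this excerpt, so this is only an illustrative approximation.

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """Token-overlap F1 between a predicted span and a gold span.

    Mirrors the SQuAD-style metric for span QA; whitespace tokenization and
    lowercasing are assumptions, since the excerpt does not give the script.
    """
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    if not pred_tokens or not gold_tokens:
        return float(pred_tokens == gold_tokens)
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the Pew Hispanic Center", "Pew Hispanic Center"))  # ~0.857
```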
1611.09823 | 10 | We follow the paradigm defined in (Weston, 2016) where the teacher's feedback takes the form of either textual feedback, a numerical reward, or both, depending on the task. For each dataset, there are ten tasks, which are further described in Sec. A and illustrated in Figure 5 of the appendix. We also refer the readers to (Weston, 2016) for more detailed descriptions and the motivation behind these tasks. In the main text of this paper we only consider Task 6 ("partial feedback"): the teacher replies with positive textual feedback (6 possible templates) when the bot answers correctly, and positive reward is given only 50% of the time. When the bot is wrong, the teacher gives textual feedback containing the answer. Descriptions and experiments on the other tasks are detailed in the appendix. Example dialogues are given in Figure 1.
The difference between our simulation and the original fixed tasks of Weston (2016) is that models are trained on-the-fly. After receiving feedback and/or rewards, we update the model (policy) and then deploy it to collect the teacher's feedback in the next episode or batch. This means the model's policy affects the data which is used to train it, which was not the case in the previous work (a sketch of this batch update loop follows this record). | 1611.09823#10 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
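The chunk above (1611.09823#10) notes that, unlike the fixed datasets of earlier work, the policy here is updated after each batch of deployed dialogues, so the policy shapes its own training data. The sketch below shows that outer loop with a toy linear policy and a REINFORCE-style update on binary rewards; the features, sizes, and learning rate are placeholders, not the paper's MemN2N setup.

```python
import numpy as np

# Toy version of the deploy-collect-update loop: answer a batch of questions
# with the current policy, receive binary rewards, then update the policy.
rng = np.random.default_rng(0)
num_questions, num_answers, dim = 20, 5, 8
X = rng.normal(size=(num_questions, dim))              # toy question features
gold = rng.integers(num_answers, size=num_questions)   # toy gold answers
W = np.zeros((dim, num_answers))                        # policy parameters

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

for iteration in range(30):                  # each pass = one deployed batch
    probs = softmax(X @ W)                   # policy over candidate answers
    actions = np.array([rng.choice(num_answers, p=p) for p in probs])
    rewards = (actions == gold).astype(float)    # binary teacher reward
    # REINFORCE-style update: raise log-probability of rewarded answers.
    grad = np.zeros_like(W)
    for x, a, r, p in zip(X, actions, rewards, probs):
        one_hot = np.zeros(num_answers); one_hot[a] = 1.0
        grad += r * np.outer(x, one_hot - p)
    W += 0.5 * grad / num_questions
    if iteration % 10 == 0:
        print(f"iter {iteration}: batch accuracy {rewards.mean():.2f}")
```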
1611.09830 | 10 | 3.1 ARTICLE CURATION
We retrieve articles from CNN using the script created by Hermann et al. (2015) for CNN/Daily Mail. From the returned set of 90,266 articles, we select 12,744 uniformly at random. These cover a wide range of topics that includes politics, economics, and current events. Articles are partitioned at random into a training set (90%), a development set (5%), and a test set (5%).
3.2 QUESTION SOURCING
It was important to us to collect challenging questions that could not be answered using straightforward word- or context-matching. Like Richardson et al. (2013) we want to encourage reasoning in comprehension models. We are also interested in questions that, in some sense, model human curiosity and reflect actual human use-cases of information seeking. Along a similar line, we consider it an important (though as yet overlooked) capacity of a comprehension model to recognize when
given information is inadequate, so we are also interested in questions that may not have sufficient evidence in the text. Our question sourcing stage was designed to solicit questions of this nature, and deliberately separated from the answer sourcing stage for the same reason. | 1611.09830#10 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 11 | Figure 1: Simulator sample dialogues for the bAbI task (left) and WikiMovies (right). We consider 10 different tasks following Weston (2016) but here describe only Task 6; other tasks are detailed in the appendix. The teacher's dialogue is in black and the bot is in red. (+) indicates receiving positive reward, given only 50% of the time even when correct.
bAbI Task 6: Partial Rewards
Mary went to the hallway.
John moved to the bathroom.
Mary travelled to the kitchen.
Where is Mary? (bot: kitchen)
Yes, that's right!
Where is John? (bot: bathroom)
Yes, that's correct! (+)
WikiMovies Task 6: Partial Rewards
What films are about Hawaii? (bot: 50 First Dates)
Correct!
Who acted in Licence to Kill? (bot: Billy Madison)
No, the answer is Timothy Dalton.
What genre is Saratoga Trunk in? (bot: Drama)
Yes! (+)
...
Figure 2: Human Dialogue from Mechanical Turk (based on WikiMovies) The human teacherâs dialogue is in black and the bot is in red. We show examples where the bot answers correctly (left) and incorrectly (right). Real humans provide more variability of language in both questions and textual feedback than in the simulator setup (cf. Figure 1). | 1611.09823#11 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 11 | Questioners (a distinct set of crowdworkers) see only a news article's headline and its summary points (also available from CNN); they do not see the full article itself. They are asked to formulate a question from this incomplete information. This encourages curiosity about the contents of the full article and prevents questions that are simple reformulations of sentences in the text. It also increases the likelihood of questions whose answers do not exist in the text. We reject questions that have significant word overlap with the summary points to ensure that crowdworkers do not treat the summaries as mini-articles, and further discouraged this in the instructions (a sketch of such an overlap filter follows this record). During collection each Questioner is solicited for up to three questions about an article. They are provided with positive and negative examples to prompt and guide them (detailed instructions are shown in Figure 3).
# 3.3 ANSWER SOURCING | 1611.09830#11 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
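The chunk above (1611.09830#11) rejects candidate questions that overlap too heavily with the article's summary points. A small sketch of such a filter follows; the 0.5 overlap threshold and the regex tokenizer are hypothetical choices, since the excerpt only says that "significant" overlap leads to rejection.

```python
import re

def word_overlap(question: str, summary: str) -> float:
    """Fraction of question word types that also appear in the summary points."""
    tokenize = lambda s: set(re.findall(r"[a-z0-9']+", s.lower()))
    q_tokens, s_tokens = tokenize(question), tokenize(summary)
    if not q_tokens:
        return 0.0
    return len(q_tokens & s_tokens) / len(q_tokens)

def accept_question(question: str, summary: str, max_overlap: float = 0.5) -> bool:
    # The 0.5 threshold is an illustrative assumption, not the paper's rule.
    return word_overlap(question, summary) <= max_overlap

summary = "Police arrest three suspects after downtown bank robbery"
print(accept_question("How much money was taken?", summary))                       # True
print(accept_question("Who did police arrest after the bank robbery?", summary))   # False
```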
1611.09823 | 12 | Sample dialogues with correct answers from the bot: Who wrote the Linguini Incident ? Richard Shepard is one of the right answers here. What year did The World Before Her premiere? Yep! Thatâs when it came out. Which are the movie genres of Mystery of the 13th Guest? Right, it can also be categorized as a mystery. Sample dialogues with incorrect answers from the bot: What are some movies about a supermarket ? There were many options and this one was not among them. Which are the genres of the ï¬lm Juwanna Mann ? That is incorrect. Remember the question asked for a genre not name. Who wrote the story of movie Coraline ? fantasy Thatâs a movie genre and not the name of the writer. A better answer would of been Henry Selick or Neil Gaiman.
3.2 MECHANICAL TURK EXPERIMENTS | 1611.09823#12 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 12 | # 3.3 ANSWER SOURCING
A second set of crowdworkers (Answerers) provide answers. Although this separation of question and answer increases the overall cognitive load, we hypothesized that unburdening Questioners in this way would encourage more complex questions. Answerers receive a full article along with a crowdsourced question and are tasked with determining the answer. They may also reject the question as nonsensical, or select the null answer if the article contains insufficient information. Answers are submitted by clicking on and highlighting words in the article, while instructions encourage the set of answer words to consist of a single continuous span (again, we give an example prompt in the Appendix). For each question we solicit answers from multiple crowdworkers (avg. 2.73) with the aim of achieving agreement between at least two Answerers.
3.4 VALIDATION | 1611.09830#12 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 13 | 3.2 MECHANICAL TURK EXPERIMENTS
Finally, we extended WikiMovies using Mechanical Turk so that real human teachers are giving feedback rather than using a simulation. As both the questions and feedback are templated in the simulation, they are now both replaced with natural human utterances. Rather than having a set of simulated tasks, we have only one task, and we gave instructions to the teachers that they could give feedback as they see fit. The exact instructions given to the Turkers are given in Appendix B. In general, each independent response contains feedback like (i) positive or negative sentences; or (ii) a phrase containing the answer or (iii) a hint, which are similar to setups defined in the simulator. However, some human responses cannot be so easily categorized, and the lexical variability is much larger in human responses. Some examples of the collected data are given in Figure 2.
# 4 METHODS
4.1 MODEL ARCHITECTURE
In our experiments, we used variants of the End-to-End Memory Network (MemN2N) model (Sukhbaatar et al., 2015) as our underlying architecture for learning from dialogue. | 1611.09823#13 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 13 | 3.4 VALIDATION
Crowdsourcing is a powerful tool but it is not without peril (collection glitches; uninterested or malicious workers). To obtain a dataset of the highest possible quality we use a validation process that mitigates some of these issues. In validation, a third set of crowdworkers sees the full article, a question, and the set of unique answers to that question. We task these workers with choosing the best answer from the candidate set or rejecting all answers. Each article-question pair is validated by an average of 2.48 crowdworkers. Validation was used on those questions without answer-agreement after the previous stage, amounting to 43.2% of all questions.
3.5 ANSWER MARKING AND CLEANUP
After validation, 86.0% of all questions in NewsQA have answers agreed upon by at least two separate crowdworkers, either at the initial answer sourcing stage or in the top-answer selection. This improves the dataset's quality. We choose to include the questions without agreed answers in the corpus also, but they are specially marked. Such questions could be treated as having the null answer and used to train models that are aware of poorly posed questions. | 1611.09830#13 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 14 | The input to MemN2N is the last utterance of the dialogue history x as well as a set of memories (context) C=c1, c2, ..., cN . The memory C encodes both short-term memory, e.g., dialogue histories between the bot and the teacher, and long-term memories, e.g., the knowledge base facts that the bot has access to. Given the input x and C, the goal is to produce an output/label a.
In the first step, the query x is transformed to a vector representation u0 by summing up its constituent word embeddings: u0 = Ax. The input x is a bag-of-words vector and A is the d × V word embedding matrix where d denotes the embedding dimension and V denotes the vocabulary size. Each memory ci is similarly transformed to a vector mi. The model will read information from the memory by comparing input representation u0 with memory vectors mi using softmax weights:
o1 = Σ_i p1_i mi, where p1_i = softmax(u0^T mi) (1). (A NumPy sketch of this read step follows this record.) | 1611.09823#14 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
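Equation (1) in the chunk above (1611.09823#14) defines the MemN2N read step: a bag-of-words query embedding u0 = Ax attends over memory vectors with softmax weights and returns their weighted sum. A minimal NumPy sketch follows; the toy vocabulary and random embedding matrix A are illustrative assumptions, whereas the paper learns these parameters end to end.

```python
import numpy as np

# Single-hop memory read: u0 = A x, p_i = softmax(u0^T m_i), o1 = sum_i p_i m_i.
rng = np.random.default_rng(0)
vocab = {w: i for i, w in enumerate(
    "where is mary john kitchen bathroom went moved to the hallway".split())}
d, V = 16, len(vocab)
A = rng.normal(scale=0.1, size=(d, V))      # word embedding matrix (d x V), random here

def bow(sentence):
    """Bag-of-words count vector over the toy vocabulary."""
    x = np.zeros(V)
    for w in sentence.lower().split():
        if w in vocab:
            x[vocab[w]] += 1.0
    return x

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

memories = ["Mary went to the hallway", "John moved to the bathroom",
            "Mary travelled to the kitchen"]
m = np.stack([A @ bow(c) for c in memories])   # memory vectors m_i
u0 = A @ bow("Where is Mary")                  # query embedding
p = softmax(m @ u0)                            # attention over memories
o1 = p @ m                                     # returned memory vector
print("attention weights:", np.round(p, 3))
```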
1611.09830 | 14 | As a final cleanup step we combine answer spans that are less than 3 words apart (punctuation is discounted); a sketch of this merging rule follows this record. We find that 5.68% of answers consist of multiple spans, while 71.3% of multi-spans are within the 3-word threshold. Looking more closely at the data reveals that the multi-span answers often represent lists. These may present an interesting challenge for comprehension models moving forward.
# 4 DATA ANALYSIS
We provide a thorough analysis of NewsQA to demonstrate its challenge and its usefulness as a machine comprehension benchmark. The analysis focuses on the types of answers that appear in the dataset and the various forms of reasoning required to solve it.1
1Additional statistics are available at https://datasets.maluuba.com/NewsQA/stats.
Table 1: The variety of answer types appearing in NewsQA, with proportion statistics and examples. | 1611.09830#14 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
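The chunk above (1611.09830#14) merges answer spans that are less than three words apart. Below is a sketch of that consolidation rule over token offsets; the exact tokenization and the handling of discounted punctuation are assumptions, since they are not fully specified in this excerpt.

```python
def merge_close_spans(spans, max_gap=3):
    """Merge answer spans whose gap is smaller than `max_gap` tokens.

    `spans` are (start, end) token offsets with `end` exclusive. Punctuation
    discounting is assumed to happen before these offsets are computed.
    """
    if not spans:
        return []
    spans = sorted(spans)
    merged = [list(spans[0])]
    for start, end in spans[1:]:
        if start - merged[-1][1] < max_gap:
            merged[-1][1] = max(merged[-1][1], end)   # extend the previous span
        else:
            merged.append([start, end])
    return [tuple(s) for s in merged]

print(merge_close_spans([(10, 12), (13, 15), (40, 42)]))  # [(10, 15), (40, 42)]
```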
1611.09823 | 15 | o1 = Σ_i p1_i mi, where p1_i = softmax(u0^T mi) (1)
This process selects memories relevant to the last utterance x, i.e., the memories with large values of p1_i. The returned memory vector o1 is the weighted sum of memory vectors. This process can be repeated to query the memory N times (so called "hops") by adding on to the original input, u1 = o1 + u0, or to the previous state, un = on + un-1, and then using un to query the memories again.
In the end, uN is input to a softmax function for the final prediction:
a_hat = softmax(uN^T y1, uN^T y2, ..., uN^T yL) (2)
where y1, . . . , yL denote the set of candidate answers. If the answer is a word, yi is the corresponding word embedding. If the answer is a sentence, yi is the embedding for the sentence achieved in the same way that we obtain embeddings for query x and memory C.
The standard way MemN2N is trained is via a cross entropy criterion on known input-output pairs, which we refer to as supervised or imitation learning. As our work is in a reinforcement learning setup where our model must make predictions to learn, this procedure will not work, so we instead consider reinforcement learning algorithms which we describe next.
4.2 REINFORCEMENT LEARNING | 1611.09823#15 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
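The chunk above (1611.09823#15) stacks the read step into N "hops" (un = on + un-1) and scores candidate answers with Equation (2). The sketch below chains those two pieces with random placeholder parameters; a trained model would learn the memory, query, and answer embeddings rather than sampling them.

```python
import numpy as np

# Multi-hop controller plus the final prediction of Equation (2):
# a_hat = softmax(u_N^T y_1, ..., u_N^T y_L) over candidate answers.
rng = np.random.default_rng(1)
d, num_memories, num_candidates, hops = 16, 5, 4, 3
m = rng.normal(size=(num_memories, d))        # memory vectors m_i (random placeholders)
y = rng.normal(size=(num_candidates, d))      # candidate answer embeddings y_j
u = rng.normal(size=d)                        # u_0: embedded query

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for _ in range(hops):                         # N "hops" over the memory
    p = softmax(m @ u)                        # attention weights
    o = p @ m                                 # read vector o_n
    u = o + u                                 # u_n = o_n + u_{n-1}

answer_probs = softmax(y @ u)                 # Equation (2)
print("predicted answer:", int(answer_probs.argmax()), np.round(answer_probs, 3))
```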
1611.09830 | 15 | Table 1: The variety of answer types appearing in NewsQA, with proportion statistics and examples.
Answer type: example (proportion %)
Date/Time: "March 12, 2008" (2.9)
Numeric: "24.3 million" (9.8)
Person: "Ludwig van Beethoven" (14.8)
Location: "Torrance, California" (7.8)
Other Entity: "Pew Hispanic Center" (5.8)
Common Noun Phr.: "federal prosecutors" (22.2)
Adjective Phr.: "5-hour" (1.9)
Verb Phr.: "suffered minor damage" (1.4)
Clause Phr.: "trampling on human rights" (18.3)
Prepositional Phr.: "in the attack" (3.8)
Other: "nearly half" (11.2)
4.1 ANSWER TYPES
Following Rajpurkar et al. (2016), we categorize answers based on their linguistic type (see Table 1). This categorization relies on Stanford CoreNLP to generate constituency parses, POS tags, and NER tags for answer spans (see Rajpurkar et al. (2016) for more details). From the table we see that the majority of answers (22.2%) are common noun phrases. Thereafter, answers are fairly evenly spread among the clause phrase (18.3%), person (14.8%), numeric (9.8%), and other (11.2%) types. Clearly, answers in NewsQA are linguistically diverse. | 1611.09830#15 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 16 | 4.2 REINFORCEMENT LEARNING
In this section, we present the algorithms we used to train MemN2N in an online fashion. Our learning setup can be cast as a particular form of Reinforcement Learning. The policy is implemented by the MemN2N model. The state is the dialogue history. The action space corresponds to the set of answers the MemN2N has to choose from to answer the teacher's question. In our setting, the policy chooses only one action for each episode. The reward is either 1 (a reward from the teacher when the bot answers correctly) or 0 otherwise. Note that in our experiments, a reward equal to 0 might mean that the answer is incorrect or that the positive reward is simply missing. The overall setup is closest to standard contextual bandits, except that the reward is binary.
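For concreteness, a toy sketch of one such bandit-style episode is given below; the dictionary-based policy and the simulated teacher answer are illustrative stand-ins, not components of the paper's system.

```python
import random

def run_episode(policy_scores, teacher_answer, answer_set, epsilon=0.2):
    """One bandit-style episode: pick a single answer for the current state, get a 0/1 reward."""
    if random.random() < epsilon:
        action = random.choice(answer_set)                          # exploration
    else:
        action = max(answer_set, key=lambda a: policy_scores.get(a, 0.0))
    reward = 1 if action == teacher_answer else 0                   # binary reward from the teacher
    return action, reward

answers = ["Paris", "London", "Rome"]
scores = {"Paris": 0.7, "London": 0.2, "Rome": 0.1}                 # model probabilities for this state
print(run_episode(scores, "Paris", answers))
```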
When working with real human dialogues, e.g. collecting data via Mechanical Turk, it is easier to set up a task whereby a bot is deployed to respond to a large batch of utterances, as opposed to a single one. The latter would be more difficult to manage and scale up since it would require some form of synchronization between the model replicas interacting with each human. | 1611.09823#16 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 16 | The proportions in Table 1 only account for cases when an answer span exists. The complement of this set comprises questions with an agreed null answer (9.5% of the full corpus) and answers without agreement after validation (4.5% of the full corpus).
4.2 REASONING TYPES
The forms of reasoning required to solve NewsQA directly influence the abilities that models will learn from the dataset. We stratified reasoning types using a variation on the taxonomy presented by Chen et al. (2016) in their analysis of the CNN/Daily Mail dataset. Types are as follows, in ascending order of difficulty:
1. Word Matching: Important words in the question exactly match words in the immediate context of an answer span, such that a keyword search algorithm could perform well on this subset.
2. Paraphrasing: A single sentence in the article entails or paraphrases the question. Paraphrase recognition may require synonymy and world knowledge.
3. Inference: The answer must be inferred from incomplete information in the article or by recognizing conceptual overlap. This typically draws on world knowledge.
4. Synthesis: The answer can only be inferred by synthesizing information distributed across multiple sentences.
5. Ambiguous/Insufficient: The question has no answer or no unique answer in the article. | 1611.09830#16 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 17 | This is comparable to the real-world situation where a teacher can either ask a student a single question and give feedback right away, or set up a test that contains many questions and grade all of them at once. Only after the learner completes all questions can it hear feedback from the teacher.
We use batch size to refer to how many dialogue episodes the current model is used to collect feedback before updating its parameters. In the Reinforcement Learning literature, batch size is related to off-policy learning since the MemN2N policy is trained using episodes collected with a stale version of the model. Our experiments show that our model and base algorithms are very robust to the choice of batch size, alleviating the need for correction terms in the learning algorithm (Bottou et al., 2013).
We consider two strategies: (i) online batch size, whereby the target policy is updated after doing a single pass over each batch (a batch size of 1 reverts to the usual on-policy online learning); and (ii) dataset-sized batch, whereby training is continued to convergence on the batch which is the size of the dataset, and then the target policy is updated with the new model, and a new batch is drawn and the procedure iterates. These strategies can be applied to all the methods we use, described below.
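A schematic of the two regimes might look like the following; `collect` and `update` are abstract hooks standing in for feedback collection and whichever learning rule is used (RBI, REINFORCE, or FP), so this is a structural sketch rather than working training code.

```python
def train_online_batches(policy, collect, update, n_rounds, batch_size):
    """(i) Online batches: a single pass of updates after each small batch of episodes."""
    for _ in range(n_rounds):
        batch = collect(policy, batch_size)            # the (stale) deployed policy gathers feedback
        update(policy, batch, epochs=1)                 # one pass over the batch, then redeploy

def train_dataset_batches(policy, collect, update, n_iterations, dataset_size):
    """(ii) Dataset-sized batches: train to convergence on each batch before redeploying."""
    for _ in range(n_iterations):
        batch = collect(policy, dataset_size)
        update(policy, batch, until_convergence=True)
```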
Next, we discuss the learning algorithms we considered in this work.
4.2.1 REWARD-BASED IMITATION (RBI) | 1611.09823#17 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 17 | 5. Ambiguous/Insufficient: The question has no answer or no unique answer in the article.
For both NewsQA and SQuAD, we manually labelled 1,000 examples (drawn randomly from the respective development sets) according to these types and compiled the results in Table 2. Some examples fall into more than one category, in which case we defaulted to the more challenging type. We can see from the table that word matching, the easiest type, makes up the largest subset in both datasets (32.7% for NewsQA and 39.8% for SQuAD). Paraphrasing constitutes a larger proportion in SQuAD than in NewsQA (34.3% vs 27.0%), possibly a result from the explicit encouragement of lexical variety in SQuAD question sourcing. However, NewsQA significantly outnumbers SQuAD on the distribution of the more difficult forms of reasoning: synthesis and inference make up a combined 33.9% of the data in contrast to 20.5% in SQuAD.
Table 2: Reasoning mechanisms needed to answer questions. For each we show an example question with the sentence that contains the answer span. Words relevant to the reasoning type are in bold. The corresponding proportion in the human-evaluated subset of both NewsQA and SQuAD (1,000 samples each) is also given. | 1611.09830#17 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 18 | Next, we discuss the learning algorithms we considered in this work.
4.2.1 REWARD-BASED IMITATION (RBI)
The simplest algorithm we first consider is the one employed in Weston (2016). RBI relies on positive rewards provided by the teacher. It is trained to imitate the correct behavior of the learner, i.e., learning to predict the correct answers (with reward 1) at training time and disregarding the other ones. This is implemented by using a MemN2N that maps a dialogue input to a prediction, i.e. using the cross entropy criterion on the positively rewarded subset of the data.
In order to make this work in the online setting which requires exploration to find the correct answer, we employ an ε-greedy strategy: the learner makes a prediction using its own model (the answer assigned the highest probability) with probability 1 − ε, otherwise it picks a random answer with probability ε. The teacher will then give a reward of +1 if the answer is correct, otherwise 0. The bot will then learn to imitate the correct answers: predicting the correct answers while ignoring the incorrect ones.
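A minimal sketch of one RBI round under these assumptions is shown below; the dictionary of per-question answer probabilities stands in for the MemN2N, and the toy episodes are illustrative.

```python
import random

def rbi_round(model_probs, episodes, epsilon=0.25):
    """Reward-Based Imitation: act epsilon-greedily, keep only positively rewarded answers."""
    train_pairs = []
    for question, candidates, correct in episodes:
        probs = model_probs.get(question, {})
        if random.random() < epsilon or not probs:
            answer = random.choice(candidates)                          # exploration
        else:
            answer = max(candidates, key=lambda a: probs.get(a, 0.0))   # exploitation
        if answer == correct:                                           # reward = 1
            train_pairs.append((question, answer))                      # imitate correct behaviour only
    return train_pairs

episodes = [("what film did Tom Hanks star in?", ["Forrest Gump", "Blade Runner"], "Forrest Gump")] * 5
print(len(rbi_round({}, episodes)), "positively rewarded examples collected")
```

The returned pairs would then be used as the cross-entropy training set for the next model update.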
# 4.2.2 REINFORCE | 1611.09823#18 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 18 | Reasoning            Example                                                        NewsQA (%)   SQuAD (%)
Word Matching            Q: When were the findings published?                                32.7         39.8
                         S: Both sets of research findings were published Thursday...
Paraphrasing             Q: Who is the struggle between in Rwanda?                           27.0         34.3
                         S: The struggle pits ethnic Tutsis, supported by Rwanda, against ethnic Hutu, backed by Congo.
Inference                Q: Who drew inspiration from presidents?                            13.2          8.6
                         S: Rudy Ruiz says the lives of US presidents can make them positive role models for students.
Synthesis                Q: Where is Brittanee Drexel from?                                  20.7         11.9
                         S: The mother of a 17-year-old Rochester, New York high school student ... says she did not give her daughter permission to go on the trip. Brittanee Marie Drexel's mom says...
Ambiguous/Insufficient   Q: Whose mother is moving to the White House?                        6.4          5.4
                         S: ... Barack Obama's mother-in-law, Marian Robinson, will join the Obamas at the family's private quarters at 1600 Pennsylvania Avenue. [Michelle is never mentioned]
# 5 BASELINE MODELS | 1611.09830#18 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 19 | # 4.2.2 REINFORCE
The second algorithm we use is the REINFORCE algorithm (Williams, 1992), which maximizes the expected cumulative reward of the episode, in our case the expected reward provided by the teacher. The expectation is approximated by sampling an answer from the model distribution. Let a denote the answer that the learner gives, p(a) denote the probability that the current model assigns to a, r denote the teacher's reward, and J(θ) denote the expectation of the reward. We have:
\nabla J(\theta) \approx \nabla \log p(a) \, [r - b] \qquad (3)
where b is the baseline value, which is estimated using a linear regression model that takes as input the output of the memory network after the last hop, and outputs a scalar b denoting the estimation of the future reward. The baseline model is trained by minimizing the mean squared loss between the estimated reward b and actual reward r, ||r − b||^2. We refer the readers to (Ranzato et al., 2015; Zaremba & Sutskever, 2015) for more details. The baseline estimator model is independent from the policy model, and its error is not backpropagated through the policy model. | 1611.09823#19 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 19 | # 5 BASELINE MODELS
We test the performance of three comprehension systems on NewsQA: human data analysts and two neural models. The first neural model is the match-LSTM (mLSTM) system of Wang & Jiang (2016b). The second is a model of our own design that is similar but computationally cheaper. We describe these models below but omit the personal details of our analysts. Implementation details of the models are described in Appendix A.
# 5.1 MATCH-LSTM | 1611.09830#19 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 20 | The major difference between RBI and REINFORCE is that (i) the learner only tries to imitate correct behavior in RBI while in REINFORCE it also leverages the incorrect behavior, and (ii) the learner explores using an ε-greedy strategy in RBI while in REINFORCE it uses the distribution over actions produced by the model itself.
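To make the contrast concrete, the following NumPy sketch shows the two per-episode gradients for a plain softmax-over-answers policy; the linear scoring and treating the scores themselves as the parameters are simplifying assumptions for illustration, not the MemN2N used in the paper.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def rbi_gradient(scores, action, reward):
    """RBI: cross-entropy gradient on the chosen answer, used only when it was rewarded."""
    if reward == 0:
        return np.zeros_like(scores)      # unrewarded episodes are simply ignored
    g = -softmax(scores)
    g[action] += 1.0                      # d log p(action) / d scores
    return g

def reinforce_gradient(scores, action, reward, baseline):
    """REINFORCE: grad log p(action) scaled by (r - b); uses rewarded and unrewarded episodes."""
    g = -softmax(scores)
    g[action] += 1.0
    return (reward - baseline) * g

scores = np.array([1.0, 0.2, -0.5])       # unnormalised answer scores for one dialogue state
print(rbi_gradient(scores, action=1, reward=1).round(3))
print(reinforce_gradient(scores, action=1, reward=0, baseline=0.4).round(3))
```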
4.2.3 FORWARD PREDICTION (FP)
FP (Weston, 2016) handles the situation where a numerical reward for a bot's answer is not available, meaning that there are no +1 or 0 labels available after a student's utterance. Instead, the model assumes the teacher gives textual feedback t to the bot's answer, taking the form of a dialogue utterance, and the model tries to predict this instead. Suppose that x denotes the teacher's question and C = c_1, c_2, ..., c_N denotes the dialogue history as before. In FP, the model first maps the teacher's initial question x and dialogue history C to a vector representation u using a memory network with multiple hops. Then the model will perform another hop of attention over all possible student's answers in A, with an additional part that incorporates the information of which candidate (i.e., a) was actually selected in the dialogue: | 1611.09823#20 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 20 | # 5.1 MATCH-LSTM
We selected the mLSTM model because it is straightforward to implement and offers strong, though not state-of-the-art, performance on the similar SQuAD dataset. There are three stages involved in the mLSTM. First, LSTM networks encode the document and question (represented by GloVe word embeddings (Pennington et al., 2014)) as sequences of hidden states. Second, an mLSTM network (Wang & Jiang, 2016a) compares the document encodings with the question encodings. This network processes the document sequentially and at each token uses an attention mechanism to obtain a weighted vector representation of the question; the weighted combination is concatenated with the encoding of the current token and fed into a standard LSTM. Finally, a Pointer Network uses the hidden states of the mLSTM to select the boundaries of the answer span. We refer the reader to Wang & Jiang (2016a;b) for full details.
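A rough sketch of the matching step for a single document token is shown below; plain dot-product attention stands in for the parametrised attention of Wang & Jiang (2016a), so this is a simplification rather than their exact formulation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def match_step(doc_state, question_states):
    """Question-aware matching for one document token: attend, summarise, concatenate."""
    alpha = softmax(question_states @ doc_state)    # attention weights over question tokens
    q_summary = alpha @ question_states             # weighted question representation
    return np.concatenate([doc_state, q_summary])   # input to the matching LSTM at this step

rng = np.random.default_rng(1)
print(match_step(rng.normal(size=6), rng.normal(size=(4, 6))).shape)   # (12,)
```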
5.2 THE BILINEAR ANNOTATION RE-ENCODING BOUNDARY (BARB) MODEL | 1611.09830#20 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 21 | p_{\hat{a}} = \mathrm{softmax}(u^\top y_{\hat{a}}), \qquad o = \sum_{\hat{a} \in A} p_{\hat{a}} \, (y_{\hat{a}} + \beta \cdot \mathbb{1}[\hat{a} = a]) \qquad (4)
where y_{\hat{a}} denotes the vector representation for the student's answer candidate \hat{a}. β is a (learned) d-dimensional vector to signify the actual action a that the student chooses. o is then combined with u to predict the teacher's feedback t using a softmax:
u_1 = o + u, \qquad \hat{t} = \mathrm{softmax}(u_1^\top x_{r_1}, \ldots, u_1^\top x_{r_N}) \qquad (5)
where x_{r_i} denotes the embedding for the i-th response. In the online setting, the teacher will give textual feedback, and the learner needs to update its model using the feedback. It was shown in Weston (2016) that in an off-line setting this procedure can work either on its own, or in conjunction with a method that uses numerical rewards as well for improved performance. In the online setting, we consider two simple extensions:
• ε-greedy exploration: with probability ε the student will give a random answer, and with probability 1 − ε it will give the answer that its model assigns the largest probability. This method enables the model to explore the space of actions and to potentially discover correct answers. | 1611.09823#21 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 21 | 5.2 THE BILINEAR ANNOTATION RE-ENCODING BOUNDARY (BARB) MODEL
The match-LSTM is computationally intensive since it computes an attention over the entire question at each document token in the recurrence. To facilitate faster experimentation with NewsQA we developed a lighter-weight model (BARB) that achieves similar results on SQuAD.² Our model consists of four stages:
Encoding All words in the document and question are mapped to real-valued vectors using the GloVe embeddings W ∈ R^{|V|×d}. This yields d_1, . . . , d_n ∈ R^d and q_1, . . . , q_m ∈ R^d. A bidirectional GRU network (Bahdanau et al., 2015) encodes d_i into contextual states h_i ∈ R^{D_1} for the document. The same encoder is applied to q_j to derive contextual states k_j ∈ R^{D_1} for the question.³
²With the configurations for the results reported in Section 6.2, one epoch of training on NewsQA takes about 3.9k seconds for BARB and 8.1k seconds for mLSTM.
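A toy sketch of this encoding stage follows; a plain tanh RNN stands in for the GRU cell and random vectors stand in for GloVe embeddings, so it illustrates only the bidirectional wiring, not the exact cell.

```python
import numpy as np

def rnn_states(X, W, U, b):
    """Plain tanh RNN over a (T, d) sequence; a stand-in for the GRU cell."""
    h, out = np.zeros(U.shape[0]), []
    for x in X:
        h = np.tanh(W @ x + U @ h + b)
        out.append(h)
    return np.stack(out)

def bidirectional_encode(X, fw_params, bw_params):
    """Concatenate forward and backward states into D1-dimensional contextual states."""
    fw = rnn_states(X, *fw_params)
    bw = rnn_states(X[::-1], *bw_params)[::-1]
    return np.concatenate([fw, bw], axis=1)

rng = np.random.default_rng(0)
d, half = 6, 4                                     # embedding size and D1 / 2
params = lambda: (rng.normal(size=(half, d)), rng.normal(size=(half, half)), np.zeros(half))
doc = rng.normal(size=(9, d))                      # stand-ins for GloVe vectors d_1..d_n
print(bidirectional_encode(doc, params(), params()).shape)   # (9, 8): contextual states h_i
```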
Bilinear Annotation Next we compare the document and question encodings using a set of C bilinear transformations, | 1611.09830#21 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 22 | • data balancing: cluster the set of teacher responses t and then balance training across the clusters equally.² This is a type of experience replay (Mnih et al., 2013) but sampling with an evened distribution. Balancing stops part of the distribution dominating the learning. For example, if the model is not exposed to sufficient positive and negative feedback, and one class overly dominates, the learning process degenerates to a model that always predicts the same output regardless of its input.
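One way such balancing could be implemented is sketched below; grouping by exact response string stands in for the clustering described above, and the toy history is illustrative.

```python
import random
from collections import defaultdict

def balanced_sample(history, n_samples, rng=random):
    """Sample training dialogues so that each teacher-response cluster is equally likely.

    history: list of (dialogue, teacher_response) pairs seen so far; here responses are
    grouped by exact string, standing in for proper clustering on real data.
    """
    clusters = defaultdict(list)
    for dialogue, response in history:
        clusters[response].append((dialogue, response))
    keys = list(clusters)
    return [rng.choice(clusters[rng.choice(keys)]) for _ in range(n_samples)]

# toy usage: negative feedback dominates the raw history but not the balanced sample
history = [("dialogue %d" % i, "No, that's incorrect.") for i in range(90)]
history += [("dialogue %d" % i, "Yes, that's right!") for i in range(90, 100)]
sample = balanced_sample(history, 1000)
print(sum(1 for _, r in sample if r.startswith("Yes")) / len(sample))   # roughly 0.5
```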
# 5 EXPERIMENTS
Experiments are first conducted using our simulator, and then using Amazon Mechanical Turk with real human subjects taking the role of the teacher.³
5.1 SIMULATOR
Online Experiments In our first experiments, we considered both the bAbI and WikiMovies tasks and varied batch size, random exploration rate ε, and type of model. Figure 3 and Figure 4 show results on bAbI and WikiMovies (Task 6). Other tasks yield similar conclusions and are reported in the appendix.
Overall, we obtain the following conclusions:
⢠In general RBI and FP do work in a reinforcement learning setting, but can perform better with random exploration.
⢠In particular RBI can fail without exploration. RBI needs random noise for exploring labels otherwise it can get stuck predicting a subset of labels and fail. | 1611.09823#22 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 22 | Bilinear Annotation Next we compare the document and question encodings using a set of C bilinear transformations,
g_{ij} = h_i^\top T_{[1:C]} k_j, \qquad T_c \in \mathbb{R}^{D_1 \times D_1}, \quad g_{ij} \in \mathbb{R}^{C},
which we use to produce an (n × m × C)-dimensional tensor of annotation scores, G = [g_{ij}]. We take the maximum over the question-token (second) dimension and call the columns of the resulting matrix g_i ∈ R^C. We use this matrix as an annotation over the document word dimension. In contrast with the more typical multiplicative application of attention vectors, this annotation matrix is concatenated to the encoder RNN input in the re-encoding stage.
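A small NumPy sketch of this annotation step for one document/question pair (toy dimensions and random parameters, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, D1, C = 7, 4, 6, 3                     # doc length, question length, encoder size, channels
H = rng.normal(size=(n, D1))                 # document contextual states h_i
K = rng.normal(size=(m, D1))                 # question contextual states k_j
T = rng.normal(size=(C, D1, D1))             # the C bilinear transformations T_c

# g_{ijc} = h_i^T T_c k_j  ->  annotation tensor of shape (n, m, C)
G = np.einsum('id,cde,je->ijc', H, T, K)
g = G.max(axis=1)                            # max over question tokens: (n, C) annotation matrix
print(G.shape, g.shape)
```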
Re-encoding For each document word, the input of the re-encoding RNN (another biGRU) consists of three components: the document encodings h_i, the annotation vectors g_i, and a binary feature q_i indicating whether the document word appears in the question. The resulting vectors f_i = [h_i; g_i; q_i] are fed into the re-encoding RNN to produce D_2-dimensional encodings e_i for the boundary-pointing stage. | 1611.09830#22 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 23 | • In particular RBI can fail without exploration. RBI needs random noise for exploring labels otherwise it can get stuck predicting a subset of labels and fail.
²In the simulated data, because the responses are templates, this can be implemented by first randomly sampling the response, and then randomly sampling a story with that response; we keep the history of all stories seen from which we sample. For real data slightly more sophisticated clustering should be used.
³Code and data are available at https://github.com/facebook/MemNN/tree/master/HITL.
# Under review as a conference paper at ICLR 2017 | 1611.09823#23 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 23 | Boundary pointing Finally, we search for the boundaries of the answer span using a convolutional network (in a process similar to edge detection). Encodings e_i are arranged in a matrix E ∈ R^{D_2×n}. E is convolved with a bank of n_f filters, F_k^ℓ ∈ R^{D_2×w}, where w is the filter width, k indexes the different filters, and ℓ indexes the layer of the convolutional network. Each layer has the same number of filters of the same dimensions. We add a bias term and apply a nonlinearity (ReLU) following each convolution, with the result an (n_f × n)-dimensional matrix B_ℓ. | 1611.09830#23 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 24 | [Figure: test accuracy vs. training epoch curves; plot content not recoverable as text. Recoverable panel titles: Random Exploration for FP; Random Exploration for FP with Balancing; Comparing RBI, FP and REINFORCE; RBI (eps=0.6) Varying Batch Size; FP (eps=0.6) Varying Batch Size. Legends include batch sizes 20, 80, 320, 1000.] | 1611.09823#24 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 24 | We use two convolutional layers in the boundary-pointing stage. Given B_1 and B_2, the answer span's start- and end-location probabilities are computed using p(s) ∝ exp(v_s^T B_1 + b_s) and p(e) ∝ exp(v_e^T B_2 + b_e), respectively. We also concatenate p(s) to the input of the second convolutional layer (along the n_f-dimension) so as to condition the end-boundary pointing on the start-boundary. Vectors v_s, v_e ∈ R^{n_f} and scalars b_s, b_e ∈ R are trainable parameters. We also provide an intermediate level of "guidance" to the annotation mechanism by first reducing the feature dimension C in G with mean-pooling, then maximizing the softmax probabilities in the resulting (n-dimensional) vector corresponding to the answer word positions in each document. This auxiliary task is observed empirically to improve performance.
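A hedged sketch of the boundary-pointing convolutions for a single document is given below; toy dimensions are used, biases are omitted, and the concatenation of p(s) into the second layer is left out for brevity.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def conv_relu(E, filters, width):
    """Convolve (D, n) encodings with filters of shape (D, width); zero-pad to keep length n; ReLU."""
    D, n = E.shape
    pad_left = width // 2
    Ep = np.concatenate([np.zeros((D, pad_left)), E, np.zeros((D, width - 1 - pad_left))], axis=1)
    out = np.zeros((len(filters), n))
    for k, F in enumerate(filters):
        for t in range(n):
            out[k, t] = max(np.sum(F * Ep[:, t:t + width]), 0.0)
    return out

rng = np.random.default_rng(0)
D2, n, n_f, w = 6, 10, 4, 3
E = rng.normal(size=(D2, n))                                            # re-encoded document, columns e_i
B1 = conv_relu(E, [rng.normal(size=(D2, w)) for _ in range(n_f)], w)    # first layer, shape (n_f, n)
B2 = conv_relu(B1, [rng.normal(size=(n_f, w)) for _ in range(n_f)], w)  # second layer, shape (n_f, n)
v_s, v_e = rng.normal(size=n_f), rng.normal(size=n_f)
p_start = softmax(v_s @ B1)                                             # start-boundary distribution
p_end = softmax(v_e @ B2)                                               # end-boundary distribution
print(p_start.argmax(), p_end.argmax())
```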
# 6 EXPERIMENTS4
6.1 HUMAN EVALUATION | 1611.09830#24 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09830 | 25 | # 6 EXPERIMENTS4
6.1 HUMAN EVALUATION
We tested four English speakers on a total of 1,000 questions from the NewsQA development set. We used four performance measures: F1 and exact match (EM) scores (the same measures used by SQuAD), as well as BLEU and CIDEr5. BLEU is a precision-based metric popular in machine translation that uses a weighted average of variable length phrase matches (n-grams) against the reference sentence (Papineni et al., 2002). CIDEr was designed to correlate better with human judgements of sentence similarity, and uses tf-idf scores over n-grams (Vedantam et al., 2015).
As given in Table 4, humans averaged 0.694 F1 on NewsQA. The human EM scores are relatively low at 0.465. These lower scores are a reflection of the fact that, particularly in a dataset as complex as NewsQA, there are multiple ways to select semantically equivalent answers, e.g., "1996" versus "in 1996". Although these answers are equally correct they would be measured at 0.5 F1 and 0.0 EM.
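For reference, a simplified sketch of the span-level EM and token-level F1 computation is shown below; the official evaluation script also strips articles and punctuation during normalisation, which is omitted here.

```python
from collections import Counter

def exact_match(prediction, gold):
    return float(prediction.strip().lower() == gold.strip().lower())

def token_f1(prediction, gold):
    p, g = prediction.lower().split(), gold.lower().split()
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)

# a prediction covering only part of the gold span gets partial F1 credit but zero EM
print(exact_match("Robinson", "Marian Robinson"), round(token_f1("Robinson", "Marian Robinson"), 3))
```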
³A bidirectional GRU concatenates the hidden states of two GRU networks running in opposite directions. Each of these has hidden size ½D_1. | 1611.09830#25 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 26 | Figure 3: Training epoch vs. test accuracy for bAbI (Task 6) varying exploration ε and batch size. Random exploration is important for both reward-based (RBI) and forward prediction (FP). Performance is largely independent of batch size, and RBI performs similarly to REINFORCE. Note that supervised, rather than reinforcement learning, with gold standard labels achieves 100% accuracy on this task.
• REINFORCE obtains similar performance to RBI with optimal ε.
• FP with balancing or with exploration via ε both outperform FP alone.
• For both RBI and FP, performance is largely independent of online batch size.
Dataset Batch Size Experiments Given that larger online batch sizes appear to work well, and that this could be important in a real-world data collection setup where the same model is deployed to gather a large amount of feedback from humans, we conducted further experiments where the batch size is exactly equal to the dataset size and for each batch training is completed to convergence.
# Under review as a conference paper at ICLR 2017 | 1611.09823#26 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 26 | ³A bidirectional GRU concatenates the hidden states of two GRU networks running in opposite directions. Each of these has hidden size ½D_1.
⁴All experiments in this section use the subset of NewsQA dataset with answer agreements (92,549 samples for training, 5,166 for validation, and 5,126 for testing). We leave the challenge of identifying the unanswerable questions for future work.
⁵We use https://github.com/tylin/coco-caption to calculate these two scores.
Table 3: Model performance on SQuAD and NewsQA datasets. Random are taken from Rajpurkar et al. (2016), and mLSTM from Wang & Jiang (2016b).
SQuAD:   Model    EM Dev   EM Test   F1 Dev   F1 Test
         Random   0.11     0.13      0.41     0.43
         mLSTM    0.591    0.595     0.700    0.703
         BARB     0.591    -         0.709    -
NewsQA:  Model    EM Dev   EM Test   F1 Dev   F1 Test
         Random   0.00     0.00      0.30     0.30
         mLSTM    0.344    0.349     0.496    0.500
         BARB     0.361    0.341     0.496    0.482 | 1611.09830#26 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 27 | 7
[Figure: test accuracy vs. training epoch curves; plot content not recoverable as text. Recoverable panel titles: Random Exploration for RBI; Random Exploration for FP; RBI (eps=0.5) Varying Batch Size; Comparing RBI, FP and REINFORCE. Legends include batch sizes 32, 320, 3200, 32000, and full dataset.] | 1611.09823#27 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 27 | Table 4: Human performance on SQuAD and NewsQA datasets. The first row is taken from Rajpurkar et al. (2016), and the last two rows correspond to machine performance (BARB) on the human-evaluated subsets.
Dataset        Exact Match   F1      BLEU    CIDEr
SQuAD          0.803         0.905   -       -
SQuAD (ours)   0.650         0.807   0.625   3.998
NewsQA         0.465         0.694   0.560   3.596
SQuAD_BARB     0.553         0.685   0.366   2.845
NewsQA_BARB    0.340         0.501   0.081   2.431
This suggests that simpler automatic metrics are not equal to the task of complex MC evaluation, a problem that has been noted in other domains (Liu et al., 2016). Therefore we also measure according to BLEU and CIDEr: humans score 0.560 and 3.596 on these metrics, respectively. | 1611.09830#27 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 28 | Figure 4: WikiMovies: Training epoch vs. test accuracy on Task 6 varying (top left panel) exploration rate ε while setting batch size to 32 for RBI, (top right panel) for FP, (bottom left) batch size for RBI, and (bottom right) comparing RBI, REINFORCE and FP with ε = 0.5. The model is robust to the choice of batch size. RBI and REINFORCE perform comparably. Note that supervised, rather than reinforcement learning, with gold standard labels achieves 80% accuracy on this task.
After the model has been trained on the dataset, it is deployed to collect a new dataset of questions and answers, and the process is repeated. Table 1 reports test error at each iteration of training, using the bAbI Task 6 as the case study (see the appendix for results on other tasks). The following conclusions can be made for this setting:
⢠RBI improves in performance as we iterate. Unlike in the online case, RBI does not need random exploration. We believe this is because the ï¬rst batch, which is collected with a randomly initialized model, contains enough variety of examples with positive rewards that the model does not get stuck predicting a subset of labels. | 1611.09823#28 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 28 | The original SQuAD evaluation of human performance compares distinct answers given by crowdworkers according to EM and F1; for a closer comparison with NewsQA, we replicated our human test on the same number of validation data (1,000) with the same humans. We measured human answers against the second group of crowdsourced responses in SQuAD's development set, yielding 0.807 F1, 0.625 BLEU, and 3.998 CIDEr. Note that the F1 score is close to the top single-model performance of 0.778 achieved in Wang et al. (2016).
We finally compared human performance on the answers that had crowdworker agreement with and without validation, finding a difference of only 1.4 percentage points F1. This suggests our validation stage yields good-quality answers.
6.2 MODEL PERFORMANCE | 1611.09830#28 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 29 | ⢠FP is not stable in this setting. This is because once the model gets very good at making predictions (at the third iteration), it is not exposed to a sufï¬cient number of negative re- sponses anymore. From that point on, learning degenerates and performance drops as the model always predicts the same responses. At the next iteration, it will recover again since it has a more balanced training set, but then it will collapse again in an oscillating behavior.
• FP does work if extended with balancing or random exploration with sufficiently large ε (a minimal sketch of such exploration follows this list).
⢠RBI+FP also works well and helps with the instability of FP, alleviating the need for random exploration and data balancing.
Overall, our simulation results indicate that while a bot can be effectively trained fully online from bot-teacher interactions, collecting real dialogue data in batches (which is easier to collect and iterate experiments over) is also a viable approach. We hence pursue the latter approach in our next set of experiments.
# Under review as a conference paper at ICLR 2017 | 1611.09823#29 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 29 | 6.2 MODEL PERFORMANCE
Performance of the baseline models and humans is measured by EM and F1 with the official evaluation script from SQuAD and listed in Table 4. We supplement these with BLEU and CIDEr measures on the 1,000 human-annotated dev questions. Unless otherwise stated, hyperparameters are determined by hyperopt (Appendix A). The gap between human and machine performance on NewsQA is a striking 0.198 points F1, much larger than the gap on SQuAD (0.098) under the same human evaluation scheme. The gaps suggest a large margin for improvement with machine comprehension methods. | 1611.09830#29 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 30 | 8
Iteration | 1 | 2 | 3 | 4 | 5 | 6
Imitation Learning | 0.24 | 0.23 | 0.23 | 0.22 | 0.23 | 0.23
Reward Based Imitation (RBI) | 0.74 | 0.87 | 0.90 | 0.96 | 0.96 | 0.98
Forward Pred. (FP) | 0.99 | 0.96 | 1.00 | 0.30 | 1.00 | 0.29
RBI+FP | 0.99 | 0.96 | 0.97 | 0.95 | 0.94 | 0.97
FP (balanced) | 0.99 | 0.97 | 0.97 | 0.97 | 0.97 | 0.97
FP (rand. exploration ε = 0.25) | 0.96 | 0.88 | 0.94 | 0.26 | 0.64 | 0.99
FP (rand. exploration ε = 0.5) | 0.98 | 0.98 | 0.99 | 0.98 | 0.95 | 0.99
Table 1: Test accuracy of various models per iteration in the dataset batch size case (using batch size equal to the size of the full training set) for bAbI, Task 6. Results > 0.95 are in bold. | 1611.09823#30 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 30 | Figure 1 stratifies model (BARB) performance according to answer type (left) and reasoning type (right) as defined in Sections 4.1 and 4.2, respectively. The answer-type stratification suggests that the model is better at pointing to named entities compared to other types of answers. The reasoning-type stratification, on the other hand, shows that questions requiring inference and synthesis are, not surprisingly, more difficult for the model. Consistent with observations in Table 4, stratified performance on NewsQA is significantly lower than on SQuAD. The difference is smallest on word matching and largest on synthesis. We postulate that the longer stories in NewsQA make synthesizing information from separate sentences more difficult, since the relevant sentences may be farther apart. This requires the model to track longer-term dependencies. It is also interesting to observe that on SQuAD, BARB outperforms human annotators in answering ambiguous questions or those with incomplete information.
8 | 1611.09830#30 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 31 | Relation to experiments in Weston (2016) As described in detail in Section 2, the datasets we use in our experiments were introduced in (Weston et al., 2015). However, that work involved constructing pre-built fixed policies (and hence, datasets), rather than training the learner in a reinforcement/interactive learning setting using a simulator, as in our work. They achieved this by choosing an omniscient (but deliberately imperfect) labeler that gets πacc examples always correct (the paper looked at values 1%, 10% and 50%). In a realistic setting one does not have access to an omniscient labeler, one has to learn a policy completely from scratch, online, starting with a random policy, as we do here. Nevertheless, it is possible to compare our learnt policies to those results because we use the same train/valid/test splits. | 1611.09823#31 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 31 | 8
[Figure 1: two bar-chart panels. Left panel: answer types (Date/time, Numeric, Person, Adjective Phrase, Location, Prepositional Phrase, Common Noun Phrase, Other entity, Clause Phrase, Verb Phrase) with F1 and EM bars. Right panel: reasoning types (Word Matching, Paraphrasing, Inference, Synthesis, Ambiguous/Insufficient) with F1 bars for NewsQA and SQuAD.]
Figure 1: Left: BARB performance (F1 and EM) stratified by answer type on the full development set of NewsQA. Right: BARB performance (F1) stratified by reasoning type on the human-assessed subset on both NewsQA and SQuAD. Error bars indicate performance differences between BARB and human annotators.
# Table 5: Sentence-level accuracy on artificially-lengthened SQuAD documents.
Dataset | # documents | Avg # sentences | isf accuracy (%)
SQuAD | 1 | 4.9 | 79.6
SQuAD | 3 | 14.3 | 74.9
SQuAD | 5 | 23.2 | 73.0
SQuAD | 7 | 31.8 | 72.3
SQuAD | 9 | 40.3 | 71.0
NewsQA | 1 | 30.7 | 35.4
# 6.3 SENTENCE-LEVEL SCORING | 1611.09830#31 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 32 | The clearest comparison is via Table 1, where the policy is learnt using batch iterations of the dataset, updating the policy on each iteration. Weston et al. (2015) can be viewed as training only one iteration, with a pre-built policy, as explained above, where 59%, 81% and 99% accuracy was obtained for RBI for πacc with 1%, 10% and 50% respectively4. While πacc of 50% is good enough to solve the task, lower values are not. In this work a random policy begins with 74% accuracy on the first iteration, but importantly on each iteration the policy is updated and improves, with values of 87%, 90% on iterations 2 and 3 respectively, and 98% on iteration 6. This is a key differentiator to the work of (Weston et al., 2015) where such improvement was not shown. We show that such online learning works for both reward-based numerical feedback and for forward prediction methods using textual feedback (as long as balancing or random exploration is performed sufficiently). The final performance outperforms most values of πacc from Weston et al. (2015) unless πacc is so large that the task is already solved. This is a key contribution of our work. | 1611.09823#32 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 32 | # 6.3 SENTENCE-LEVEL SCORING
We propose a simple sentence-level subtask as an additional quantitative demonstration of the relative difficulty of NewsQA. Given a document and a question, the goal is to find the sentence containing the answer span. We hypothesize that simple techniques like word-matching are inadequate to this task owing to the more involved reasoning required by NewsQA.
We employ a technique that resembles inverse document frequency (idf), which we call inverse sentence frequency (isf). Given a sentence S_i from an article and its corresponding question Q, the isf score is given by the sum of the idf scores of the words common to S_i and Q (each sentence is treated as a document for the idf computation). The sentence with the highest isf is taken as the answer sentence S*, that is,
S* = arg max_i Σ_{w ∈ S_i ∩ Q} isf(w) | 1611.09830#32 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09830 | 33 | S* = arg max_i Σ_{w ∈ S_i ∩ Q} isf(w)
The isf method achieves an impressive 79.4% sentence-level accuracy on SQuAD's development set but only 35.4% accuracy on NewsQA's development set, highlighting the comparative difficulty of the latter. To eliminate the difference in article length as a possible cause of the performance gap, we also artificially increased the article lengths in SQuAD by concatenating adjacent SQuAD articles from the same Wikipedia article. Accuracy decreases as expected with the increased SQuAD article length, yet remains significantly higher than on NewsQA with comparable or even greater article length (see Table 5).
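A minimal sketch of the isf selection rule described above is given below; the particular idf variant (log of sentence count over document frequency) and the whitespace tokenization are illustrative assumptions, not the exact preprocessing used in the paper.

```python
import math

def isf_best_sentence(sentences, question):
    """Return the index of the sentence whose isf score against the question is highest."""
    tokenized = [set(s.lower().split()) for s in sentences]
    q_tokens = set(question.lower().split())
    n = len(tokenized)

    def idf(word):
        # each sentence is treated as a "document"; df >= 1 for any word we score
        df = sum(1 for s in tokenized if word in s)
        return math.log(n / df)

    def isf(s_tokens):
        # sum of idf scores of the words the sentence shares with the question
        return sum(idf(w) for w in s_tokens & q_tokens)

    return max(range(n), key=lambda i: isf(tokenized[i]))

article = ["Three women will jointly receive the 2011 Nobel Peace Prize.",
           "The winners are to be honored with a concert hosted by Helen Mirren."]
print(isf_best_sentence(article, "who hosts the concert for the prize winners"))  # -> 1
```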
# 7 CONCLUSION | 1611.09830#33 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 34 | 5.2 HUMAN FEEDBACK
We employed Turkers to both ask questions and then give textual feedback on the bot's answers, as described in Section 3.2. Our experimental protocol was as follows. We first trained a MemN2N using supervised (i.e., imitation) learning on a training set of 1000 questions produced by Turkers and using the known correct answers provided by the original dataset (and no textual feedback). Next, using the trained policy, we collected textual feedback for the responses of the bot for an additional 10,000 questions. Examples from the collected dataset are given in Figure 2. Given this dataset, we compare various models: RBI, FP and FP+RBI. As we know the correct answers to the additional questions, we can assign a positive reward to questions the bot got correct. We hence measure the impact of the sparseness of this reward signal, where a fraction r of additional examples have rewards. The models are tested on a test set of ~8,000 questions (produced by Turkers), and hyperparameters are tuned on a similarly sized validation set. Note this is a harder task than the WikiMovies task in the simulator due to the use of natural language from Turkers, hence lower test performance is expected. | 1611.09823#34 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 34 | # 7 CONCLUSION
We have introduced a challenging new comprehension dataset: NewsQA. We collected the 100,000+ examples of NewsQA using teams of crowdworkers, who variously read CNN articles or highlights, posed questions about them, and determined answers. Our methodology yields diverse answer types and a significant proportion of questions that require some reasoning ability to solve. This makes the corpus challenging, as confirmed by the large performance gap between humans and deep neural models (0.198 F1, 0.479 BLEU, 1.165 CIDEr). By its size and complexity, NewsQA makes a significant extension to the existing body of comprehension datasets. We hope that our corpus will spur further advances in machine comprehension and guide the development of literate artificial intelligence.
# ACKNOWLEDGMENTS
The authors would like to thank Çağlar Gülçehre, Sandeep Subramanian and Saizheng Zhang for helpful discussions.
# REFERENCES
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015. | 1611.09830#34 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 35 | 4Note, this is not the same as a randomly initialized neural network policy, because due to the synthetic construction with an omniscient labeler the labels will be balanced. In our work, we learn the policy from randomly initialized weights which are updated as we learn the policy.
Results are given in Table 2. They indicate that both RBI and FP are useful. When rewards are sparse, FP still works via the textual feedback while RBI can only use the initial 1000 examples when r = 0. As FP does not use numerical rewards at all, it is invariant to the parameter r. The combination of FP and RBI outperforms either alone.
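To make the sparse-reward setup concrete, the following is a minimal sketch of how the two training signals could be assembled from the logged Turker interactions; the dictionary keys and the sampling of the reward fraction r are illustrative assumptions, not the exact data format used in the experiments.

```python
import random

def build_training_sets(logged_examples, r):
    """Split logged interactions into RBI and FP training data.

    RBI imitates only the bot answers that received a positive numerical reward,
    so with a small fraction r it sees few extra examples; FP predicts the
    teacher's textual feedback and uses every logged interaction, independent of r.
    """
    rbi_set, fp_set = [], []
    for ex in logged_examples:
        has_reward = random.random() < r  # only a fraction r of examples carry rewards
        if has_reward and ex["correct"]:
            rbi_set.append((ex["context"], ex["bot_answer"]))
        fp_set.append((ex["context"], ex["bot_answer"], ex["teacher_feedback"]))
    return rbi_set, fp_set
```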
Model | r = 0 | r = 0.1 | r = 0.5 | r = 1
Reward Based Imitation (RBI) | 0.333 | 0.340 | 0.365 | 0.375
Forward Prediction (FP) | 0.358 | 0.358 | 0.358 | 0.358
RBI+FP | 0.431 | 0.438 | 0.443 | 0.441
Table 2: Incorporating Feedback From Humans via Mechanical Turk. Textual feedback is provided for 10,000 model predictions (from a model trained with 1k labeled training examples), and additional sparse binary rewards (fraction r of examples have rewards). Forward Prediction and Reward-based Imitation are both useful, with their combination performing best. | 1611.09823#35 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 35 | Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015.
Ondrej Bajgar, Rudolf Kadlec, and Jan Kleindienst. Embracing data abundance: Booktest dataset for reading comprehension. arXiv preprint arXiv:1610.00956, 2016.
J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio. Theano: a CPU and GPU math expression compiler. In Proc. of SciPy, 2010.
Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the cnn / daily mail reading comprehension task. In Association for Computational Linguistics (ACL), 2016.
# François Chollet. keras. https://github.com/fchollet/keras, 2015.
Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Aistats, volume 9, pp. 249–256, 2010. | 1611.09830#35 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 36 | We also conducted additional experiments comparing with (i) synthetic feedback and (ii) the fully supervised case which are given in Appendix C.1. They show that the results with human feedback are competitive with these approaches.
# 6 CONCLUSION
We studied dialogue learning of end-to-end models using textual feedback and numerical rewards. Both fully online and iterative batch settings are viable approaches to policy learning, as long as possible instabilities in the learning algorithms are taken into account. Secondly, we showed for the first time that the recently introduced FP method can work in both an online setting and on real human feedback. Overall, our results indicate that it is feasible to build a practical pipeline that starts with a model trained on an initial fixed dataset, which then learns from interactions with humans in a (semi-)online fashion to improve itself. Future research should work towards doing this in a never-ending learning setup.
# REFERENCES
Mohammad Amin Bassiri. Interactional feedback and the impact of attitude and motivation on noticing l2 form. English Language and Literature Studies, 1(2):61, 2011.
Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075, 2015. | 1611.09823#36 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 36 | Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pp. 1684–1692, 2015.
Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The goldilocks principle: Reading children's books with explicit memory representations. ICLR, 2016.
Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. Text understanding with the attention sum reader network. arXiv preprint arXiv:1603.01547, 2016.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2015.
Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv preprint arXiv:1603.08023, 2016. | 1611.09830#36 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 37 | Leon Bottou, Jonas Peters, Joaquin Quiñonero-Candela, Denis X. Charles, D. Max Chickering, Elon Portugaly, Dipankar Ray, Patrice Simard, and Ed Snelson. Counterfactual reasoning and learning systems: The example of computational advertising. Journal of Machine Learning Research, 14:3207–3260, 2013.
Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander Miller, Arthur Szlam, and Jason Weston. Evaluating prerequisite qualities for learning end-to-end dialog systems. arXiv preprint arXiv:1511.06931, 2015.
Milica Gašić, Catherine Breslin, Matthew Henderson, Dongho Kim, Martin Szummer, Blaise Thomson, Pirros Tsiakoulis, and Steve Young. Pomdp-based dialogue manager adaptation to extended domains. In Proceedings of SIGDIAL, 2013.
Milica Gašić, Dongho Kim, Pirros Tsiakoulis, Catherine Breslin, Matthew Henderson, Martin Szummer, Blaise Thomson, and Steve Young. Incremental on-line adaptation of pomdp-based dialogue managers to extended domains. In Proceedings on InterSpeech, 2014. | 1611.09823#37 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 37 | Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pp. 311–318. Association for Computational Linguistics, 2002.
Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. ICML (3), 28:1310–1318, 2013.
Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In EMNLP, volume 14, pp. 1532–43, 2014.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016.
Matthew Richardson, Christopher JC Burges, and Erin Renshaw. Mctest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP, volume 1, pp. 2, 2013.
Mrinmaya Sachan, Avinava Dubey, Eric P Xing, and Matthew Richardson. Learning answer-entailing structures for machine comprehension. In Proceedings of ACL, 2015. | 1611.09830#37 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 38 | Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pp. 1693–1701, 2015.
Esther Levin, Roberto Pieraccini, and Wieland Eckert. Learning dialogue strategies within the markov decision process framework. In Automatic Speech Recognition and Understanding, 1997. Proceedings., 1997 IEEE Workshop on, pp. 72–79. IEEE, 1997.
Esther Levin, Roberto Pieraccini, and Wieland Eckert. A stochastic model of human-machine interaction for learning dialog strategies. IEEE Transactions on speech and audio processing, 8(1):11–23, 2000.
Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. Key-value memory networks for directly reading documents. arXiv preprint arXiv:1606.03126, 2016. | 1611.09823#38 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 38 | Mrinmaya Sachan, Avinava Dubey, Eric P Xing, and Matthew Richardson. Learning answer-entailing structures for machine comprehension. In Proceedings of ACL, 2015.
Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.
Alessandro Sordoni, Philip Bachman, and Yoshua Bengio. Iterative alternating neural attention for machine reading. arXiv preprint arXiv:1606.02245, 2016.
Adam Trischler, Zheng Ye, Xingdi Yuan, Jing He, Philip Bachman, and Kaheer Suleman. A parallel-hierarchical model for machine comprehension on sparse data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 2016a.
Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehension with the epireader. In EMNLP, 2016b.
Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4566–4575, 2015. | 1611.09830#38 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 39 | Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
Roberto Pieraccini, David Suendermann, Krishna Dayanidhi, and Jackson Liscombe. Are we there yet? research in commercial spoken dialog systems. In International Conference on Text, Speech and Dialogue, pp. 3–13. Springer, 2009.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016.
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732, 2015.
Jost Schatzmann, Karl Weilhammer, Matt Stuttle, and Steve Young. A survey of statistical user simulation techniques for reinforcement-learning of dialogue management strategies. The knowledge engineering review, 21(02):97–126, 2006. | 1611.09823#39 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 39 | Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. Machine comprehension with syntax, frames, and semantics. In Proceedings of ACL, Volume 2: Short Papers, pp. 700, 2015.
Shuohang Wang and Jing Jiang. Learning natural language inference with lstm. NAACL, 2016a.
Shuohang Wang and Jing Jiang. Machine comprehension using match-lstm and answer pointer. arXiv preprint arXiv:1608.07905, 2016b.
Zhiguo Wang, Haitao Mi, Wael Hamza, and Radu Florian. Multi-perspective context matching for machine comprehension. arXiv preprint arXiv:1612.04211, 2016.
APPENDICES
# A IMPLEMENTATION DETAILS
Both mLSTM and BARB are implemented with the Keras framework (Chollet, 2015) using the Theano (Bergstra et al., 2010) backend. Word embeddings are initialized using GloVe vectors (Pennington et al., 2014) pre-trained on the 840-billion Common Crawl corpus. The word embeddings are not updated during training. Embeddings for out-of-vocabulary words are initialized with zero. | 1611.09830#39 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 40 | Satinder Singh, Michael Kearns, Diane J Litman, Marilyn A Walker, et al. Empirical evaluation of a reinforcement learning spoken dialogue system. In AAAI/IAAI, pp. 645–651, 2000.
Satinder Singh, Diane Litman, Michael Kearns, and Marilyn Walker. Optimizing dialogue management with reinforcement learning: Experiments with the njfun system. Journal of Artificial Intelligence Research, 16:105–133, 2002.
Pei-Hao Su, Milica Gasic, Nikola Mrksic, Lina Rojas-Barahona, Stefan Ultes, David Vandyke, Tsung-Hsien Wen, and Steve Young. Continuously learning neural dialogue management. arXiv preprint arXiv:1606.02689, 2016.
Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances in neural information processing systems, pp. 2440–2448, 2015.
Marilyn A. Walker. An application of reinforcement learning to dialogue strategy selection in a spoken dialogue system for email. Journal of Artificial Intelligence Research, 12:387–416, 2000. | 1611.09823#40 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 40 | For both models, the training objective is to maximize the log likelihood of the boundary pointers. Optimization is performed using stochastic gradient descent (with a batch-size of 32) with the ADAM optimizer (Kingma & Ba, 2015). The initial learning rate is 0.003 for mLSTM and 0.0005 for BARB. The learning rate is decayed by a factor of 0.7 if validation loss does not decrease at the end of each epoch. Gradient clipping (Pascanu et al., 2013) is applied with a threshold of 5.
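A minimal sketch of this optimization setup in Keras is shown below; the stand-in one-layer model and the dummy data are assumptions purely for illustration, with categorical cross-entropy standing in for the negative log likelihood of the pointer positions.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
from keras.callbacks import ReduceLROnPlateau

# stand-in model; the actual mLSTM/BARB architectures are described in the paper
model = Sequential([Dense(50, input_dim=300, activation="softmax")])
model.compile(optimizer=Adam(lr=0.0005, clipnorm=5.0),   # BARB learning rate, gradient clipping at 5
              loss="categorical_crossentropy")           # log likelihood of the pointer

# decay the learning rate by a factor of 0.7 when validation loss stops decreasing
lr_decay = ReduceLROnPlateau(monitor="val_loss", factor=0.7, patience=1)

x = np.random.rand(320, 300)
y = np.eye(50)[np.random.randint(0, 50, size=320)]
model.fit(x, y, batch_size=32, epochs=3, validation_split=0.1, callbacks=[lr_decay])
```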
Parameter tuning is performed on both models using hyperopt6. For each model, configurations for the best observed performance are as follows:
# mLSTM
Both the pre-processing layer and the answer-pointing layer use bi-directional RNN with a hidden size of 192. These settings are consistent with those used by Wang & Jiang (2016b). | 1611.09830#40 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 41 | Marilyn A Walker, Rashmi Prasad, and Amanda Stent. A trainable generator for recommendations in multimodal dialog. In INTERSPEECH, 2003.
Margaret G Werts, Mark Wolery, Ariane Holcombe, and David L Gast. Instructive feedback: Review of parameters and effects. Journal of Behavioral Education, 5(1):55–75, 1995.
Jason Weston. Dialog-based language learning. arXiv preprint arXiv:1604.06045, 2016.
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. Towards ai-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015.
Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256, 1992.
Steve Young, Milica Gašić, Simon Keizer, François Mairesse, Jost Schatzmann, Blaise Thomson, and Kai Yu. The hidden information state model: A practical framework for pomdp-based spoken dialogue management. Computer Speech & Language, 24(2):150–174, 2010.
# Under review as a conference paper at ICLR 2017 | 1611.09823#41 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 41 | Model parameters are initialized with either the normal distribution (N(0, 0.05)) or the orthogonal initialization (O, Saxe et al. 2013) in Keras. All weight matrices in the LSTMs are initialized with O. In the Match-LSTM layer, W^q, W^p, and W^r are initialized with O, b^p and w are initialized with N, and b is initialized as 1. In the answer-pointing layer, V and W^a are initialized with O, b^a and v are initialized with N, and c is initialized as 1.
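A minimal sketch of these initializers using Keras 2 names is given below; the layer shape is an illustrative assumption, and only the mapping from the paper's N/O symbols to concrete initializer objects is intended.

```python
from keras.layers import Dense
from keras.initializers import Orthogonal, RandomNormal, Constant

O = Orthogonal()                          # orthogonal init for weight matrices
N = RandomNormal(mean=0.0, stddev=0.05)   # N(0, 0.05) for bias-like parameters

# example layer: kernel initialized with O, bias started at 1 (as for b and c above)
layer = Dense(128, kernel_initializer=O, bias_initializer=Constant(1.0))
```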
# BARB
For BARB, the following hyperparameters are used on both SQuAD and NewsQA: d = 300, D1 = 128, C = 64, D2 = 256, w = 3, and n_f = 128. Weight matrices in the GRU, the bilinear models, as well as the boundary decoder (v^s and v^e) are initialized with O. The filter weights in the boundary decoder are initialized with glorot_uniform (Glorot & Bengio 2010, default in Keras). The bilinear biases are initialized with N, and the boundary decoder biases are initialized with 0.
# B DATA COLLECTION USER INTERFACE
Here we present the user interfaces used in question sourcing, answer sourcing, and question/answer validation. | 1611.09830#41 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 42 | 11
Steve Young, Milica Gašić, Blaise Thomson, and Jason D Williams. Pomdp-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160–1179, 2013.
Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. arXiv preprint arXiv:1505.00521, 362, 2015.
# A FURTHER SIMULATOR TASK DETAILS | 1611.09823#42 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
1611.09830 | 42 | # B DATA COLLECTION USER INTERFACE
Here we present the user interfaces used in question sourcing, answer sourcing, and question/answer validation.
6 https://github.com/hyperopt/hyperopt
[Question sourcing interface] Highlights: • Three women to jointly receive the 2011 Nobel Peace Prize • Prize recognizes non-violent struggle for safety of women and women's rights • Prize winners to be honored with a concert on Sunday hosted by Helen Mirren. Q1: Who were the prize winners? Q2: What country were the prize winners from? Q3: [Write a question that relates to a highlight.] | 1611.09830#42 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
1611.09823 | 43 | # A FURTHER SIMULATOR TASK DETAILS
The tasks in Weston (2016) were specifically:
- Task 1: The teacher tells the student exactly what they should have said (supervised baseline).
- Task 2: The teacher replies with positive textual feedback and reward, or negative textual feedback.
- Task 3: The teacher gives textual feedback containing the answer when the bot is wrong.
- Task 4: The teacher provides a hint by providing the class of the correct answer, e.g., "No it's a movie" for the question "which movie did Forest Gump star in?".
- Task 5: The teacher provides a reason why the student's answer is wrong by pointing out the relevant supporting fact from the knowledge base.
- Task 6: The teacher gives positive reward only 50% of the time.
- Task 7: Rewards are missing and the teacher only gives natural language feedback.
- Task 8: Combines Tasks 1 and 2 to see whether a learner can learn successfully from both forms of supervision at once.
- Task 9: The bot asks questions of the teacher about what it has done wrong.
- Task 10: The bot will receive a hint rather than the correct answer after asking for help. | 1611.09823#43 | Dialogue Learning With Human-In-The-Loop | An important aspect of developing conversational agents is to give a bot the
ability to improve through communicating with humans and to learn from the
mistakes that it makes. Most research has focused on learning from fixed
training sets of labeled data rather than interacting with a dialogue partner
in an online fashion. In this paper we explore this direction in a
reinforcement learning setting where the bot improves its question-answering
ability from feedback a teacher gives following its generated responses. We
build a simulator that tests various aspects of such learning in a synthetic
environment, and introduce models that work in this regime. Finally, real
experiments with Mechanical Turk validate the approach. | http://arxiv.org/pdf/1611.09823 | Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston | cs.AI, cs.CL | null | null | cs.AI | 20161129 | 20170113 | [
{
"id": "1511.06931"
},
{
"id": "1505.00521"
},
{
"id": "1606.05250"
},
{
"id": "1511.06732"
},
{
"id": "1502.05698"
},
{
"id": "1604.06045"
},
{
"id": "1606.02689"
},
{
"id": "1506.02075"
},
{
"id": "1606.03126"
}
] |
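For reference, here is a rough sketch (my own framing, not code from either paper) of how the ten teacher-feedback settings listed in the row above could be encoded when configuring a dialogue-learning simulator; the enum and its member names are hypothetical.

```python
# Hypothetical encoding of the ten feedback settings quoted in the row above.
from enum import Enum

class TeacherFeedback(Enum):
    EXACT_ANSWER = 1           # Task 1: teacher states exactly what should have been said
    POS_NEG_TEXT_AND_REWARD = 2  # Task 2: positive/negative textual feedback plus reward
    ANSWER_IN_FEEDBACK = 3     # Task 3: feedback contains the answer when the bot is wrong
    HINT_ANSWER_CLASS = 4      # Task 4: hint giving the class of the correct answer
    SUPPORTING_FACT = 5        # Task 5: relevant supporting fact given as the reason
    UNRELIABLE_REWARD = 6      # Task 6: positive reward only 50% of the time
    TEXT_ONLY = 7              # Task 7: no reward, natural-language feedback only
    MIXED_SUPERVISION = 8      # Task 8: combines Tasks 1 and 2
    BOT_ASKS_QUESTIONS = 9     # Task 9: bot asks the teacher what it did wrong
    HINT_AFTER_HELP = 10       # Task 10: bot receives a hint after asking for help
```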
1611.09830 | 43 | Qi: Who were the prize winners Q2: { What country were the prize winners from4 ] Q3: [ Write a question that relates to a highlight. } Question What is the age of Patrick McGoohan? © Click here if the question does not make sense or is not a question. (CNN) -- Emmy-winning Patrick McGoohan, the actor who created one of British television's most surreal thrillers, has died aged 8OJaccording to British media reports. Fans holding placards of Patrick McGoohan recreate a scene from âThe Prisonerâ to celebrate the 40th anniversary of the show in 2007. The Press Association, quoting his son-in-law Cleve Landsberg, reported he died in Los Angeles after a short illness. McGoohan, star of the 1960s show âThe Danger Man, is best remembered for writing and starring in 'The Prisonerâ about a former spy locked away in an isolated village who tries to escape each episode. Question When was the lockdown initiated? Select the best answer: Tucson, Arizona, © 10:30am. -- liam, * Allanswers are very bad. * The question doesn't make sense. Story (for your convenience) (CNN) -- | 1611.09830#43 | NewsQA: A Machine Comprehension Dataset | We present NewsQA, a challenging machine comprehension dataset of over
100,000 human-generated question-answer pairs. Crowdworkers supply questions
and answers based on a set of over 10,000 news articles from CNN, with answers
consisting of spans of text from the corresponding articles. We collect this
dataset through a four-stage process designed to solicit exploratory questions
that require reasoning. A thorough analysis confirms that NewsQA demands
abilities beyond simple word matching and recognizing textual entailment. We
measure human performance on the dataset and compare it to several strong
neural models. The performance gap between humans and machines (0.198 in F1)
indicates that significant progress can be made on NewsQA through future
research. The dataset is freely available at
https://datasets.maluuba.com/NewsQA. | http://arxiv.org/pdf/1611.09830 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman | cs.CL, cs.AI | null | null | cs.CL | 20161129 | 20170207 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1606.05250"
},
{
"id": "1610.00956"
},
{
"id": "1612.04211"
},
{
"id": "1603.01547"
},
{
"id": "1603.08023"
}
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.