doi (stringlengths 10-10) | chunk-id (int64 0-936) | chunk (stringlengths 401-2.02k) | id (stringlengths 12-14) | title (stringlengths 8-162) | summary (stringlengths 228-1.92k) | source (stringlengths 31-31) | authors (stringlengths 7-6.97k) | categories (stringlengths 5-107) | comment (stringlengths 4-398, nullable) | journal_ref (stringlengths 8-194, nullable) | primary_category (stringlengths 5-17) | published (stringlengths 8-8) | updated (stringlengths 8-8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1704.05119 | 29 | Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. CoRR, abs/1504.04788, 2015b. URL http://arxiv.org/abs/1504.04788.
Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
Misha Denil, Babak Shakibi, Laurent Dinh, Marc'Aurelio Ranzato, and Nando de Freitas. Predicting parameters in deep learning. CoRR, abs/1306.0543, 2013. URL http://arxiv.org/abs/1306.0543.
Emily Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. CoRR, abs/1404.0736, 2014. URL http://arxiv.org/abs/1404.0736. | 1704.05119#29 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 30 | Figure 6. Example object detection results using MobileNet SSD.
and evaluated on minival. For both frameworks, MobileNet achieves comparable results to other networks with only a fraction of computational complexity and model size.
# 4.7. Face Embeddings
The FaceNet model is a state-of-the-art face recognition model [25]. It builds face embeddings based on the triplet loss. To build a mobile FaceNet model we use distillation to train by minimizing the squared differences of the output of FaceNet and MobileNet on the training data. Results for very small MobileNet models can be found in Table 14.

Table 14. MobileNet Distilled from FaceNet

Model | 1e-4 Accuracy | Million Mult-Adds | Million Parameters
---|---|---|---
FaceNet [25] | 83% | 1600 | 7.5
1.0 MobileNet-160 | 79.4% | 286 | 4.9
1.0 MobileNet-128 | 78.3% | 185 | 5.5
0.75 MobileNet-128 | 75.2% | 166 | 3.4
0.75 MobileNet-128 | 72.5% | 108 | 3.8
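For illustration only, a minimal sketch of the squared-difference distillation objective described above; it is not the authors' training code, and the embedding arrays, batch size, and dimension (128) are stand-in assumptions:

```python
# Illustrative sketch only; teacher/student embeddings are random stand-ins.
import numpy as np

def distillation_loss(teacher_emb: np.ndarray, student_emb: np.ndarray) -> float:
    """Mean squared difference between teacher (FaceNet) and student (MobileNet) outputs."""
    return float(np.mean((teacher_emb - student_emb) ** 2))

rng = np.random.default_rng(0)
teacher = rng.normal(size=(32, 128))   # a batch of teacher face embeddings
student = rng.normal(size=(32, 128))   # the student's embeddings for the same images
print(distillation_loss(teacher, student))
```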
# 5. Conclusion | 1704.04861#30 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 30 | Greg Diamos, Shubho Sengupta, Bryan Catanzaro, Mike Chrzanowski, Adam Coates, Erich Elsen, Jesse Engel, Awni Hannun, and Sanjeev Satheesh. Persistent RNNs: Stashing recurrent weights on-chip. In Proceedings of The 33rd International Conference on Machine Learning, pp. 2024-2033, 2016.
Yunchao Gong, Liu Liu, Ming Yang, and Lubomir D. Bourdev. Compressing deep convolutional networks using vector quantization. CoRR, abs/1412.6115, 2014. URL http://arxiv.org/abs/1412.6115.
Alex Graves and Navdeep Jaitly. Towards end-to-end speech recognition with recurrent neural networks. In ICML, volume 14, pp. 1764-1772, 2014.
Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and Huffman coding. CoRR, abs/1510.00149, 2, 2015. | 1704.05119#30 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 31 | of FaceNet and MobileNet on the training data. Results for very small MobileNet models can be found in Table 14.
# 5. Conclusion
We proposed a new model architecture called MobileNets based on depthwise separable convolutions. We investigated some of the important design decisions leading to an efficient model. We then demonstrated how to build smaller and faster MobileNets using width multiplier and resolution multiplier by trading off a reasonable amount of accuracy to reduce size and latency. We then compared different MobileNets to popular models demonstrating superior size, speed and accuracy characteristics. We concluded by demonstrating MobileNet's effectiveness when applied to a wide variety of tasks. As a next step to help adoption and exploration of MobileNets, we plan on releasing models in TensorFlow.
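For illustration only: a minimal sketch of how the two global hyper-parameters mentioned above, the width multiplier (alpha) and the resolution multiplier (rho), scale the multiply-add count of a single depthwise separable layer. It assumes the standard depthwise-plus-pointwise cost model; the function and parameter names are illustrative, not taken from any released code:

```python
# Illustrative cost model only; assumes a depthwise D_K x D_K conv followed by a 1x1 pointwise conv.
def separable_conv_mult_adds(d_k: int, m: int, n: int, d_f: int,
                             alpha: float = 1.0, rho: float = 1.0) -> float:
    """Approximate multiply-adds of one depthwise separable layer under width (alpha)
    and resolution (rho) multipliers."""
    m_a, n_a, d_fa = alpha * m, alpha * n, rho * d_f
    depthwise = d_k * d_k * m_a * d_fa * d_fa   # per-channel spatial filtering
    pointwise = m_a * n_a * d_fa * d_fa         # 1x1 convolution combining channels
    return depthwise + pointwise

# Example: a 3x3 separable layer with 512 input/output channels on a 14x14 feature map,
# at full size versus alpha = 0.75 and a 128/224 input resolution.
print(separable_conv_mult_adds(3, 512, 512, 14))
print(separable_conv_mult_adds(3, 512, 512, 14, alpha=0.75, rho=128 / 224))
```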
# References
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org, 1, 2015. 4 | 1704.04861#31 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 31 | Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, et al. Deep speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567, 2014.
Stephen José Hanson and Lorien Pratt. Advances in neural information processing systems 1. Chapter: Comparing Biases for Minimal Network Construction with Back-propagation, pp. 177-185. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1989. ISBN 1-558-60015-9. URL http://dl.acm.org/citation.cfm?id=89851.89872.
Babak Hassibi, David G Stork, and Gregory J Wolff. Optimal brain surgeon and general network pruning. In Neural Networks, 1993., IEEE International Conference on, pp. 293-299. IEEE, 1993.
Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. CoRR, abs/1405.3866, 2014. URL http://arxiv.org/abs/1405.3866. | 1704.05119#31 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 32 | [2] W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, and Y. Chen. Compressing neural networks with the hashing trick. CoRR, abs/1504.04788, 2015. 2
[3] F. Chollet. Xception: Deep learning with depthwise separable convolutions. arXiv preprint arXiv:1610.02357v2, 2016. 1
[4] M. Courbariaux, J.-P. David, and Y. Bengio. Training deep neural networks with low precision multiplications. arXiv preprint arXiv:1412.7024, 2014. 2
[5] S. Han, H. Mao, and W. J. Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and Huffman coding. CoRR, abs/1510.00149, 2, 2015. 2
[6] J. Hays and A. Efros. IM2GPS: estimating geographic information from a single image. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, 2008. 7 | 1704.04861#32 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 32 | Rafal Józefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. CoRR, abs/1602.02410, 2016. URL http://arxiv.org/abs/1602.02410.
Yann LeCun, John S Denker, Sara A Solla, Richard E Howard, and Lawrence D Jackel. Optimal brain damage. In NIPS, volume 2, pp. 598-605, 1989.
Xing Liu, Mikhail Smelyanskiy, Edmond Chow, and Pradeep Dubey. Efficient sparse matrix-vector multiplication on x86-based many-core processors. In Proceedings of the 27th International ACM Conference on International Conference on Supercomputing, ICS '13, pp. 273-282, New York, NY, USA, 2013. ACM. ISBN 978-1-4503-2130-3. doi: 10.1145/2464996.2465013. URL http://doi.acm.org/10.1145/2464996.2465013. | 1704.05119#32 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 33 | [7] J. Hays and A. Efros. Large-Scale Image Geolocalization. In J. Choi and G. Friedland, editors, Multimodal Location Estimation of Videos and Images. Springer, 2014. 6, 7
[8] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015. 1
[9] G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. 2, 7 | 1704.04861#33 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.05119 | 33 | Zhiyun Lu, Vikas Sindhwani, and Tara N. Sainath. Learning compact recurrent neural networks. CoRR, abs/1604.02594, 2016. URL http://arxiv.org/abs/1604.02594.
Vincent Vanhoucke, Andrew Senior, and Mark Z. Mao. Improving the speed of neural networks on cpus. In Deep Learning and Unsupervised Feature Learning Workshop, NIPS 2011, 2011.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144, 2016. URL http://arxiv.org/abs/1609.08144. | 1704.05119#33 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 | [
{
"id": "1512.02595"
}
] |
1704.04861 | 34 | [10] J. Huang, V. Rathod, C. Sun, M. Zhu, A. Korattikara, A. Fathi, I. Fischer, Z. Wojna, Y. Song, S. Guadarrama, et al. Speed/accuracy trade-offs for modern convolutional object detectors. arXiv preprint arXiv:1611.10012, 2016. 7 [11] I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio. Quantized neural networks: Training neural networks with low precision weights and activations. arXiv preprint arXiv:1609.07061, 2016. 2
[12] F. N. Iandola, M. W. Moskewicz, K. Ashraf, S. Han, W. J. Dally, and K. Keutzer. Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <1MB model size. arXiv preprint arXiv:1602.07360, 2016. 1, 6
[13] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. 1, 3, 7 | 1704.04861#34 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.04861 | 35 | [14] M. Jaderberg, A. Vedaldi, and A. Zisserman. Speeding up convolutional neural networks with low rank expansions. arXiv preprint arXiv:1405.3866, 2014. 2
[15] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014. 4
[16] J. Jin, A. Dundar, and E. Culurciello. Flattened convolutional neural networks for feedforward acceleration. arXiv preprint arXiv:1412.5474, 2014. 1, 3
[17] A. Khosla, N. Jayadevaprakash, B. Yao, and L. Fei-Fei. Novel dataset for fine-grained image categorization. In First Workshop on Fine-Grained Visual Categorization, IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, June 2011. 6 | 1704.04861#35 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.04861 | 36 | [18] J. Krause, B. Sapp, A. Howard, H. Zhou, A. Toshev, T. Duerig, J. Philbin, and L. Fei-Fei. The unreasonable effectiveness of noisy data for fine-grained recognition. arXiv preprint arXiv:1511.06789, 2015. 6
[19] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097-1105, 2012. 1, 6
[20] V. Lebedev, Y. Ganin, M. Rakhuba, I. Oseledets, and V. Lempitsky. Speeding-up convolutional neural networks using fine-tuned cp-decomposition. arXiv preprint arXiv:1412.6553, 2014. 2
[21] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, and S. Reed. Ssd: Single shot multibox detector. arXiv preprint arXiv:1512.02325, 2015. 7 | 1704.04861#36 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
[22] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. arXiv preprint arXiv:1603.05279, 2016. 1, 2
[23] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91-99, 2015. 7
[24] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211-252, 2015. 1
[25] F. Schroff, D. Kalenichenko, and J. Philbin. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 815-823, 2015. 7, 8 | 1704.04861#37 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
[26] L. Sifre. Rigid-motion scattering for image classification. PhD thesis, 2014. 1, 3
[27] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 1, 6
[28] V. Sindhwani, T. Sainath, and S. Kumar. Structured transforms for small-footprint deep learning. In Advances in Neural Information Processing Systems, pages 3088-3096, 2015. 1
Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261, 2016. 1
[30] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1â9, 2015. 6 | 1704.04861#38 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.04861 | 39 | [31] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015. 1, 3, 4, 7
[32] B. Thomee, D. A. Shamma, G. Friedland, B. Elizalde, K. Ni, D. Poland, D. Borth, and L.-J. Li. Yfcc100m: The new data in multimedia research. Communications of the ACM, 59(2):64-73, 2016. 7
[33] T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2), 2012. 4
[34] M. Wang, B. Liu, and H. Foroosh. Factorized convolutional neural networks. arXiv preprint arXiv:1608.04337, 2016. 1 [35] T. Weyand, I. Kostrikov, and J. Philbin. PlaNet - Photo Geolocation with Convolutional Neural Networks. In European Conference on Computer Vision (ECCV), 2016. 6 | 1704.04861#39 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1511.06789"
},
{
"id": "1503.02531"
},
{
"id": "1602.07261"
},
{
"id": "1609.07061"
},
{
"id": "1512.02325"
},
{
"id": "1512.00567"
},
{
"id": "1608.04337"
},
{
"id": "1611.10012"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
},
{
"id": "1610.02357"
},
{
"id": "1512.06473"
}
] |
1704.04651 | 0 | arXiv:1704.04651v2 [cs.AI] 19 Jun 2018
Published as a conference paper at ICLR 2018
# THE REACTOR: A FAST AND SAMPLE-EFFICIENT ACTOR-CRITIC AGENT FOR REINFORCEMENT LEARNING
Audrūnas Gruslys, DeepMind [email protected]
Will Dabney, DeepMind [email protected]
Mohammad Gheshlaghi Azar, DeepMind [email protected]
# Bilal Piot, DeepMind [email protected]
Marc G. Bellemare, Google Brain [email protected]
Rémi Munos, DeepMind [email protected]
# ABSTRACT | 1704.04651#0 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
1704.04651 | 1 | # ABSTRACT
In this work, we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al., 2017) and Categorical DQN (Bellemare et al., 2017), while giving better run-time performance than A3C (Mnih et al., 2016). Our first contribution is a new policy evaluation algorithm called Distributional Retrace, which brings multi-step off-policy updates to the distributional reinforcement learning setting. The same approach can be used to convert several classes of multi-step policy evaluation algorithms, designed for expected value evaluation, into distributional algorithms. Next, we introduce the β-leave-one-out policy gradient algorithm, which improves the trade-off between variance and bias by using action values as a baseline. Our final algorithmic contribution is a new prioritized replay algorithm for sequences, which exploits the temporal locality of neighboring observations for more efficient replay prioritization. Using the Atari 2600 benchmarks, we show that each of these innovations contribute to both sample efficiency and final agent performance. Finally, we demonstrate that Reactor reaches state-of-the-art performance after 200 million frames and less than a day of training. | 1704.04651#1 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
1704.04683 | 1 | # Abstract
We present RACE, a new dataset for benchmark evaluation of methods in the reading comprehension task. Collected from the English exams for middle and high school Chinese students in the age range between 12 to 18, RACE consists of near 28,000 passages and near 100,000 questions generated by human experts (English instructors), and covers a variety of topics which are carefully designed for evaluating the students' ability in understanding and reasoning. In particular, the proportion of questions that requires reasoning is much larger in RACE than that in other benchmark datasets for reading comprehension, and there is a significant gap between the performance of the state-of-the-art models (43%) and the ceiling human performance (95%). We hope this new dataset can serve as a valuable resource for research and evaluation in machine comprehension. The dataset is freely available at http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at https://github.com/qizhex/RACE_AR_baselines
# Introduction | 1704.04683#1 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04683 | 2 | # Introduction
Constructing an intelligent agent capable of understanding text as people do is a major challenge of NLP research. With recent advances in deep learning techniques, it seems possible to achieve human-level performance in certain language understanding tasks, and a surge of effort has been devoted to the machine comprehension task where people aim to construct a system with the ability to
answer questions related to a document that it has to comprehend (Chen et al., 2016; Kadlec et al., 2016; Dhingra et al., 2016; Yang et al., 2017). | 1704.04683#2 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 3 | Much of the recent work can be divided into two categories. First, there are those which, often building on the DQN framework, act ε-greedily according to an action-value function and train using mini-batches of transitions sampled from an experience replay buffer (He et al., 2017; Anschel et al., 2017). These value-function agents benefit from improved sample complexity, but tend to suffer from long runtimes (e.g. DQN requires approximately a week to train on Atari). The second category comprises the actor-critic agents, which include the asynchronous advantage actor-critic (A3C) algorithm, introduced by Mnih et al. (2016). These agents train on transitions collected by multiple actors running, and often training, in parallel (Schulman et al., 2017). The deep actor-critic agents train on each trajectory only once, and thus tend to have worse sample complexity. However, their distributed nature allows significantly faster training in terms of wall-clock time. Still, not all existing algorithms can be put in the above two categories and various hybrid approaches do exist (Zhao et al., 2016; O'Donoghue et al., 2017; Gu et al., 2017).
Published as a conference paper at ICLR 2018 | 1704.04651#3 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
1704.04683 | 3 | Towards this goal, several large-scale datasets (Rajpurkar et al., 2016; Onishi et al., 2016; Hill et al., 2015; Trischler et al., 2016; Hermann et al., 2015) have been proposed, which allow researchers to train deep learning systems and obtain results comparable to human performance. While having a suitable dataset is crucial for evaluating a system's true ability in reading comprehension, the existing datasets suffer from several critical limitations. Firstly, in all datasets, the candidate options are directly extracted from the context (as a single entity or a text span), which leads to the fact that lots of questions can be solved trivially via word-based search and context-matching without deeper reasoning; this constrains the types of questions as well. Secondly, answers and questions of most datasets are either crowd-sourced or automatically generated, bringing a significant amount of noise into the datasets and limiting the ceiling performance by domain experts, such as 82% for the Children's Book Test and 84% for Who-did-What. Yet another issue in existing datasets is that the topic coverages are | 1704.04683#3 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 4 |
Data-efficiency and off-policy learning are essential for many real-world domains where interactions with the environment are expensive. Similarly, wall-clock time (time-efficiency) directly impacts an algorithm's applicability through resource costs. The focus of this work is to produce an agent that is sample- and time-efficient. To this end, we introduce a new reinforcement learning agent, called Reactor (Retrace-Actor), which takes a principled approach to combining the sample-efficiency of off-policy experience replay with the time-efficiency of asynchronous algorithms. We combine recent advances in both categories of agents with novel contributions to produce an agent that inherits the benefits of both and reaches state-of-the-art performance over 57 Atari 2600 games.
Our primary contributions are (1) a novel policy gradient algorithm, β-LOO, which makes better use of action-value estimates to improve the policy gradient; (2) the first multi-step off-policy distributional reinforcement learning algorithm, distributional Retrace(λ); (3) a novel prioritized replay for off-policy sequences of transitions; and (4) an optimized network and parallel training architecture. | 1704.04651#4 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
1704.04683 | 4 | performance by domain experts, such as 82% for the Children's Book Test and 84% for Who-did-What. Yet another issue in existing datasets is that the topic coverages are often biased due to the specific ways that the data were initially collected, making it hard to evaluate the ability of systems in text comprehension over a broader range of topics. To address the aforementioned limitations, we constructed a new dataset by collecting a large set of questions, answers and associated passages in the English exams for middle-school and high-school Chinese students within the 12-18 age range. Those exams were designed by domain experts (instructors) for evaluating the reading comprehension ability of students, with ensured quality and broad topic coverage. Furthermore, the answers by machines or by humans can be objectively graded for evaluation | 1704.04683#4 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 5 | We begin by reviewing background material, including relevant improvements to both value-function agents and actor-critic agents. In Section 3 we introduce each of our primary contributions and present the Reactor agent. Finally, in Section 4, we present experimental results on the 57 Atari 2600 games from the Arcade Learning Environment (ALE) (Bellemare et al., 2013), as well as a series of ablation studies for the various components of Reactor.
# 2 BACKGROUND
We consider a Markov decision process (MDP) with state space X and finite action space A. A (stochastic) policy π(·|x) is a mapping from states x ∈ X to a probability distribution over actions. We consider a γ-discounted infinite-horizon criterion, with γ ∈ [0, 1) the discount factor, and define for policy π the action-value of a state-action pair (x, a) as
Q^{\pi}(x, a) \overset{\mathrm{def}}{=} \mathbb{E}\Big[\sum_{t \ge 0} \gamma^t r_t \mid x_0 = x, a_0 = a, \pi\Big], | 1704.04651#5 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
1704.04683 | 5 | * indicates equal contribution
and comparison using the same evaluation metrics. Although efforts have been made with a similar motivation, including the MCTest dataset created by (Richardson et al., 2013) (containing 500 passages and 2000 questions) and several others (Peñas et al., 2014; Rodrigo et al., 2015; Khashabi et al., 2016; Shibuki et al., 2014), the usefulness of those datasets is significantly restricted due to their small sizes, especially not suitable for training powerful deep neural networks whose success relies on the availability of relatively large training sets.
Our new dataset, namely RACE, consists of 27,933 passages and 97,687 questions. After reading each passage, each student is asked to answer several questions where each question is provided with four candidate answers – only one of them is correct. Unlike existing datasets, both the questions and candidate answers in RACE are not restricted to be the text spans in the original passage; instead, they can be described in any words. A sample from our dataset is presented in Table 1. | 1704.04683#5 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 6 | Q^{\pi}(x, a) \overset{\mathrm{def}}{=} \mathbb{E}\Big[\sum_{t \ge 0} \gamma^t r_t \mid x_0 = x, a_0 = a, \pi\Big],
where ({x_t}_{t≥0}) is a trajectory generated by choosing a in x and following π thereafter, i.e., a_t ∼ π(·|x_t) (for t ≥ 1), and r_t is the reward signal. The objective in reinforcement learning is to find an optimal policy π*, which maximises Q^π(x, a). The optimal action-values are given by Q*(x, a) = max_π Q^π(x, a).
2.1 VALUE-BASED ALGORITHMS | 1704.04651#6 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
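Not part of either source paper: a minimal NumPy illustration of the action-value definition in the 1704.04651 chunks above, where Q^π(x, a) is the expected discounted sum of rewards. The rollout function and toy environment below are hypothetical stand-ins.

```python
import numpy as np

def discounted_return(rewards, gamma):
    """Sum_t gamma^t * r_t for one sampled trajectory."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

def monte_carlo_q(sample_rollout, x, a, gamma=0.99, n_rollouts=1000):
    """Estimate Q^pi(x, a) by averaging discounted returns over rollouts that
    start in state x, take action a, and then follow the policy pi."""
    returns = [discounted_return(sample_rollout(x, a), gamma)
               for _ in range(n_rollouts)]
    return float(np.mean(returns))

# Toy usage: a fake environment/policy pair returning a short reward sequence.
rng = np.random.default_rng(0)
toy_rollout = lambda x, a: rng.normal(loc=1.0, scale=0.1, size=10)
print(monte_carlo_q(toy_rollout, x=0, a=1))
```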
1704.04683 | 6 | Our latter analysis shows that correctly answering a large portion of questions in RACE requires the ability of reasoning, the most important feature as a machine comprehension dataset (Chen et al., 2016). RACE also offers two important subdivisions of the reasoning types in its questions, namely passage summarization and attitude analysis, which have not been introduced by any of the existing large-scale datasets to our knowledge. In addition, compared to other existing datasets where passages are either domain-specific or of a single fixed style (namely news stories for CNN/Daily Mail, NEWSQA and Who-did-What, fiction stories for Children's Book Test and Book Test, and Wikipedia articles for SQUAD), passages in RACE almost cover all types of human articles, such as news, stories, ads, biography, philosophy, etc., in a variety of styles. This comprehensiveness of topic/style coverage makes RACE a desirable resource for evaluating the reading comprehension ability of machine learning systems in general.
The advantages of our proposed dataset over existing large datasets in machine reading comprehension can be summarized as follows: | 1704.04683#6 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 7 | 2.1 VALUE-BASED ALGORITHMS
The Deep Q-Network (DQN) framework, introduced by Mnih et al. (2015), popularised the current line of research into deep reinforcement learning by reaching human-level, and beyond, performance across 57 Atari 2600 games in the ALE. While DQN includes many specific components, the essence of the framework, much of which is shared by Neural Fitted Q-Learning (Riedmiller, 2005), is the use of a deep convolutional neural network to approximate an action-value function, training this approximate action-value function using the Q-Learning algorithm (Watkins & Dayan, 1992) and mini-batches of one-step transitions (x_t, a_t, r_t, x_{t+1}, γ_t) drawn randomly from an experience replay buffer (Lin, 1992). Additionally, the next-state action-values are taken from a target network, which is updated to match the current network periodically. Thus, the temporal difference (TD) error for transition t used by these algorithms is given by
\delta_t = r_t + \gamma_t \max_{a' \in \mathcal{A}} Q(x_{t+1}, a'; \bar{\theta}) - Q(x_t, a_t; \theta), \qquad (1)
where θ denotes the parameters of the network and ¯θ are the parameters of the target network. | 1704.04651#7 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
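Not from the source paper: a minimal NumPy sketch of the one-step Q-Learning TD error with a target network, as in Equation (1) of the 1704.04651#7 chunk above. Plain arrays stand in for the online and target networks, and the learning rate is an illustrative assumption.

```python
import numpy as np

def dqn_td_error(q_online, q_target, x, a, r, x_next, gamma=0.99):
    """delta_t = r_t + gamma * max_a' Q(x_{t+1}, a'; theta_bar) - Q(x_t, a_t; theta).
    q_online / q_target are [n_states, n_actions] arrays standing in for the
    online network and its periodically copied target network."""
    bootstrap = gamma * np.max(q_target[x_next])
    return r + bootstrap - q_online[x, a]

# Toy usage with tabular "networks"; the target table is a stale copy of the online one.
q_online = np.zeros((5, 3))
q_target = q_online.copy()          # refreshed every C updates in DQN-style training
delta = dqn_td_error(q_online, q_target, x=0, a=1, r=1.0, x_next=2)
q_online[0, 1] += 0.1 * delta       # semi-gradient update with learning rate 0.1
```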
1704.04683 | 7 | The advantages of our proposed dataset over existing large datasets in machine reading comprehension can be summarized as follows:
• All questions and candidate options are generated by human experts, which are intentionally designed to test a human agent's ability in reading comprehension. This makes RACE a relatively accurate indicator for reflecting the text comprehension ability of machine learning systems under human judgement.
• The questions are substantially more difficult than those in existing datasets, in terms of the large portion of questions involving reasoning. In the meantime, the dataset is also sufficiently large to support the training of deep learning models.
• Unlike existing large-scale datasets, candidate options in RACE are human-generated sentences which may not appear in the original passage. This makes the task more challenging and allows a rich type of questions such as passage summarization and attitude analysis.
• Broad coverage in various domains and writing styles: a desirable property for evaluating generic (in contrast to domain/style-specific) comprehension ability of learning models.
# 2 Related Work
In this section, we briefly outline existing datasets for the machine reading comprehension task, including their strengths and weaknesses.
# 2.1 MCTest | 1704.04683#7 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 8 | where θ denotes the parameters of the network and ¯θ are the parameters of the target network.
Since this seminal work, we have seen numerous extensions and improvements that all share the same underlying framework. Double DQN (van Hasselt et al., 2016) attempts to correct for the over-estimation bias inherent in Q-Learning by changing the second term of (1) to Q(x_{t+1}, arg max_{a' ∈ A} Q(x_{t+1}, a'; θ); θ̄). The dueling architecture (Wang et al., 2015) changes the network to estimate action-values through separate value and advantage heads, V(x; θ) and A(x, a; θ), which are combined to form Q(x, a; θ).
Recently, Hessel et al. (2017) introduced Rainbow, a value-based reinforcement learning agent combining many of these improvements into a single agent and demonstrating that they are largely complementary. Rainbow significantly outperforms previous methods, but also inherits the poorer time-efficiency of the DQN framework. We include a detailed comparison between Reactor and Rainbow in the Appendix. In the remainder of the section we will describe in more depth other recent improvements to DQN.
2.1.1 PRIORITIZED EXPERIENCE REPLAY | 1704.04651#8 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
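Not from the source paper: a small sketch of the Double DQN target described in the 1704.04651#8 chunk above, where the action is selected by the online parameters and evaluated by the target parameters. Arrays again stand in for the networks.

```python
import numpy as np

def double_dqn_target(q_online, q_target, r, x_next, gamma=0.99):
    """Double DQN: the argmax over actions uses the online parameters,
    while the value of that action comes from the target parameters."""
    a_star = int(np.argmax(q_online[x_next]))       # selection: online net
    return r + gamma * q_target[x_next, a_star]     # evaluation: target net

q_online = np.random.default_rng(1).normal(size=(5, 3))
q_target = q_online + 0.05                          # stale-copy stand-in
print(double_dqn_target(q_online, q_target, r=0.0, x_next=2))
```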
1704.04683 | 8 | In this section, we briefly outline existing datasets for the machine reading comprehension task, including their strengths and weaknesses.
# 2.1 MCTest
MCTest (Richardson et al., 2013) is a popular dataset for question answering in the same format as RACE, where each question is associated with four candidate answers with a single correct answer. Although questions in MCTest are of high quality, ensured by careful examinations through crowdsourcing, it contains only 500 stories and 2000 questions, which substantially restricts its usage in training advanced machine comprehension models. Moreover, while MCTest is designed for 7-year-old children, RACE is constructed for middle and high school students at 12–18 years old, and hence is more complicated and requires stronger reasoning skills. In other words, RACE can be viewed as a larger and more difficult version of the MCTest dataset.
# 2.2 Cloze-style datasets
The past few years have witnessed several large-scale cloze-style datasets (Hermann et al., 2015; Hill et al., 2015; Bajgar et al., 2016; Onishi et al., 2016), whose questions are formulated by obliterating a word or an entity in a sentence. | 1704.04683#8 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 9 | 2.1.1 PRIORITIZED EXPERIENCE REPLAY
The experience replay buffer was first introduced by Lin (1992) and later used in DQN (Mnih et al., 2015). Typically, the replay buffer is essentially a first-in-first-out queue with new transitions gradually replacing older transitions. The agent would then sample a mini-batch uniformly at random from the replay buffer. Drawing inspiration from prioritized sweeping (Moore & Atkeson, 1993), prioritized experience replay replaces the uniform sampling with prioritized sampling proportional to the absolute TD error (Schaul et al., 2016).
Specifically, for a replay buffer of size N, prioritized experience replay samples transition t with probability P(t), and applies weighted importance-sampling with w_t to correct for the prioritization bias, where
P(t) = \frac{p_t^{\alpha}}{\sum_k p_k^{\alpha}}, \qquad w_t = \left(\frac{1}{N \cdot P(t)}\right)^{\beta}, \qquad p_t = |\delta_t| + \epsilon, \qquad \alpha, \beta, \epsilon > 0. \qquad (2)
Prioritized DQN significantly increases both the sample-efficiency and final performance over DQN on the Atari 2600 benchmarks (Schaul et al., 2015).
# 2.1.2 RETRACE(λ) | 1704.04651#9 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
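Not from the source paper: a minimal NumPy sketch of the prioritized-replay sampling probabilities and importance-sampling weights in Equation (2) of the 1704.04651#9 chunk above. The max-normalisation of the weights is a common implementation choice and an assumption here, not part of Equation (2).

```python
import numpy as np

def priorities(td_errors, eps=1e-3):
    """p_t = |delta_t| + eps."""
    return np.abs(td_errors) + eps

def sample_prioritized(td_errors, batch_size, alpha=0.6, beta=0.4, rng=None):
    """P(t) = p_t^alpha / sum_k p_k^alpha ;  w_t = (1 / (N * P(t)))^beta."""
    rng = rng or np.random.default_rng()
    p = priorities(td_errors) ** alpha
    probs = p / p.sum()
    idx = rng.choice(len(td_errors), size=batch_size, p=probs)
    weights = (1.0 / (len(td_errors) * probs[idx])) ** beta
    weights /= weights.max()        # assumed normalisation for stability
    return idx, weights

idx, w = sample_prioritized(np.array([0.1, 2.0, 0.5, 0.0]), batch_size=2)
```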
1704.04683 | 9 | Passage: In a small village in England about 150 years ago, a mail coach was standing on the street. It didn't come to that village often. People had to pay a lot to get a letter. The person who sent the letter didn't have to pay the postage, while the receiver had to. "Here's a letter for Miss Alice Brown," said the mailman. "I'm Alice Brown," a girl of about 18 said in a low voice. Alice looked at the envelope for a minute, and then handed it back to the mailman. "I'm sorry I can't take it, I don't have enough money to pay it", she said. A gentleman standing around were very sorry for her. Then he came up and paid the postage for her. When the gentleman gave the letter to her, she said with a smile, "Thank you very much, This letter is from Tom. I'm going to marry him. He went to London to look for work. I've waited a long time for this letter, but now I don't need it, there is nothing in it." "Really? How do you know that?" the gentleman said in surprise. "He | 1704.04683#9 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 10 | # 2.1.2 RETRACE(λ)
Retrace(λ) is a convergent off-policy multi-step algorithm extending the DQN agent (Munos et al., 2016). Assume that some trajectory {x_0, a_0, r_0, x_1, a_1, r_1, . . . , x_t, a_t, r_t, . . .} has been generated according to behaviour policy μ, i.e., a_t ∼ μ(·|x_t). Now, we aim to evaluate the value of a different target policy π, i.e. we want to estimate Q^π. The Retrace algorithm will update our current estimate Q of Q^π in the direction of
\Delta Q(x_t, a_t) \overset{\mathrm{def}}{=} \sum_{s \ge t} \gamma^{s-t} (c_{t+1} \cdots c_s)\, \delta_s Q, \qquad (3)
where
\delta_s Q \overset{\mathrm{def}}{=} r_s + \gamma \mathbb{E}_{\pi}[Q(x_{s+1}, \cdot)] - Q(x_s, a_s) is the temporal difference at time s under π, and
c_s = \lambda \min(1, \rho_s), \qquad \rho_s = \frac{\pi(a_s|x_s)}{\mu(a_s|x_s)}. \qquad (4) | 1704.04651#10 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
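Not from the source paper: a minimal NumPy sketch of the Retrace(λ) correction in Equations (3)-(4) of the 1704.04651#10 chunk above, computed for the first state-action pair of a short trajectory. Tabular arrays stand in for the critic and the two policies; shapes and the toy data are assumptions.

```python
import numpy as np

def retrace_delta_q(q, pi, mu, xs, acts, rews, gamma=0.99, lam=1.0):
    """Delta Q(x_0, a_0) = sum_s gamma^s (c_1 ... c_s) delta_s Q, with
    c_s = lam * min(1, pi(a_s|x_s) / mu(a_s|x_s)) and
    delta_s Q = r_s + gamma * E_pi[Q(x_{s+1}, .)] - Q(x_s, a_s).
    q, pi, mu: [n_states, n_actions]; xs has length T+1, acts and rews length T."""
    T = len(acts)
    total, trace = 0.0, 1.0
    for s in range(T):
        if s > 0:
            ratio = pi[xs[s], acts[s]] / mu[xs[s], acts[s]]
            trace *= lam * min(1.0, ratio)          # running product c_1 ... c_s
        exp_q_next = float(pi[xs[s + 1]] @ q[xs[s + 1]])   # E_pi[Q(x_{s+1}, .)]
        delta_s = rews[s] + gamma * exp_q_next - q[xs[s], acts[s]]
        total += (gamma ** s) * trace * delta_s
    return total

rng = np.random.default_rng(0)
q = rng.normal(size=(3, 2))
pi = np.full((3, 2), 0.5)
mu = np.full((3, 2), 0.5)
print(retrace_delta_q(q, pi, mu, xs=[0, 1, 2], acts=[0, 1], rews=[1.0, 0.0]))
```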
1704.04683 | 10 | I don't need it, there is nothing in it." "Really? How do you know that?" the gentleman said in surprise. "He told me that he would put some signs on the envelope. Look, sir, this cross in the corner means that he is well and this circle means he has found work. That's good news." The gentleman was Sir Rowland Hill. He didn't forgot Alice and her letter. "The postage to be paid by the receiver has to be changed," he said to himself and had a good plan. "The postage has to be much lower, what about a penny? And the person who sends the letter pays the postage. He has to buy a stamp and put it on the envelope." he said. The government accepted his plan. Then the first stamp was put out in 1840. It was called the "Penny Black". It had a picture of the Queen on it. Questions: 1): The first postage stamp was made . A. in England B. in America C. by Alice D. in 1910 2): The girl handed the letter back to the mailman because . A. she didn't know whose letter it | 1704.04683#10 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 11 | c_s = \lambda \min(1, \rho_s), \qquad \rho_s = \frac{\pi(a_s|x_s)}{\mu(a_s|x_s)}. \qquad (4)
The Retrace algorithm comes with the theoretical guarantee that in finite state and action spaces, repeatedly updating our current estimate Q according to (3) produces a sequence of Q functions which converges to Q^π for a fixed π or to Q* if we consider a sequence of policies π which become increasingly greedy w.r.t. the Q estimates (Munos et al., 2016).
# 2.1.3 DISTRIBUTIONAL RL
Distributional reinforcement learning refers to a class of algorithms that directly estimate the distribution over returns, whose expectation gives the traditional value function (Bellemare et al., 2017). Such approaches can be made tractable with a distributional Bellman equation, and the recently proposed algorithm C51 showed state-of-the-art performance in the Atari 2600 benchmarks. C51 parameterizes the distribution over returns with a mixture over Diracs centered on a uniform grid,
Q(x, a; \theta) = \sum_{i=0}^{N-1} q_i(x, a; \theta)\, z_i, \qquad z_i = v_{\min} + i\, \Delta z, \qquad \Delta z = \frac{v_{\max} - v_{\min}}{N - 1}, \qquad (5)
with hyperparameters v_min, v_max that bound the distribution support of size N. | 1704.04651#11 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
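Not from the source paper: a minimal NumPy sketch of the C51 support grid and mean action-value in Equation (5) of the 1704.04651#11 chunk above. The logits array is an illustrative stand-in for the network's distributional head.

```python
import numpy as np

def c51_support(v_min=-10.0, v_max=10.0, n_atoms=51):
    """z_i = v_min + i * (v_max - v_min) / (N - 1), i = 0..N-1."""
    return np.linspace(v_min, v_max, n_atoms)

def c51_mean_q(logits, support):
    """Q(x, a) = sum_i q_i(x, a) * z_i, with q_i softmax probabilities per action.
    logits: [n_actions, n_atoms] stand-in for the network head output."""
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    return probs @ support

support = c51_support()
logits = np.zeros((4, 51))            # hypothetical head output for 4 actions
print(c51_mean_q(logits, support))    # uniform distribution over a symmetric grid -> Q = 0
```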
1704.04683 | 11 | in England B. in America C. by Alice D. in 1910 2): The girl handed the letter back to the mailman because . A. she didn't know whose letter it was B. she had no money to pay the postage C. she received the letter but she didn't want to open it D. she had already known what was written in the letter 3): We can know from Alice's words that A. Tom had told her what the signs meant before leaving B. Alice was clever and could guess the meaning of the signs C. Alice had put the signs on the envelope herself D. Tom had put the signs as Alice had told him to . 4): The idea of using stamps was thought of by . A. the government B. Sir Rowland Hill C. Alice Brown D. Tom 5): From the passage we know the high postage made . A. people never send each other letters B. lovers almost lose every touch with each other C. people try their best to avoid paying it D. receivers refuse to pay the coming letters Answer: ADABC | 1704.04683#11 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 12 | with hyperparameters vmin, vmax that bound the distribution support of size N .
# 2.2 ACTOR-CRITIC ALGORITHMS
In this section we review the actor-critic framework for reinforcement learning algorithms and then discuss recent advances in actor-critic algorithms along with their various trade-offs. The asynchronous advantage actor-critic (A3C) algorithm (Mnih et al., 2016), maintains a parameterized policy π(a|x; θ) and value function V(x; θ_v), which are updated with
\Delta\theta = \nabla_{\theta} \log \pi(a_t|x_t; \theta)\, A(x_t, a_t; \theta_v), \qquad \Delta\theta_v = A(x_t, a_t; \theta_v)\, \nabla_{\theta_v} V(x_t), \qquad (6)
where \quad A(x_t, a_t; \theta_v) = \sum_{k=0}^{n-1} \gamma^k r_{t+k} + \gamma^n V(x_{t+n}) - V(x_t). \qquad (7)
A3C uses M = 16 parallel CPU workers, each acting independently in the environment and applying the above updates asynchronously to a shared set of parameters. In contrast to the previously discussed value-based methods, A3C is an on-policy algorithm, and does not use a GPU nor a replay buffer. | 1704.04651#12 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
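Not from the source paper: a minimal sketch of the n-step advantage in Equation (7) of the 1704.04651#12 chunk above, which scales both gradient directions in Equation (6). The toy numbers are assumptions.

```python
import numpy as np

def nstep_advantage(rews, v_start, v_bootstrap, gamma=0.99):
    """A(x_t, a_t) = sum_{k=0}^{n-1} gamma^k r_{t+k} + gamma^n V(x_{t+n}) - V(x_t),
    computed here for t = 0 with n = len(rews)."""
    n = len(rews)
    n_step_return = sum((gamma ** k) * r for k, r in enumerate(rews))
    n_step_return += (gamma ** n) * v_bootstrap
    return n_step_return - v_start

# Toy usage: this advantage multiplies grad log pi(a_t|x_t) for the actor
# and grad V(x_t) for the critic, as in Eq. (6).
adv = nstep_advantage(rews=[1.0, 0.0, 1.0], v_start=0.5, v_bootstrap=0.2)
print(adv)
```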
1704.04683 | 12 | Table 1: Sample reading comprehension problems from our dataset.
CNN/Daily Mail (Hermann et al., 2015) are the largest machine comprehension datasets with 1.4M questions. However, both require limited reasoning ability (Chen et al., 2016). In fact, the best machine performance obtained by researchers (Chen et al., 2016; Dhingra et al., 2016) is close to human performance on CNN/Daily Mail.
using one as the passage and the other as the question.
High noise is inevitable in cloze-style datasets due to their automatic generation process, which is reflected in the human performance on these datasets: 82% for CBT and 84% for WDW.
Children's Book Test (CBT) (Hill et al., 2015) and Book Test (BT) (Bajgar et al., 2016) are constructed in a similar manner. Each passage in CBT consists of 20 contiguous sentences extracted from children's books and the next (21st) sentence is used to make the question. The main difference between the two datasets is the size of BT being 60 times larger. Machine comprehension models have also matched human performance on CBT (Bajgar et al., 2016). | 1704.04683#12 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 13 | Proximal Policy Optimization (PPO) is a closely related actor-critic algorithm (Schulman et al., 2017), which replaces the advantage (7) with,
\min\big(\rho_t\, A(x_t, a_t; \theta_v),\ \mathrm{clip}(\rho_t, 1-\epsilon, 1+\epsilon)\, A(x_t, a_t; \theta_v)\big), \qquad \epsilon > 0,
where ρ_t is as defined in Section 2.1.2. Although both PPO and A3C run M parallel workers collecting trajectories independently in the environment, PPO collects these experiences to perform a single, synchronous, update in contrast with the asynchronous updates of A3C.
Actor-Critic Experience Replay (ACER) extends the A3C framework with an experience replay buffer, Retrace algorithm for off-policy corrections, and the Truncated Importance Sampling Likelihood Ratio (TISLR) algorithm used for off-policy policy optimization (Wang et al., 2017).
# 3 THE REACTOR
The Reactor is a combination of four novel contributions on top of recent improvements to both deep value-based RL and policy-gradient algorithms. Each contribution moves Reactor towards our goal of achieving both sample and time efficiency.
# 3.1 β-LOO | 1704.04651#13 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
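Not from the source paper: a minimal NumPy sketch of the PPO clipped objective term quoted in the 1704.04651#13 chunk above, where the advantage is replaced by the minimum of the importance-weighted and clipped importance-weighted advantage. The example values are assumptions.

```python
import numpy as np

def ppo_clipped_objective(ratio, advantage, eps=0.2):
    """min(rho_t * A_t, clip(rho_t, 1 - eps, 1 + eps) * A_t), with
    rho_t = pi(a_t|x_t) / mu(a_t|x_t) the importance ratio."""
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(ratio * advantage, clipped)

print(ppo_clipped_objective(ratio=np.array([0.5, 1.5]),
                            advantage=np.array([1.0, 1.0])))
```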
1704.04683 | 13 | Who Did What (WDW) (Onishi et al., 2016) is yet another cloze-style dataset constructed from the LDC English Gigaword newswire corpus. The authors generate passages and questions by picking two news articles describing the same event,
# 2.3 Datasets with Span-based Answers
In datasets such as SQUAD (Rajpurkar et al., 2016), NEWSQA (Trischler et al., 2016), MS MARCO (Nguyen et al., 2016) and the recently proposed TriviaQA (Joshi et al., 2017), the answer to each question is in the form of a text span in the article. Articles of SQUAD, NEWSQA and MS MARCO come from Wikipedia, CNN news and the Bing search engine respectively. The answer to a certain question may not be unique and could be multiple spans. Instead of evaluating the accuracy, researchers need to use F1 score, BLEU (Papineni et al., 2002) or ROUGE (Lin and Hovy, 2003) as metrics, which measure the overlap between the prediction and ground truth answers since the
questions come without candidate spans. | 1704.04683#13 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 14 | # 3.1 β-LOO
The Reactor architecture represents both a policy π(a|x) and action-value function Q(x, a). We use a policy gradient algorithm to train the actor π which makes use of our current estimate Q(x, a) of Q^π(x, a). Let V^π(x_0) be the value function at some initial state x_0; the policy gradient theorem says that ∇V^π(x_0) = E[∑_t γ^t ∑_a Q^π(x_t, a) ∇π(a|x_t)], where ∇ refers to the gradient w.r.t. policy parameters (Sutton et al., 2000). We now consider several possible ways to estimate this gradient.
To simplify notation, we drop the dependence on the state x for now and consider the problem of estimating the quantity
G = \sum_a Q^{\pi}(a)\, \nabla\pi(a). \qquad (8)
In the off-policy case, we consider estimating G using a single action â drawn from a (possibly different from π) behaviour distribution â ∼ μ. Let us assume that for the chosen action â we have access to an unbiased estimate R(â) of Q^π(â). Then, we can use the likelihood ratio (LR) method combined with an importance sampling (IS) ratio (which we call ISLR) to build an unbiased estimate of G: | 1704.04651#14 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
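Not from the source paper: a minimal NumPy sketch of the importance-sampled likelihood-ratio (ISLR) estimate described at the end of the 1704.04651#14 chunk above (its formula appears at the start of the next 1704.04651 chunk). The softmax-over-logits policy is an illustrative stand-in for the actor network; for it, grad_logits log pi(a) = onehot(a) - pi.

```python
import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def islr_gradient(logits, mu, a_hat, ret, baseline):
    """G_ISLR = (pi(a_hat)/mu(a_hat)) * (R(a_hat) - V) * grad log pi(a_hat)."""
    pi = softmax(logits)
    grad_log_pi = -pi.copy()
    grad_log_pi[a_hat] += 1.0            # onehot(a_hat) - pi
    return (pi[a_hat] / mu[a_hat]) * (ret - baseline) * grad_log_pi

g = islr_gradient(np.zeros(4), mu=np.full(4, 0.25), a_hat=2, ret=1.0, baseline=0.3)
print(g)
```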
1704.04683 | 14 | questions come without candidate spans.
Datasets with span-based answers are challenging as the space of possible spans is usually large. However, restricting answers to be text spans in the context passage may be unrealistic and, more importantly, may not be intuitive even for humans, as indicated by the reduced human performance of 80.3% on SQUAD (or 65% claimed by Trischler et al. (2016)) and 46.5% on NEWSQA. In other words, the format of span-based answers may not necessarily be a good examination of the reading comprehension of machines whose aim is to approach the comprehension ability of humans.
# 2.4 Datasets from Examinations | 1704.04683#14 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 15 | \hat{G}_{\mathrm{ISLR}} = \frac{\pi(\hat{a})}{\mu(\hat{a})} \big(R(\hat{a}) - V\big)\, \nabla \log \pi(\hat{a}),
where V is a baseline that depends on the state but not on the chosen action. However this estimate suffers from high variance. A possible way for reducing variance is to estimate G directly from (8) by using the return R(â) for the chosen action â and our current estimate Q of Q^π for the other actions, which leads to the so-called leave-one-out (LOO) policy-gradient estimate:
\hat{G}_{\mathrm{LOO}} = R(\hat{a})\, \nabla\pi(\hat{a}) + \sum_{a \neq \hat{a}} Q(a)\, \nabla\pi(a). \qquad (9)
[Figure 1 diagram: steps include mixing the action-value distributions, shrinking the mixed distribution by γ, and obtaining the target probabilities.]
Figure 1: Single-step (left) and multi-step (right) distribution bootstrapping.
This estimate has low variance but may be biased if the estimated Q values differ from Q^π. A better bias-variance tradeoff may be obtained by the more general β-LOO policy-gradient estimate: | 1704.04651#15 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
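Not from the source paper: a minimal NumPy sketch of the leave-one-out estimator of Equation (9) above and its β-LOO generalisation (Equation (10) in the next 1704.04651 chunk). The softmax-over-logits policy is an illustrative stand-in, for which d pi(a)/d logit_j = pi(a) (1{a=j} - pi(j)); setting beta = 1 recovers the plain LOO estimate.

```python
import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def beta_loo_gradient(logits, q, a_hat, ret, beta=1.0):
    """G_{beta-LOO} = beta * (R(a_hat) - Q(a_hat)) * grad pi(a_hat)
                      + sum_a Q(a) * grad pi(a)."""
    pi = softmax(logits)
    jac = np.diag(pi) - np.outer(pi, pi)      # jac[a, j] = d pi(a) / d logit_j
    correction = jac.T @ q                    # sum_a Q(a) grad pi(a)
    return beta * (ret - q[a_hat]) * jac[a_hat] + correction

g = beta_loo_gradient(np.zeros(3), q=np.array([0.1, 0.5, 0.2]), a_hat=1, ret=0.7)
print(g)
```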
1704.04683 | 15 | # 2.4 Datasets from Examinations
There have been several datasets extracted from examinations, aiming at evaluating systems under the same conditions as how humans are evaluated in schools. E.g., the AI2 Elementary School Science Questions dataset (Khashabi et al., 2016) contains 1080 questions for students in elementary schools; NTCIR QA Lab (Shibuki et al., 2014) evaluates systems by the task of solving real-world university entrance exam questions; the Entrance Exams task at CLEF QA Track (Peñas et al., 2014; Rodrigo et al., 2015) evaluates the system's reading comprehension ability. However, data provided in these existing tasks are far from sufficient for the training of advanced data-driven machine reading models, partially due to the expensive data generation process by human experts.
To the best of our knowledge, RACE is the first large-scale dataset of this type, where questions are created based on exams designed to evaluate human performance in reading comprehension.
# 3 Data Analysis
In this section, we study the nature of questions covered in RACE at a detailed level. Specifically, we present the dataset statistics in Section 3.1, and then analyze different reasoning/question types in RACE in the remaining subsections.
# 3.1 Dataset Statistics | 1704.04683#15 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 16 | G_{\beta\text{-}LOO} = \beta (R(\hat{a}) - Q(\hat{a})) \nabla\pi(\hat{a}) + \sum_{a} Q(a) \nabla\pi(a), \qquad (10)
where β = β(µ, π, â) can be a function of both policies, π and µ, and the selected action â. Notice that when β = 1, (10) reduces to (9), and when β = 1/µ(â), then (10) is
G_{1/\mu\text{-}LOO} = \frac{\pi(\hat{a})}{\mu(\hat{a})} (R(\hat{a}) - Q(\hat{a})) \nabla\log\pi(\hat{a}) + \sum_{a} Q(a) \nabla\pi(a). \qquad (11)
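As a concrete illustration, the following is a minimal NumPy sketch of the β-LOO estimate (10) for a discrete action set. The function and argument names are our own (not from the paper), and the truncation β = min(c, 1/µ(â)) follows the choice recommended later in the text:

```python
import numpy as np

def beta_loo_gradient(pi_grad, Q, a_hat, R, mu_a_hat, c=2.0):
    """Hedged sketch of the beta-LOO policy-gradient estimate (10).

    pi_grad:  (A, P) array; row a is the gradient of pi(a) w.r.t. the P policy parameters
    Q:        (A,) current action-value estimates
    a_hat:    index of the action actually taken
    R:        sampled return estimate for a_hat (e.g. a Retrace-corrected return)
    mu_a_hat: behaviour-policy probability mu(a_hat)
    c:        truncation constant, giving beta = min(c, 1 / mu(a_hat))
    """
    pi_grad = np.asarray(pi_grad)
    Q = np.asarray(Q)
    beta = min(c, 1.0 / mu_a_hat)
    # beta * (R(a_hat) - Q(a_hat)) * grad pi(a_hat)
    g = beta * (R - Q[a_hat]) * pi_grad[a_hat]
    # plus the correction term sum_a Q(a) * grad pi(a)
    g += pi_grad.T @ Q
    return g
```

With β = 1 this reduces to the plain LOO estimate (9), and with β = 1/µ(â) it matches (11).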
The estimate (11) is unbiased and can be seen as a generalization of G_ISLR where instead of using a state-only dependent baseline, we use a state-and-action-dependent baseline (our current estimate Q) and add the correction term Σ_a ∇π(a)Q(a) to cancel the bias. Proposition 1 gives our analysis of the bias of G_{β-LOO}, with a proof left to the Appendix. Proposition 1. Assume â ∼ µ and that E[R(â)] = Q^π(â). Then, the bias of G_{β-LOO} is |Σ_a (1 − µ(a)β(a)) ∇π(a) [Q(a) − Q^π(a)]|. | 1704.04651#16 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
1704.04683 | 16 | # 3.1 Dataset Statistics
As mentioned in section 1, RACE is collected from English examinations designed for 12-15 year-old middle school students, and 15-18 year-old high school students in China. To distinguish the two subgroups with drastic difficulty gap, RACE-M denotes the middle school examinations and RACE-H denotes high school examinations. We split 5% data as the development set and 5% as the test set for RACE-M and RACE-H respectively. The number of samples in each set is shown in Table 2. The statistics for RACE-M and RACE-H is summarized in Table 3. We can find that the length of the passages and the vocabulary size in the RACE-H are much larger than that of the RACE-M, an evidence of the higher difficulty of high school examinations.
However, notice that since the articles and questions are selected and designed to test Chinese students learning English as a foreign language, the vocabulary size and the complexity of the language constructs are simpler than news articles and Wikipedia articles in other QA datasets.
# 3.2 Reasoning Types of the Questions | 1704.04683#16 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 17 | Thus the bias is small when β(a) is close to 1/µ(a), or when the Q-estimates are close to the true Q^π values, and unbiased regardless of the estimates if β(a) = 1/µ(a). The variance is low when β is small; therefore, in order to improve the bias-variance tradeoff we recommend using the β-LOO estimate with β defined as β(â) = min(c, 1/µ(â)), for some constant c > 1. This truncated 1/µ coefficient shares similarities with the truncated IS gradient estimate introduced in prior work (which we call TISLR for truncated-ISLR):
G_{TISLR} = \min\Big(c, \frac{\pi(\hat{a})}{\mu(\hat{a})}\Big) (R(\hat{a}) - V) \nabla\log\pi(\hat{a}) + \sum_a \Big(\frac{\pi(a)}{\mu(a)} - c\Big)_{+} \mu(a) (Q^\pi(a) - V) \nabla\log\pi(a). | 1704.04651#17 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
1704.04683 | 17 | # 3.2 Reasoning Types of the Questions
To get a comprehensive picture about the reasoning difficulty requirement of RACE, we conduct human annotations of question types. Following Chen et al. (2016); Trischler et al. (2016), we stratify the questions into five classes as follows with ascending order of difficulty:
• Word matching: The question exactly matches a span in the article. The answer is self-evident.
• Paraphrasing: The question is entailed or paraphrased by exactly one sentence in the passage. The answer can be extracted within the sentence.
• Single-sentence reasoning: The answer could be inferred from a single sentence of the article by recognizing incomplete information or conceptual overlap.
• Multi-sentence reasoning: The answer must be inferred from synthesizing information distributed across multiple sentences.
• Insufficient/Ambiguous: The question has no answer or the answer is not unique based on the given passage.
We refer readers to (Chen et al., 2016; Trischler et al., 2016) for examples of each category. | 1704.04683#17 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 18 | G_{TISLR} = \min\Big(c, \frac{\pi(\hat{a})}{\mu(\hat{a})}\Big) (R(\hat{a}) - V) \nabla\log\pi(\hat{a}) + \sum_a \Big(\frac{\pi(a)}{\mu(a)} - c\Big)_{+} \mu(a) (Q^\pi(a) - V) \nabla\log\pi(a).
The differences are: (i) we truncate 1/µ(â) = π(â)/µ(â) × 1/π(â) instead of truncating π(â)/µ(â), which provides an additional variance reduction due to the variance of the LR ∇log π(â) = ∇π(â)/π(â) (since this LR may be large when a low probability action is chosen), and (ii) we use our Q-baseline instead of a V baseline, reducing further the variance of the LR estimate.
3.2 DISTRIBUTIONAL RETRACE
In off-policy learning it is very difficult to produce an unbiased sample R(â) of Q^π(â) when following another policy µ. This would require using full importance sampling correction along the trajectory. Instead, we use the off-policy corrected return computed by the Retrace algorithm, which produces a (biased) estimate of Q^π(â) but whose bias vanishes asymptotically (Munos et al., 2016). | 1704.04651#18 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
1704.04683 | 18 | We refer readers to (Chen et al., 2016; Trischler et al., 2016) for examples of each category.
To obtain the proportion of different question types, we sample 100 passages from RACE (50 from RACE-M and 50 from RACE-H), all of which have 5 questions hence there are 500 questions in total. We put the passages on Amazon Me-
RACE-M: Train 6,409 passages / 25,421 questions; Dev 368 / 1,436; Test 362 / 1,436
RACE-H: Train 18,728 / 62,445; Dev 1,021 / 3,451; Test 1,045 / 3,498
RACE: Train 25,137 / 87,866; Dev 1,389 / 4,887; Test 1,407 / 4,934
All: 27,933 passages / 97,687 questions
Table 2: The separation of the training, development and test sets of RACE-M, RACE-H and RACE
Dataset: Passage Len / Question Len / Option Len / Vocab size
RACE-M: 231.1 / 9.0 / 3.9 / 32,811
RACE-H: 353.1 / 10.4 / 5.8 / 125,120
RACE: 321.9 / 10.0 / 5.3 / 136,629 | 1704.04683#18 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 19 | In Reactor, we consider predicting an approximation of the return distribution function from any state-action pair (x, a) in a similar way as in Bellemare et al. (2017). The original algorithm C51 described in that paper considered single-step Bellman updates only. Here we need to extend this idea to multi-step updates and handle the off-policy correction performed by the Retrace algorithm, as defined in (3). Next, we describe these two extensions.
Multi-step distributional Bellman operator: First, we extend C51 to multi-step Bellman backups. We consider return-distributions from (x, a) of the form Σ_i q_i(x, a) δ_{z_i} (where δ_z denotes a Dirac in z)
Published as a conference paper at ICLR 2018 | 1704.04651#19 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
1704.04683 | 19 | 1. Detail reasoning: to answer the question, the agent should be clear about the details of the passage. The answer appears in the passage but it cannot be found by simply matching the question with the passage. For example, Question 1 in the sample passage falls into this category.
Table 3: Statistics of RACE where Len denotes length and Vocab denotes Vocabulary.
chanical Turk1, and a Hit is generated by a passage with 5 questions. Each question is labeled by two crowdworkers. We require the turkers to both an- swer the questions and label the reasoning type. We pay $0.70 and $1.00 per passage in RACE-M and RACE-H respectively, and restrict the access to master turkers only. Finally, we get 1000 labels for the 500 questions.
2. Whole-picture reasoning: the agent needs to understand the whole picture of the story to obtain the correct answer. For example, to answer the Question 2 in the sample passage, the agent is required to comprehend the entire story.
3. Passage summarization: The question re- quires the agent to select the best summarization of the passage among four candidate summariza- tions. A typical question of this type is âThe main idea of this passage is .â. An example question can be found in Appendix A.1. | 1704.04683#19 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 20 | 5
which are supported on a finite uniform grid {z_i} ∈ [v_min, v_max], z_i < z_{i+1}, z_1 = v_min, z_m = v_max. The coefficients q_i(x, a) (discrete distribution) correspond to the probabilities assigned to each atom z_i of the grid. From an observed n-step sequence {x_t, a_t, r_t, x_{t+1}, ..., x_{t+n}}, generated by behavior policy µ (i.e., a_s ∼ µ(·|x_s) for t ≤ s < t + n), we build the n-step backed-up return-distribution from (x_t, a_t). The n-step distributional Bellman target, whose expectation is Σ_{s=t}^{t+n-1} γ^{s-t} r_s + γ^n Q(x_{t+n}, a), is given by:
\sum_i q_i(x_{t+n}, a) \delta_{z_i^n}, \quad \text{with } z_i^n = \sum_{s=t}^{t+n-1} \gamma^{s-t} r_s + \gamma^n z_i.
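As a rough illustration (our own notation and names, not code from the paper), the target atoms z_i^n and their masses can be formed as follows; the projection back onto the fixed grid is described next:

```python
import numpy as np

def n_step_target_atoms(z, q_next, rewards, gamma):
    """Build the n-step distributional Bellman target for one (x_t, a_t).

    z:       (m,) support atoms z_i
    q_next:  (m,) probabilities q_i(x_{t+n}, a) at the bootstrap state-action
    rewards: list of the n observed rewards [r_t, ..., r_{t+n-1}]
    gamma:   discount factor
    Returns the shifted and scaled atom locations z_i^n together with their masses.
    """
    n = len(rewards)
    discounted_return = sum(gamma ** k * r for k, r in enumerate(rewards))
    z_n = discounted_return + gamma ** n * np.asarray(z)  # z_i^n as in the formula above
    return z_n, np.asarray(q_next)
```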
Since this distribution is supported on the set of atoms {z_i^n}, which is not necessarily aligned with the grid {z_i}, we do a projection step and minimize the KL-loss between the projected target and the current estimate, just as with C51 except with a different target distribution (Bellemare et al., 2017). | 1704.04651#20 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
1704.04683 | 20 | The statistics about the reasoning types are summarized in Table 4. The higher difficulty level of RACE is justified by its higher ratio of reasoning questions in comparison to CNN, SQUAD and NEWSQA. Specifically, 59.2% questions of RACE are either in the category of single-sentence reasoning or in the category of multi-sentence reasoning, while the ratio is 21%, 20.5% and 33.9% for CNN, SQUAD and NEWSQA respectively. Also notice that the ratio of word matching questions on RACE is only 15.8%, the lowest among several categories. In addition, questions in RACE-H are more complex than questions in RACE-M since RACE-M has more word matching questions and fewer reasoning questions.
4. Attitude analysis: The question asks about the opinions/attitudes of the author or a character in the story towards somebody or something, e.g.,
• Evidence: “. . . Many people optimistically thought industry awards for better equipment would stimulate the production of quieter appliances. It was even suggested that noise from building sites could be alleviated . . . ”
⢠Question: What was the authorâs attitude towards the industry awards for quieter? | 1704.04683#20 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 21 | Distributional Retrace: Now, the Retrace algorithm deï¬ned in (3) involves an off-policy correction which is not handled by the previous n-step distributional Bellman backup. The key to extending this distributional back-up to off-policy learning is to rewrite the Retrace algorithm as a linear combination of n-step Bellman backups, weighted by some coefï¬cients αn,a. Indeed, notice that (3) rewrites as
\Delta Q(x_t, a_t) = \sum_{n \ge 1} \sum_{a \in A} \alpha_{n,a} \Big[ \underbrace{\sum_{s=t}^{t+n-1} \gamma^{s-t} r_s + \gamma^{n} Q(x_{t+n}, a)}_{\text{n-step Bellman backup}} \Big] - Q(x_t, a_t),
where α_{n,a} = (c_{t+1} ... c_{t+n-1}) (π(a|x_{t+n}) − I{a = a_{t+n}} c_{t+n}). These coefficients depend on the degree of off-policy-ness (between µ and π) along the trajectory. We have that Σ_{n≥1} Σ_a α_{n,a} = Σ_{n≥1} (c_{t+1} ... c_{t+n-1}) (1 − c_{t+n}) = 1, but notice some coefficients may be negative. However, in expectation (over the behavior policy) they are non-negative. Indeed, | 1704.04651#21 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
1704.04683 | 21 | ⢠Question: What was the authorâs attitude towards the industry awards for quieter?
⢠Options: A.suspicious B.positive C.enthusiastic D.indifferent
# 3.3 Subdividing Reasoning Types
5. World knowledge: Certain external knowl- edge is needed. Most frequent questions under this category involve simple arithmetic.
To better understand our dataset and facilitate fu- ture research, we list the subdivisions of ques- tions under the reasoning category. We ï¬nd the most frequent reasoning subdivisions include: de- tail reasoning, whole-picture understanding, pas- sage summarization, attitude analysis and world knowledge. One question may fall into multiple divisions. Deï¬nition of these subdivisions and their associated examples are as follows:
1https://www.mturk.com/mturk/welcome
• Evidence: “The park is open from 8 am to 5 pm.”
• Question: The park is open for ___ hours a day.
• Options: A. eight B. nine C. ten D. eleven
To the best of our knowledge, questions like passage summarization and attitude analysis have not been introduced by any of the existing large- scale machine comprehension datasets. Both are | 1704.04683#21 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 22 | E_µ[α_{n,a}] = E[ (c_{t+1} ... c_{t+n-1}) E_{a_{t+n} ∼ µ(·|x_{t+n})}[ π(a|x_{t+n}) − I{a = a_{t+n}} c_{t+n} | x_{t+n} ] ] = E[ (c_{t+1} ... c_{t+n-1}) ( π(a|x_{t+n}) − µ(a|x_{t+n}) λ min(1, π(a|x_{t+n})/µ(a|x_{t+n})) ) ] ≥ 0,
by deï¬nition of the cs coefï¬cients (4). Thus in expectation (over the behavior policy), the Retrace update can be seen as a convex combination of n-step Bellman updates.
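As a small illustration (our own code and names, not the paper's), the mixture coefficients α_{n,a} defined above can be computed from the truncated importance ratios c_s along a sub-trajectory as:

```python
import numpy as np

def retrace_mixture_coefficients(pi, mu, actions, lam=1.0):
    """Compute alpha_{n,a} for a length-N sub-trajectory.

    pi, mu:  (N, A) target- and behaviour-policy probabilities at states x_{t+1}, ..., x_{t+N}
    actions: (N,) actions a_{t+1}, ..., a_{t+N} taken by the behaviour policy
    lam:     the lambda parameter of Retrace
    Returns an (N, A) array whose entry [n-1, a] is alpha_{n,a}.
    """
    N, A = pi.shape
    ratios = pi[np.arange(N), actions] / mu[np.arange(N), actions]
    c = lam * np.minimum(1.0, ratios)        # truncated IS coefficients c_s
    alphas = np.zeros((N, A))
    prefix = 1.0                             # running product c_{t+1} ... c_{t+n-1}
    for n in range(1, N + 1):
        indicator = np.zeros(A)
        indicator[actions[n - 1]] = 1.0
        # alpha_{n,a} = (c_{t+1}...c_{t+n-1}) * (pi(a|x_{t+n}) - I{a = a_{t+n}} c_{t+n})
        alphas[n - 1] = prefix * (pi[n - 1] - indicator * c[n - 1])
        prefix *= c[n - 1]
    return alphas
```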
Then, the distributional Retrace algorithm can be deï¬ned as backing up a mixture of n-step distribu- tions. More precisely, we deï¬ne the Retrace target distribution as:
\sum_i q_i^*(x_t, a_t) \delta_{z_i}, \quad \text{with } q_i^*(x_t, a_t) = \sum_{n \ge 1} \sum_{a} \alpha_{n,a} \sum_{j} q_j(x_{t+n}, a) h_{z_i}(z_j^n),
where hzi(x) is a linear interpolation kernel, projecting onto the support {zi}: | 1704.04651#22 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
1704.04683 | 22 | To the best of our knowledge, questions like passage summarization and attitude analysis have not been introduced by any of the existing large- scale machine comprehension datasets. Both are
Reasoning type: RACE-M / RACE-H / RACE / CNN / SQUAD / NEWSQA
Word Matching: 29.4% / 11.3% / 15.8% / 13.0%† / 39.8%* / 32.7%*
Paraphrasing: 14.8% / 20.6% / 19.2% / 41.0%† / 34.3%* / 27.0%*
Single-Sentence Reasoning: 31.3% / 34.1% / 33.4% / 19.0%† / 8.6%* / 13.2%*
Multi-Sentence Reasoning: 22.6% / 26.9% / 25.8% / 2.0%† / 11.9%* / 20.7%*
Ambiguous/Insufficient: 1.8% / 7.1% / 5.8% / 25.0%† / 5.4%* / 6.4%*
Table 4: Statistics about reasoning types in different datasets. * denotes the numbers coming from (Trischler et al., 2016) based on 1000 samples per dataset, and numbers with † come from (Chen et al., 2016).
crucial components in evaluating humansâ reading comprehension abilities.
# 4 Collection Methodology | 1704.04683#22 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 23 | where hzi(x) is a linear interpolation kernel, projecting onto the support {zi}:
h_{z_i}(x) = \begin{cases} (x - z_{i-1})/(z_i - z_{i-1}), & \text{if } z_{i-1} \le x \le z_i \\ (z_{i+1} - x)/(z_{i+1} - z_i), & \text{if } z_i \le x \le z_{i+1} \\ 0, & \text{if } x \le z_{i-1} \text{ or } x \ge z_{i+1} \\ 1, & \text{if } (x \le v_{\min} \text{ and } z_i = v_{\min}) \text{ or } (x \ge v_{\max} \text{ and } z_i = v_{\max}) \end{cases}
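To make the projection concrete, here is a small, hedged NumPy sketch (function and variable names are ours) that spreads a set of backed-up atoms with their masses onto the fixed support using the interpolation kernel above; it exploits the uniform grid stated earlier, realizing the boundary cases by clipping:

```python
import numpy as np

def project_onto_support(z, p_target, z_target):
    """Project target atoms z_target (with masses p_target) onto the fixed grid z.

    z:        (m,) sorted uniform support atoms, z[0] = v_min, z[-1] = v_max
    p_target: (k,) masses of the backed-up distribution (may be signed for Retrace)
    z_target: (k,) locations of the backed-up atoms, e.g. z_j^n
    Returns   (m,) projected masses on the grid z.
    """
    v_min, v_max = z[0], z[-1]
    dz = z[1] - z[0]                      # uniform grid spacing
    x = np.clip(z_target, v_min, v_max)   # clipping realizes the boundary cases of h_{z_i}
    b = (x - v_min) / dz                  # fractional index of each atom on the grid
    lo = np.floor(b).astype(int)
    hi = np.minimum(lo + 1, len(z) - 1)
    frac = b - lo
    out = np.zeros(len(z))
    np.add.at(out, lo, p_target * (1.0 - frac))   # mass assigned to the left neighbour
    np.add.at(out, hi, p_target * frac)           # mass assigned to the right neighbour
    return out
```

A Retrace target distribution can then be accumulated by summing such projections over n and a with the α_{n,a} weights.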
We update the current probabilities q(xt, at) by performing a gradient step on the KL-loss
VKL(q* (x1, a), (et, a,)) = Sag (a, at )V log qi(ae, ay). (12) i=l
Again, notice that some target âprobabilitiesâ qâ i (xt, at) may be negative for some sample trajectory, but in expectation they will be non-negative. Since the gradient of a KL-loss is linear w.r.t. its ï¬rst argument, our update rule (12) provides an unbiased estimate of the gradient of the KL between the expected (over the behavior policy) Retrace target distribution and the current predicted distribution.1
1We store past action probabilities µ together with actions taken in the replay memory.
6 | 1704.04651#23 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
1704.04683 | 23 | crucial components in evaluating humansâ reading comprehension abilities.
# 4 Collection Methodology
We collected the raw data from three large free public websites in China2, where the reading com- prehension problems are extracted from English examinations designed by teachers in China. The data before cleaning contains 137,918 passages and 519,878 questions in total, where there are 38,159 passages with 156,782 questions in the middle school group, and 99,759 passages with 363,096 questions in the high school group.
The following ï¬ltering steps are conducted to clean the raw data. Firstly, we remove all prob- lems and questions that do not have the same for- mat as our problem setting, e.g., a question would be removed if the number of its options is not four. Secondly, we ï¬lter all articles and questions that are not self-contained based on the text informa- tion, i.e. we remove the articles and questions con- taining images or tables. We also remove all ques- tions containing keywords âunderlinedâ or âpara- graphâ, since it is difï¬cult to reproduce the effect of underlines and the paragraph segment informa- tion. Thirdly, we remove all duplicated articles. | 1704.04683#23 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 24 | 1We store past action probabilities µ together with actions taken in the replay memory.
Remark: The same method can be applied to other algorithms (such as TB(λ) (Precup et al., 2000) and importance sampling (Precup et al., 2001)) in order to derive distributional versions of other off-policy multi-step RL algorithms.
3.3 PRIORITIZED SEQUENCE REPLAY
Prioritized experience replay has been shown to boost both statistical efï¬ciency and ï¬nal performance of deep RL agents (Schaul et al., 2016). However, as originally deï¬ned prioritized replay does not handle sequences of transitions and weights all unsampled transitions identically. In this section we present an alternative initialization strategy, called lazy initialization, and argue that it better encodes prior information about temporal difference errors. We then brieï¬y describe our computationally efï¬cient prioritized sequence sampling algorithm, with full details left to the appendix. | 1704.04651#24 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
1704.04683 | 24 | On one of the websites (xkw.com), the answers are stored as images. We used two standard OCR programs, tesseract3 and ABBYY FineReader4, to process the images. We remove all the answers on which the two programs disagree. The OCR task is easy since we only need to recognize the printed letters A, B, C, D in a standard font. Finally, we get the cleaned dataset RACE, with 27,933 passages and 97,687 questions.
# 5 Experiments
In this section, we compare the performance of several state-of-the-art reading comprehension models with human performance. We use accu- racy as the metric to evaluate different models.
# 5.1 Methods for Comparison
Sliding Window Algorithm Firstly, we build the rule-based baseline introduced by Richardson et al. (2013). It chooses the answer having the highest matching score. Speciï¬cally, it ï¬rst con- catenates the question and the answer and then cal- culates the TF-IDF style matching score between the concatenated sentence with every window (a span of text) of the article. The window size is decided by the model performance in the training and dev sets. | 1704.04683#24 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 25 | It is widely recognized that TD errors tend to be temporally correlated; indeed, the need to break this temporal correlation has been one of the primary justifications for the use of experience replay (Mnih et al., 2015). Our proposed algorithm begins with this fundamental assumption. Assumption 1. Temporal differences are temporally correlated, with correlation decaying on average with the time-difference between two transitions.
Prioritized experience replay adds new transitions to the replay buffer with a constant priority, but given the above assumption we can devise a better method. Speciï¬cally, we propose to add experience to the buffer with no priority, inserting a priority only after the transition has been sampled and used for training. Also, instead of sampling transitions, we assign priorities to all (overlapping) sequences of length n. When sampling, sequences with an assigned priority are sampled proportionally to that priority. Sequences with no assigned priority are sampled proportionally to the average priority of assigned priority sequences within some local neighbourhood. Averages are weighted to compensate for sampling biases (i.e. more samples are made in areas of high estimated priorities, and in the absence of weighting this would lead to overestimation of unassigned priorities). | 1704.04651#25 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
1704.04683 | 25 | Stanford Attentive Reader Stanford Attentive Reader (Stanford AR) (Chen et al., 2016) is a strong model that achieves state-of-the-art results on CNN/Daily Mail. Moreover, the authors claim that their model has nearly reached the ceiling per- formance on these two datasets.
Suppose that the triple of passage, question and options is denoted by (p, q, o_{1,...,4}). We first employ bidirectional GRUs to encode p and q respectively into h^p_1, ..., h^p_n and h^q. Then we summarize the most relevant part of the passage into s^p with an attention model. Following Chen et al. (2016), we adopt a bilinear attention form. Specifically,
α_i = Softmax_i((h^p_i)^T W h^q), s^p = Σ_i α_i h^p_i
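A minimal sketch of this bilinear attention pooling, with shapes and names assumed by us rather than taken from the paper:

```python
import numpy as np

def bilinear_attention(Hp, hq, W):
    """Summarize passage encodings Hp (n, d) into s^p using the question vector hq (d,).

    Scores are the bilinear form Hp W hq, normalized with a softmax over passage positions.
    """
    scores = Hp @ W @ hq                   # (n,) bilinear matching scores
    alpha = np.exp(scores - scores.max())  # numerically stable softmax
    alpha /= alpha.sum()
    return alpha @ Hp                      # weighted sum of passage states, shape (d,)
```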
2We checked that our dataset does not include exam- ple questions of exams with copyright, such as SSAT, SAT, TOEFL and GRE.
# 3https://github.com/tesseract-ocr 4https://www.abbyy.com/FineReader
Similarly, we use bidirectional GRUs to encode option oi into a vector hoi. Finally, we com- pute the matching score between the i-th option (i = 1, · · · , 4) and the summarized passage using | 1704.04683#25 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 26 | The lazy initialization scheme starts with priorities p_t corresponding to the sequences {x_t, ..., x_{t+n}} for which a priority was already assigned. Then it extrapolates a priority for all other sequences in the following way. Let us define a partition (I_i)_i of the states, ordered by increasing time, such that each cell I_i contains exactly one state s_i with an already assigned priority. We define the estimated priority p̂_t of all other sequences as the weighted average p̂_t = Σ_{s_i∈J(t)} w_i p(s_i) / Σ_{s_i∈J(t)} w_i, where J(t) is a collection of contiguous cells (I_i) containing time t, and w_i = |I_i| is the length of the cell I_i containing s_i. For already defined priorities we set p̂_t = p_t. Cell sizes work as estimates of inverse local density and are used as importance weights for priority estimation. For the algorithm to be unbiased, the partition (I_i)_i must not be a function of the assigned priorities. So far we have defined a class of algorithms, all free to choose the partition (I_i) and the collection of cells J(t), as long as they satisfy the above constraints. Figure 4 in the Appendix illustrates the above description.
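As an illustration only (function, variable names and the fixed-size neighbourhood are our own simplification of the collection J(t)), a sketch of this weighted extrapolation for a buffer in which only some sequence priorities have been assigned:

```python
import numpy as np

def estimate_priority(t, assigned, window=5):
    """Estimate the priority of an unassigned sequence starting at time t.

    assigned: dict mapping time index s_i -> (priority p_i, cell length w_i),
              one entry per cell of the partition, keyed by increasing time
    window:   number of neighbouring cells on each side forming J(t)
    """
    times = np.array(sorted(assigned))
    if len(times) == 0:
        return 1.0                           # fall back to a default priority
    # J(t): contiguous cells around the cell containing t
    idx = np.searchsorted(times, t)
    lo, hi = max(0, idx - window), min(len(times), idx + window + 1)
    neigh = times[lo:hi]
    w = np.array([assigned[s][1] for s in neigh], dtype=float)  # cell lengths as weights
    p = np.array([assigned[s][0] for s in neigh], dtype=float)  # assigned priorities
    return float(np.sum(w * p) / np.sum(w))
```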
# siâJ(t) | 1704.04651#26 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
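To make the lazy prioritization idea from the chunk above concrete, here is a minimal Python sketch under stated assumptions (the cell construction, the neighbour window used for I(t) and all helper names are illustrative, not the paper's implementation): each time step without an assigned priority receives a cell-length-weighted average of nearby assigned priorities.

```python
def estimate_priorities(n, known):
    """Lazily extrapolate priorities for n stored sequences.

    `known` maps a time index to an already-assigned priority p_i.
    Each known index "owns" a cell of contiguous time steps; the cell
    length w_i acts as an inverse-density importance weight.
    This is a sketch of the idea, not the paper's implementation.
    """
    idxs = sorted(known)
    assert idxs, "need at least one assigned priority"
    # Build cells: each cell contains exactly one known index; cell
    # boundaries are midpoints between consecutive known indices.
    bounds = [0] + [(a + b + 1) // 2 for a, b in zip(idxs, idxs[1:])] + [n]
    cells = list(zip(bounds[:-1], bounds[1:]))  # one (start, end) per known idx

    estimates = [0.0] * n
    for t in range(n):
        if t in known:                      # already assigned: keep as is
            estimates[t] = known[t]
            continue
        # I(t): a covering collection of cells; here the cell containing t
        # plus its neighbours, each weighted by its length.
        i = next(k for k, (s, e) in enumerate(cells) if s <= t < e)
        cover = [j for j in (i - 1, i, i + 1) if 0 <= j < len(cells)]
        num = sum((cells[j][1] - cells[j][0]) * known[idxs[j]] for j in cover)
        den = sum(cells[j][1] - cells[j][0] for j in cover)
        estimates[t] = num / den            # length-weighted average
    return estimates

print(estimate_priorities(10, {2: 1.0, 7: 3.0}))
```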
1704.04683 | 26 | RACE-M RACE-H RACE MCTest CNN DM CBT-N CBT-C WDW
Random 24.6 25.0 24.9 24.8 0.06* 0.06* 10.6 10.2 32.0*
Sliding Window 37.3 30.4 32.2 51.5* 24.8 30.8 16.8* 19.6* 48.0*
Stanford AR 44.2 43.0 43.3 - 73.6* 76.6* - - 64.0*
GA 43.7 44.2 44.1 - 77.9* 80.9* 70.1* 67.3* 71.2*
Turkers 85.1 69.4 73.3 - - - - - -
Ceiling Performance 95.4 94.2 94.5 - - - 81.6* 81.6* 84*
Table 5: Accuracy of models and human on each dataset, where * denotes results coming from previous publications. DM denotes Daily Mail and WDW denotes Who-Did-What.
(a) RACE-M (b) RACE-H | 1704.04683#26 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 27 | # s_i ∈ I(t)
Now, with probability ε we sample uniformly at random, and with probability 1 − ε we sample proportionally to p̂_i. We implemented an algorithm satisfying the above constraints and called it Contextual Priority Tree (CPT). It is based on AVL trees and can execute sampling, insertion, deletion and density evaluation in O(ln(n)) time. We describe CPT in detail in Section 6.3 of the Appendix.
We treated prioritization as purely a variance reduction technique. Importance-sampling weights were evaluated as in prioritized experience replay, with fixed β = 1 in (2). We used simple gradient magnitude estimates as priorities, corresponding to a mean absolute TD error along a sequence for Retrace, as defined in (3) for the classical RL case, and total variation in the distributional Retrace case.3
3.4 AGENT ARCHITECTURE
In order to improve CPU utilization we decoupled acting from learning. This is an important aspect of our architecture: an acting thread receives observations, submits actions to the environment, and
Footnote 2: Not to be confused with importance weights of produced samples. Footnote 3: Sum of absolute discrete probability differences.
| 1704.04651#27 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
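A minimal sketch of the sampling rule described in the chunk above: with probability ε sample uniformly, otherwise proportionally to the stored priorities. This linear-time illustration is an assumption-level stand-in for the Contextual Priority Tree, which supports the same operations in O(ln n); the names are made up.

```python
import random

def sample_index(priorities, eps=0.01):
    """Pick a sequence index: uniform w.p. eps, else proportional to priority."""
    n = len(priorities)
    if n == 0:
        raise ValueError("empty buffer")
    if random.random() < eps:
        return random.randrange(n)           # uniform exploration over replay
    total = sum(priorities)
    if total <= 0:                           # no useful priorities yet
        return random.randrange(n)
    r = random.uniform(0.0, total)
    acc = 0.0
    for i, p in enumerate(priorities):
        acc += p
        if r <= acc:
            return i
    return n - 1                             # guard against rounding error

# Example: priorities could be mean absolute TD errors per stored sequence.
buf = [0.5, 2.0, 0.1, 1.4]
counts = [0] * len(buf)
for _ in range(10000):
    counts[sample_index(buf)] += 1
print(counts)  # roughly proportional to buf, with a small uniform floor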
1704.04683 | 27 | (a) RACE-M (b) RACE-H
Figure 1: Test accuracy of different baselines on each question type category introduced in Section 3.2, where Word-Match, Single-Reason, Multi-Reason and Ambiguous are the abbreviations for Word Matching, Single-sentence Reasoning, Multi-sentence Reasoning and Insufficient/Ambiguous respectively.
a bilinear attention. We pass the scores through softmax to get a probability distribution. Specifically, the probability of option i being the right answer is calculated as
p_i = Softmax_i(h_{o_i} W_2 s_d) (2)
After obtaining a query-specific document representation s_d, we use the same bilinear operation as listed in Equation 2 to get the output.
Note that our implementation slightly differs from the original GA reader. Specifically, the Attention Sum layer is not applied at the final layer and no character-level embeddings are used. | 1704.04683#27 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
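The bilinear scoring step p_i = Softmax_i(h_{o_i} W_2 s_d) from the chunk above can be written in a few lines of numpy; the shapes and variable names below are illustrative assumptions, not the released baseline code.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def option_probabilities(h_options, W2, s_d):
    """p_i = Softmax_i(h_{o_i} W2 s_d).

    h_options: (num_options, h_dim) representations of the candidate answers
    W2:        (h_dim, d_dim) bilinear weight matrix
    s_d:       (d_dim,) query-specific document representation
    """
    scores = h_options @ W2 @ s_d        # one bilinear score per option
    return softmax(scores)

rng = np.random.default_rng(0)
h = rng.normal(size=(4, 128))            # four candidate options
W2 = rng.normal(size=(128, 128))
s = rng.normal(size=(128,))
p = option_probabilities(h, W2, s)
print(p, p.sum())                        # a distribution over the 4 options
```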
1704.04651 | 28 | Footnote 2: Not to be confused with importance weights of produced samples. Footnote 3: Sum of absolute discrete probability differences.
Algorithm / Training Time / Type / # Workers
DQN: 8 days, GPU, 1
Double DQN: 8 days, GPU, 1
Dueling: 8 days, GPU, 1
Prioritized DQN: 8 days, GPU, 1
Rainbow: 10 days, GPU, 1
A3C: 4 days, CPU, 16
Reactor: < 2 days, CPU, 10+1
Reactor 500m: 4 days, CPU, 10+1
Reactor*: < 1 day, CPU, 20+1
Figure 2: (Left) The model of parallelism of DQN, A3C and Reactor architectures. Each row represents a separate thread. In Reactor's case, each worker, consisting of a learner and an actor, is run on a separate worker machine. (Right) Comparison of training times and resources for various algorithms. 500m denotes 500 million training frames; otherwise 200m training frames were used.
stores transitions in memory, while a learning thread re-samples sequences of experiences from memory and trains on them (Figure 2, left). We typically execute 4-6 acting steps per each learning step. We sample sequences of length n = 33 in batches of 4. A moving network is unrolled over frames 1-32 while the target network is unrolled over frames 2-33. | 1704.04651#28 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
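A small sketch of the sequence bookkeeping described in the chunk above, assuming the stated length of n = 33: the moving network is unrolled over frames 1-32 and the target network over frames 2-33; the helper name and index convention are illustrative.

```python
SEQ_LEN = 33

def split_for_unroll(sequence):
    """Split one replayed sequence of 33 transitions into the two unroll windows.

    The moving (online) network is unrolled over the first 32 frames and the
    target network over the last 32, so each learning step can compare online
    estimates at time t against bootstrap targets built from time t+1.
    """
    assert len(sequence) == SEQ_LEN
    moving_window = sequence[:-1]   # frames 1..32
    target_window = sequence[1:]    # frames 2..33
    return moving_window, target_window

# Example with placeholder "frames" 1..33.
frames = list(range(1, SEQ_LEN + 1))
moving, target = split_for_unroll(frames)
print(moving[0], moving[-1], target[0], target[-1])   # 1 32 2 33
```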
1704.04683 | 28 | Gated-Attention Reader Gated AR (Dhingra et al., 2016) is the state-of-the-art model on multiple datasets. To build query-specific representations of tokens in the document, it employs an attention mechanism to model multiplicative interactions between the query embedding and the document representation. With a multi-hop architecture, GA also enables a model to scan the document and the question iteratively for multiple passes. In other words, the multi-hop structure makes it possible for the reader to refine token representations iteratively and the attention mechanism to find the most relevant part of the document. We refer readers to (Dhingra et al., 2016) for more details. | 1704.04683#28 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
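The multiplicative query-document interaction described in the chunk above can be sketched as a single gated-attention hop; this simplified numpy illustration (assumed shapes, no multi-hop refinement, no character embeddings) is not the GA reader implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def gated_attention(doc, query):
    """One gated-attention hop.

    doc:   (doc_len, dim) token representations of the document
    query: (q_len, dim)  token representations of the question
    Each document token attends over the query and is then gated by an
    element-wise product with its query summary, giving query-specific
    token representations that a multi-hop reader would refine further.
    """
    scores = doc @ query.T                  # (doc_len, q_len) match scores
    alphas = softmax(scores, axis=1)        # attention over query tokens
    q_summary = alphas @ query              # (doc_len, dim) per-token summary
    return doc * q_summary                  # multiplicative gating

rng = np.random.default_rng(1)
d = rng.normal(size=(50, 64))
q = rng.normal(size=(8, 64))
print(gated_attention(d, q).shape)          # (50, 64)
```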
1704.04651 | 29 | We allow the agent to be distributed over multiple machines each containing actor-learner pairs. Each worker downloads the newest network parameters before each learning step and sends delta-updates at the end of it. Both the network and target network are stored on a shared parameter server while each machine contains its own local replay memory. Training is done by downloading a shared network, evaluating local gradients and sending them to be applied on the shared network. While the agent can also be trained on a single machine, in this work we present results of training obtained with either 10 or 20 actor-learner workers and one parameter server. In Figure 2 (right) we compare resources and runtimes of Reactor with related algorithms.4
3.4.1 NETWORK ARCHITECTURE
In some domains, such as Atari, it is useful to base decisions on a short history of past observations. The two techniques generally used to achieve this are frame stacking and recurrent network architectures. We chose the latter over the former for reasons of implementation simplicity and computational efficiency. As the Retrace algorithm requires evaluating action-values over contiguous sequences of trajectories, using a recurrent architecture allowed each frame to be processed by the convolutional network only once, as opposed to n times if n frame concatenations were used. | 1704.04651#29 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
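A toy, single-process sketch of the worker/parameter-server exchange described in the chunk above (download the newest parameters, compute a local update, send back only the delta); the class and function names are hypothetical and no real distributed transport is shown.

```python
import numpy as np

class ParameterServer:
    """Holds the shared network parameters as a flat vector."""
    def __init__(self, size):
        self.params = np.zeros(size)

    def pull(self):
        return self.params.copy()

    def push_delta(self, delta):
        self.params += delta                # apply a worker's delta-update

def worker_step(server, local_gradient_fn, lr=5e-5):
    """One learning step of one worker: pull, update locally, push the delta."""
    theta = server.pull()                   # download newest parameters
    grad = local_gradient_fn(theta)         # gradients from local replay memory
    updated = theta - lr * grad
    server.push_delta(updated - theta)      # send only the difference back

ps = ParameterServer(4)
fake_grad = lambda theta: theta - np.array([1.0, 2.0, 3.0, 4.0])
for _ in range(1000):
    worker_step(ps, fake_grad, lr=0.1)
print(np.round(ps.params, 2))               # converges towards [1, 2, 3, 4]
```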
1704.04683 | 29 | Implementation Details We follow Chen et al. (2016) in our experiment settings. The vocabulary size is set to 50k. We choose word embedding size d = 100 and use the 100-dimensional Glove word embedding (Pennington et al., 2014) as embedding initialization. GRU weights are initialized from the Gaussian distribution N(0, 0.1). Other parameters are initialized from a uniform distribution on (−0.01, 0.01). The hidden dimensionality is set to 128 and the number of layers is set to one for both Stanford AR and GA. We use vanilla stochastic gradient descent (SGD) to train our models. We apply dropout on word embeddings and the gradient is clipped when the norm
of the gradient is larger than 10. We use a grid search on the validation set to choose the learning rate within {0.05, 0.1, 0.3, 0.5} and the dropout rate within {0.2, 0.5, 0.7}. The highest accuracy on the validation set is obtained by setting the learning rate to 0.1 for Stanford AR and 0.3 for GA and the dropout rate to 0.5. The data of RACE-M and RACE-H is used together to train our model and testing is performed separately.
# 5.2 Human Evaluation | 1704.04683#29 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
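The training setup in the chunk above can be summarised as a small configuration plus a grid search over learning rate and dropout; the numeric values are copied from the chunk, while the helper itself is a hypothetical sketch (the evaluation function is a dummy stand-in).

```python
from itertools import product

config = {
    "vocab_size": 50_000,
    "embedding_dim": 100,        # initialised with 100-d GloVe vectors
    "hidden_dim": 128,
    "num_layers": 1,
    "optimizer": "sgd",
    "grad_clip_norm": 10.0,
}

learning_rates = [0.05, 0.1, 0.3, 0.5]
dropout_rates = [0.2, 0.5, 0.7]

def grid_search(train_and_eval):
    """Pick the (lr, dropout) pair with the best validation accuracy."""
    best = None
    for lr, dropout in product(learning_rates, dropout_rates):
        acc = train_and_eval({**config, "lr": lr, "dropout": dropout})
        if best is None or acc > best[0]:
            best = (acc, lr, dropout)
    return best

# Dummy evaluation function just to show the call pattern.
print(grid_search(lambda cfg: 1.0 - abs(cfg["lr"] - 0.1) - abs(cfg["dropout"] - 0.5)))
```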
1704.04651 | 30 | The Reactor architecture uses a recurrent neural network which takes an observation x_t as input and produces two outputs: categorical action-value distributions q_i(x_t, a) (i here is a bin identifier), and policy probabilities π(a|x_t). We use an architecture inspired by the duelling network architecture (Wang et al., 2015). We split action-value-distribution logits into state-value logits and advantage logits, which in turn are connected to the same LSTM network (Hochreiter & Schmidhuber, 1997). Final action-value logits are produced by summing state- and action-specific logits, as in Wang et al. (2015). Finally, a softmax layer on top for each action produces the distributions over discounted future returns.
The policy head uses a softmax layer mixed with a fixed uniform distribution over actions, where this mixing ratio is a hyperparameter (Wiering, 1999, Section 5.1.3). Policy and Q-networks have separate LSTMs. Both LSTMs are connected to a shared linear layer which is connected to a shared convolutional neural network (Krizhevsky et al., 2012). The precise network specification is given in Table 3 in the Appendix. | 1704.04651#30 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
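A numpy sketch of the two heads described in the chunk above, with illustrative sizes: state-value and advantage logits are summed before a per-action softmax over return bins, and the policy softmax is mixed with a fixed uniform distribution. The layer sizes and the mixing ratio are assumptions, not the paper's exact specification.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def dueling_distributional_head(value_logits, advantage_logits):
    """value_logits: (num_bins,), advantage_logits: (num_actions, num_bins).

    Summing state and action logits and applying a softmax per action gives
    one categorical distribution over discounted returns for every action.
    """
    logits = value_logits[None, :] + advantage_logits
    return softmax(logits, axis=1)          # (num_actions, num_bins)

def mixed_policy(policy_logits, mix=0.01):
    """Softmax policy mixed with a fixed uniform distribution over actions."""
    pi = softmax(policy_logits)
    return (1.0 - mix) * pi + mix / len(policy_logits)

rng = np.random.default_rng(2)
q_dist = dueling_distributional_head(rng.normal(size=11), rng.normal(size=(4, 11)))
pi = mixed_policy(rng.normal(size=4))
print(q_dist.shape, q_dist.sum(axis=1), pi.sum())   # (4, 11), all ones, 1.0
```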
1704.04683 | 30 | # 5.2 Human Evaluation
As described in Section 3.2, a randomly sampled subset of the test set has been labeled by Amazon Turkers, which contains 500 questions with half from RACE-H and the other half from RACE-M. The turkers' performance is 85% for RACE-M and 70% for RACE-H. However, it is hard to guarantee that every turker performs the survey carefully, given the difficult and long passages of high school problems. Therefore, to obtain the ceiling human performance on RACE, we manually labeled the proportion of valid questions. A question is valid if it is unambiguous and has a correct answer. We found that 94.5% of the data is valid, which sets the ceiling human performance. Similarly, the ceiling performance on RACE-M and RACE-H is 95.4% and 94.2% respectively.
# 5.3 Main Results | 1704.04683#30 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 31 | Gradients coming from the policy LSTM are blocked and only gradients originating from the Q-network LSTM are allowed to back-propagate into the convolutional neural network. We block gradients from the policy head for increased stability, as this avoids positive feedback loops between π and q_i caused by shared representations. We used the Adam optimiser (Kingma & Ba, 2014),
Footnote 4: All results are reported with respect to the combined total number of observations obtained over all worker machines.
Figure 3: (Left) "Reactor Ablation and Sample-Efficiency": human normalized score as a function of millions of training samples. (Right) "Reactor Time-Efficiency": human normalized score as a function of training time in hours. Curves include Reactor (10+1), Reactor (20+1), Reactor: Minus Distributional, Reactor: Minus Prioritization, Reactor: TISLR, Rainbow, Prioritized DQN, A3C (16) and DQN. Rainbow learning curve provided by Hessel et al. (2017). | 1704.04651#31 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
1704.04683 | 31 | # 5.3 Main Results
We compare models' and human ceiling performance on datasets which have the same evaluation metric as RACE. The compared datasets include RACE, MCTest, CNN/Daily Mail (CNN and DM), CBT and WDW. On CBT, we report performance on two subsets where the missing token is either a common noun (CBT-C) or a named entity (CBT-N), since the language models have already reached human-level performance on other types (Hill et al., 2015). The comparison is shown in Table 5.
Performance of Sliding Window We first compare MCTest with RACE using Sliding Window, since it is unable to train Stanford AR and Gated AR on MCTest's limited training data. Sliding Window achieves an accuracy of 51.5% on MCTest while only 37.3% on RACE, meaning that to answer the questions of RACE requires more reasoning than MCTest.
The performance of sliding window on RACE is not directly comparable with CBT and WDW | 1704.04683#31 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 32 | Figure 3: comparison as a function of training time in hours. Rainbow learning curve provided by Hessel et al. (2017).
with a learning rate of 5 × 10^-5 and zero momentum because asynchronous updates induce implicit momentum (Mitliagkas et al., 2016). Further discussion of hyperparameters and their optimization can be found in Appendix 6.1.
# 4 EXPERIMENTAL RESULTS
We trained and evaluated Reactor on 57 Atari games (Bellemare et al., 2013). Figure 3 compares the performance of Reactor with different versions of Reactor, each time leaving one of the algorithmic improvements out. We can see that each of the algorithmic improvements (Distributional Retrace, beta-LOO and prioritized replay) contributed to the final results. While prioritization was arguably the most important component, beta-LOO clearly outperformed the TISLR algorithm. Although distributional and non-distributional versions performed similarly in terms of median human normalized scores, the distributional version of the algorithm generalized better when tested with random human starts (Table 1). | 1704.04651#32 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
1704.04683 | 32 | The performance of sliding window on RACE is not directly comparable with CBT and WDW
since CBT has ten candidate answers for each question and WDW has an average of three. Instead, we evaluate the performance improvement of Sliding Window over the random baseline. A larger improvement indicates more questions solvable by simple matching. On RACE, Sliding Window is 28.6% better than the random baseline, while the improvement is 58.5%, 92.2% and 50% for CBT-N, CBT-C and WDW.
The accuracy on RACE-M (37.3%) and RACE-H (30.4%) indicates that the middle school questions are simpler based on the matching algorithm.
Performance of Neural Models We further compare the difficulty of different datasets by state-of-the-art neural models' performance. A lower performance means that more problems are unsolvable by machines. The Stanford AR and Gated AR achieve an accuracy of only 43.3% and 44.1% on RACE while their accuracy is much higher on CNN/Daily Mail, Children's Book Test and Who-Did-What. It justifies the fact that, among current large-scale machine comprehension datasets, RACE is the most challenging one. | 1704.04683#32 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 33 | Algorithm / Mean Rank / Normalized Mean Score / Elo (random human starts, Table 1):
RANDOM: 11.65, 0.00, -563
HUMAN: 6.82, 1.00, 0
DQN: 9.05, 0.69, -172
DDQN: 7.63, 1.11, -58
DUEL: 6.35, 1.17, 32
PRIOR: 6.63, 1.13, 13
PRIOR. DUEL.: 6.25, 1.15, 40
A3C LSTM: 6.30, 1.13, 37
RAINBOW: 4.18, 1.53, 186
REACTOR ND: 4.98, 1.51, 126
REACTOR: 4.58, 1.65, 156
REACTOR 500M: 3.65, 1.82, 227
Algorithm / Mean Rank / Normalized Mean Score / Elo (30 random no-op starts, Table 2):
RANDOM: 10.93, 0.00, -673
HUMAN: 6.89, 1.00, 0
DQN: 8.65, 0.79, -167
DDQN: 7.28, 1.18, -27
DUEL: 5.19, 1.51, 143
PRIOR: 6.11, 1.24, 70
PRIOR. DUEL.: 5.44, 1.72, 126
ACER 500M: -, 1.9, -
RAINBOW: 3.63, 2.31, 270
REACTOR ND: 4.53, 1.80, 195
REACTOR: 4.46, 1.87, 196
REACTOR 500M: 3.47, 2.30, 280 | 1704.04651#33 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
1704.04683 | 33 | Human Ceiling Performance The human performance is 94.5%, which shows our data is quite clean compared to other large-scale machine comprehension datasets. Since we cannot enforce that every turker does the test cautiously, the result shows a gap between turkers' performance and human performance. Reasonably, problems in the high school group with longer passages and more complex questions lead to more significant divergence. Nevertheless, the state-of-the-art models still have a large room to be improved to reach turkers' performance. The performance gap is 41% for the middle school problems and 25% for the high school problems. What's more, the performance of Stanford AR and GA is only less than a half of the ceiling human performance, which indicates that to match the humans' reading comprehension ability, we still have a long way to go.
# 5.4 Reason Types Analysis
We evaluate human and models on different types of questions, shown in Figure 1. Turkers do the best on word matching problems while doing the worst on reasoning problems. Sliding window performs better on word matching than problems needing reasoning or paraphrasing. Surprisingly, Stanford AR does not have a stronger performance | 1704.04683#33 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 34 | Table 1: Random human starts
Table 2: 30 random no-op starts.
4.1 COMPARING TO PRIOR WORK
We evaluated Reactor with target update frequency T_update = 1000, λ = 1.0 and β-LOO with β = 1 on 57 Atari games trained on 10 machines in parallel. We averaged scores over 200 episodes using 30 random human starts and no-op starts (Tables 4 and 5 in the Appendix). We calculated mean and median human normalised scores across all games. We also ranked all algorithms (including random and human scores) for each game and evaluated the mean rank of each algorithm across all 57 Atari games. We also evaluated mean rank and Elo scores for each algorithm for both human and no-op start settings. Please refer to Section 6.2 in the Appendix for more details.
| 1704.04651#34 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
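The summary statistics mentioned in the chunk above (human-normalised scores and mean rank across games) can be sketched as follows; the normalisation formula (score minus random, divided by human minus random) is the usual convention and is stated here as an assumption, and the example scores are made up.

```python
import numpy as np

def human_normalized(score, random_score, human_score):
    """Standard normalisation: 0 = random play, 1 = human performance."""
    return (score - random_score) / (human_score - random_score)

def mean_ranks(scores_by_game):
    """scores_by_game: dict game -> {algorithm: score}. Higher score = better.

    For every game, rank all algorithms (1 = best) and average the ranks,
    which is how a per-algorithm 'mean rank' column is built.
    """
    algos = sorted(next(iter(scores_by_game.values())))
    ranks = {a: [] for a in algos}
    for scores in scores_by_game.values():
        order = sorted(algos, key=lambda a: scores[a], reverse=True)
        for r, a in enumerate(order, start=1):
            ranks[a].append(r)
    return {a: float(np.mean(r)) for a, r in ranks.items()}

games = {
    "pong":     {"random": -21.0, "human": 9.3, "agent": 20.0},
    "breakout": {"random": 1.7, "human": 31.8, "agent": 300.0},
}
print(human_normalized(300.0, 1.7, 31.8))
print(mean_ranks(games))
```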
1704.04683 | 34 | on the word matching category than reasoning cat- egories. A possible reason is that the proportion of data in reasoning categories is larger than that of data. Also, the candidate answers of simple matching questions may share similar word em- beddings. For example, if the question is about color, it is difï¬cult to distinguish candidate an- swers, âgreenâ, âredâ, âblueâ and âyellowâ, in the embedding vector space. The similar performance on different categories also explains the reason that the performance of the neural models is close in the middle and high school groups in Table 5.
# 6 Conclusion
We introduce a large, high-quality dataset for read- ing comprehension that is carefully designed to examine human ability on this task. Some desir- able properties of RACE include the broad cover- age of domains/styles and the richness in the ques- tion format. Most importantly, it requires substan- tially more reasoning to do well on RACE than on other datasets, as there is a signiï¬cant gap be- tween the performance of state-of-the-art machine comprehension models and that of the human. We hope this dataset will stimulate the development of more advanced machine comprehension models.
# Acknowledgement | 1704.04683#34 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 35 | 9
Published as a conference paper at ICLR 2018
Tables 1 & 2 compare versions of our algorithm with several other state-of-the-art algorithms across 57 Atari games for a fixed random seed across all games (Bellemare et al., 2013). The algorithms we compare Reactor against are: DQN (Mnih et al., 2015), Double DQN (Van Hasselt et al., 2016), DQN with prioritised experience replay (Schaul et al., 2015), dueling architecture and prioritised dueling (Wang et al., 2015), ACER (Wang et al., 2017), A3C (Mnih et al., 2016), and Rainbow (Hessel et al., 2017). Each algorithm was exposed to 200 million frames of experience, or 500 million frames when followed by 500M, and the same pre-processing pipeline including 4 action repeats was used as in the original DQN paper (Mnih et al., 2015). | 1704.04651#35 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
1704.04683 | 35 | # Acknowledgement
We would like to thank Graham Neubig for suggestions on the draft and Diyi Yang's help on obtaining the crowdsourced labels.
This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program.
# References
Ondrej Bajgar, Rudolf Kadlec, and Jan Kleindienst. 2016. Embracing data abundance: Booktest dataset for reading comprehension. arXiv preprint arXiv:1610.00956.
Danqi Chen, Jason Bolton, and Christopher D Manning. 2016. A thorough examination of the cnn/daily mail reading comprehension task. arXiv preprint arXiv:1606.02858.
Bhuwan Dhingra, Hanxiao Liu, William W Cohen, and Ruslan Salakhutdinov. 2016. Gated-attention readers for text comprehension. arXiv preprint arXiv:1606.01549.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems. pages 1693-1701. | 1704.04683#35 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 36 | In Table 1, we see that Reactor exceeds the performance of all algorithms across all metrics, despite requiring under two days of training. With 500 million frames and four days of training we see Reactor's performance continue to improve significantly. The difference in time-efficiency is especially apparent when comparing Reactor and Rainbow (see Figure 3, right). Additionally, unlike Rainbow, Reactor does not use Noisy Networks (Fortunato et al., 2017), which was reported to have contributed to the performance gains. When evaluating under the no-op starts regime (Table 2), Reactor outperforms all methods except for Rainbow. This suggests that Rainbow is more sample-efficient when training and evaluation regimes match exactly, but may be overfitting to particular trajectories due to the significant drop in performance when evaluated on the random human starts.
Regarding ACER, another Retrace-based actor-critic architecture, both classical and distributional versions of Reactor (Figure 3) exceeded the best reported median human normalized score of 1.9 with noop starts achieved in 500 million steps.6
# 5 CONCLUSION | 1704.04651#36 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
1704.04683 | 36 | Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading children's books with explicit memory representations. arXiv preprint arXiv:1511.02301.
Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. ACL.
Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. 2016. Text understanding with the attention sum reader network. arXiv preprint arXiv:1603.01547.
Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Peter Clark, Oren Etzioni, and Dan Roth. 2016. Question answering via integer programming over semi-structured knowledge. arXiv preprint arXiv:1604.06076.
Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1. Association for Computational Linguistics, pages 71-78. | 1704.04683#36 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 37 | # 5 CONCLUSION
In this work we presented a new off-policy agent based on the Retrace actor-critic architecture and showed that it achieves performance similar to the current state-of-the-art while giving significant real-time performance gains. We demonstrate the benefits of each of the suggested algorithmic improvements, including Distributional Retrace, beta-LOO policy gradient and contextual priority tree.
# REFERENCES
Oron Anschel, Nir Baram, and Nahum Shimkin. Averaged-dqn: Variance reduction and stabilization for deep reinforcement learning. In International Conference on Machine Learning, pp. 176–185, 2017.
Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. J. Artif. Intell. Res. (JAIR), 47:253–279, 2013.
Marc G Bellemare, Will Dabney, and Rémi Munos. A distributional perspective on reinforcement learning. arXiv preprint arXiv:1707.06887, 2017. | 1704.04651#37 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
1704.04683 | 37 | Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268.
Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. 2016. Who did what: A large-scale person-centered cloze dataset. arXiv preprint arXiv:1608.05457.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics. Association for Computational Linguistics, pages 311–318.
Anselmo Peñas, Yusuke Miyao, Álvaro Rodrigo, Eduard H Hovy, and Noriko Kando. 2014. Overview of clef qa entrance exams task 2014. In CLEF (Working Notes). pages 1194–1200.
Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP. volume 14, pages 1532–1543. | 1704.04683#37 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 38 | Meire Fortunato, Mohammad Gheshlaghi Azar, Bilal Piot, Jacob Menick, Ian Osband, Alex Graves, Vlad Mnih, Remi Munos, Demis Hassabis, Olivier Pietquin, et al. Noisy networks for exploration. arXiv preprint arXiv:1706.10295, 2017.
Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E Turner, and Sergey Levine. Q-prop: Sample-efficient policy gradient with an off-policy critic. International Conference on Learning Representations, 2017.
Frank S He, Yang Liu, Alexander G Schwing, and Jian Peng. Learning to play in a day: Faster deep reinforcement learning by optimality tightening. In International Conference on Learning Representations, 2017.
Matteo Hessel, Joseph Modayil, Hado Van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, and David Silver. Rainbow: Combining improvements in deep reinforcement learning. arXiv preprint arXiv:1710.02298, 2017. | 1704.04651#38 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
1704.04683 | 38 | Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP. volume 14, pages 1532–1543.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250 .
Matthew Richardson, Christopher JC Burges, and Erin Renshaw. 2013. Mctest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP. volume 3, page 4.
Álvaro Rodrigo, Anselmo Peñas, Yusuke Miyao, Eduard H Hovy, and Noriko Kando. 2015. Overview of clef qa entrance exams task 2015. In CLEF (Working Notes).
Hideyuki Shibuki, Kotaro Sakamoto, Yoshinobu Kano, Teruko Mitamura, Madoka Ishioroshi, Kelly Y Itakura, Di Wang, Tatsunori Mori, and Noriko Kando. 2014. Overview of the ntcir-11 qa-lab task. In NTCIR. | 1704.04683#38 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 39 | 5 "ND" stands for a non-distributional (i.e. classical) version of Reactor using Retrace (Munos et al., 2016). 6 Score for ACER in Table 2 was obtained from (Figure 1 in Wang et al. (2017)), but is not directly comparable due to the authors' use of a cumulative maximization along each learning curve before taking the median.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012.
Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015. | 1704.04651#39 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
1704.04683 | 39 | Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2016. Newsqa: A machine comprehension dataset. arXiv preprint arXiv:1611.09830.
Zhilin Yang, Junjie Hu, Ruslan Salakhutdinov, and William W Cohen. 2017. Semi-supervised qa with generative domain-adaptive nets. arXiv preprint arXiv:1702.02206.
# A Appendix
# A.1 Example Question of Passage Summarization
Passage: Do you love holidays but hate gaining weight? You are not alone. Holidays are times for celebrating. Many people are worried about their weight. With proper planning, though, it is possible to keep normal weight during the holidays. The idea is to enjoy the holidays but not to eat too much. You don't have to turn away from the foods that you enjoy.
Here are some tips for preventing weight gain and maintaining physical fitness:
Don't skip meals. Before you leave home, have a small, low-fat meal or snack. This may help to avoid getting too excited before delicious foods. | 1704.04683#39 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 40 | Long-H Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine learning, 8(3/4):69–97, 1992.
Ioannis Mitliagkas, Ce Zhang, Stefan Hadjis, and Christopher Ré. Asynchrony begets momentum, with an application to deep learning. In Communication, Control, and Computing (Allerton), 2016 54th Annual Allerton Conference on, pp. 997–1004. IEEE, 2016.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, 2016.
Andrew W Moore and Christopher G Atkeson. Prioritized sweeping: Reinforcement learning with less data and less time. Machine learning, 13(1):103–130, 1993. | 1704.04651#40 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
1704.04683 | 40 | Don't skip meals. Before you leave home, have a small, low-fat meal or snack. This may help to avoid getting too excited before delicious foods.
Control the amount of food. Use a small plate that may encourage you to "load up". You should be most comfortable eating an amount of food about the size of your fist.
Begin with soup and fruit or vegetables. Fill up beforehand on water-based soup and raw fruit or vegetables, or drink a large glass of water before you eat to help you to feel full.
Avoid high-fat foods. Dishes that look oily or creamy may have large amount of fat. Choose lean meat. Fill your plate with salad and green vegetables. Use lemon juice instead of creamy food.
Stick to physical activity. Don't let exercise take a break during the holidays. A 20-minute walk helps to burn off extra calories.
Questions: What is the best title of the passage? Options: A. How to avoid holiday feasting B. Do's and don'ts for keeping slim and fit. C. How to avoid weight gain over holidays. D. Wonderful holidays, boring experiences. | 1704.04683#40 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 | [
{
"id": "1511.02301"
},
{
"id": "1608.05457"
},
{
"id": "1702.02206"
},
{
"id": "1606.05250"
},
{
"id": "1604.06076"
},
{
"id": "1611.09268"
},
{
"id": "1606.02858"
},
{
"id": "1610.00956"
},
{
"id": "1606.01549"
},
{
"id": "1611.09830"
},
{
"id": "1603.01547"
}
] |
1704.04651 | 41 | Andrew W Moore and Christopher G Atkeson. Prioritized sweeping: Reinforcement learning with less data and less time. Machine learning, 13(1):103–130, 1993.
Rémi Munos, Tom Stepleton, Anna Harutyunyan, and Marc Bellemare. Safe and efficient off-policy reinforcement learning. In Advances in Neural Information Processing Systems, pp. 1046–1054, 2016.
Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, and Volodymyr Mnih. Combining policy gradient and q-learning. International Conference on Learning Representations, 2017.
Doina Precup, Richard S Sutton, and Satinder Singh. Eligibility traces for off-policy policy evaluation. In Proceedings of the Seventeenth International Conference on Machine Learning, 2000.
Doina Precup, Richard S Sutton, and Sanjoy Dasgupta. Off-policy temporal-difference learning with function approximation. In Proceedings of the 18th International Conference on Machine Learning, pp. 417–424, 2001.
Martin Riedmiller. Neural fitted q iteration - first experiences with a data efficient neural reinforcement learning method. In ECML, volume 3720, pp. 317–328. Springer, 2005. | 1704.04651#41 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
1704.04651 | 42 | Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. arXiv preprint arXiv:1511.05952, 2015.
Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. In International Conference on Learning Representations, 2016.
John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 1889–1897, 2015.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
| 1704.04651#42 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
1704.04651 | 43 |
David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, and Demis Hassabis. Mastering the game of go without human knowledge. Nature, 550(7676):354–359, 10 2017. URL http://dx.doi.org/10.1038/nature24270.
Richard S. Sutton, David Mcallester, Satinder Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems 12, pp. 1057–1063. MIT Press, 2000.
Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double q-learning. In AAAI, pp. 2094–2100, 2016.
Adel'son G Velskii and E Landis. An algorithm for the organisation of information. Dokl. Akad. Nauk SSSR, 146:263–266, 1976. | 1704.04651#43 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
1704.04651 | 44 | Alexander Sasha Vezhnevets, Simon Osindero, Tom Schaul, Nicolas Heess, Max Jaderberg, David Silver, and Koray Kavukcuoglu. Feudal networks for hierarchical reinforcement learning. arXiv preprint arXiv:1703.01161, 2017.
Ziyu Wang, Tom Schaul, Matteo Hessel, Hado van Hasselt, Marc Lanctot, and Nando de Freitas. Dueling network architectures for deep reinforcement learning. International Conference on Machine Learning, pp. 1995–2003, 2015.
Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, and Nando de Freitas. Sample efficient actor-critic with experience replay. In International Conference on Learning Representations, 2017.
C. J. C. H. Watkins and P. Dayan. Q-learning. Machine Learning, 8(3):272–292, 1992.
Marco A Wiering. Explorations in efficient reinforcement learning. PhD thesis, University of Amsterdam, 1999. | 1704.04651#44 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |
1704.04651 | 45 | Marco A Wiering. Explorations in efficient reinforcement learning. PhD thesis, University of Amsterdam, 1999.
Dongbin Zhao, Haitao Wang, Kun Shao, and Yuanheng Zhu. Deep reinforcement learning with experience replay based on sarsa. In Computational Intelligence (SSCI), 2016 IEEE Symposium Series on, pp. 1–6. IEEE, 2016.
# 6 APPENDIX
Proposition 1. Assume $\hat{a} \sim \mu$ and that $\mathbb{E}[R(\hat{a})] = Q^{\pi}(\hat{a})$. Then, the bias of $\hat{G}_{\beta\text{-LOO}}$ is $\big|\sum_{a}\big(1-\mu(a)\beta(a)\big)\nabla\pi(a)\big[Q(a)-Q^{\pi}(a)\big]\big|$.

Proof. Writing $G=\sum_{a}Q^{\pi}(a)\nabla\pi(a)$ for the true policy gradient, the bias of $\hat{G}_{\beta\text{-LOO}}$ is
$$\mathbb{E}\big[\hat{G}_{\beta\text{-LOO}}\big]-G=\sum_{a}\mu(a)\big[\beta(a)\big(\mathbb{E}[R(a)]-Q(a)\big)\big]\nabla\pi(a)+\sum_{a}Q(a)\nabla\pi(a)-G=\sum_{a}\big(1-\mu(a)\beta(a)\big)\big[Q(a)-Q^{\pi}(a)\big]\nabla\pi(a).$$
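As a quick sanity check, both sides of this identity can be evaluated exactly on a toy one-state bandit. The numpy sketch below is illustrative only; the softmax parameterisation of pi, the randomly drawn behaviour policy mu, and the choice beta(a) = min(2, 1/mu(a)) are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
A = 4                                               # actions in a one-state bandit
q_true = rng.normal(size=A)                         # Q^pi(a), the true action values
q_est = q_true + rng.normal(scale=0.5, size=A)      # Q(a), the critic's estimates
theta = rng.normal(size=A)
pi = np.exp(theta) / np.exp(theta).sum()            # softmax target policy pi(a)
mu = rng.dirichlet(np.ones(A))                      # behaviour policy mu(a)
beta = np.minimum(2.0, 1.0 / mu)                    # assumed choice beta(a) = min(2, 1/mu(a))

# Gradient of pi(a) w.r.t. the softmax logits: grad_pi[a] = pi(a) * (e_a - pi).
grad_pi = pi[:, None] * (np.eye(A) - pi[None, :])

G = (q_true[:, None] * grad_pi).sum(axis=0)         # true gradient, sum_a Q^pi(a) grad pi(a)
# Expectation of the beta-LOO estimate over a_hat ~ mu, using E[R(a)] = Q^pi(a):
G_loo = ((mu * beta * (q_true - q_est))[:, None] * grad_pi).sum(axis=0) \
        + (q_est[:, None] * grad_pi).sum(axis=0)
bias = (((1.0 - mu * beta) * (q_est - q_true))[:, None] * grad_pi).sum(axis=0)

assert np.allclose(G_loo - G, bias)                 # matches the expression in Proposition 1
```

Note that when beta(a) = 1/mu(a) the factor 1 - mu(a)beta(a) vanishes and the estimate is unbiased, while smaller beta trades bias for lower variance, which is the trade-off the beta-LOO estimator is designed to control.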
# 6.1 HYPERPARAMETER OPTIMIZATION | 1704.04651#45 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the \b{eta}-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 | [
{
"id": "1707.06347"
},
{
"id": "1703.01161"
},
{
"id": "1509.02971"
},
{
"id": "1710.02298"
},
{
"id": "1706.10295"
},
{
"id": "1707.06887"
},
{
"id": "1511.05952"
}
] |