Dataset schema (per record):
- doi: string (10 characters)
- chunk-id: int64 (0–936)
- chunk: string (401–2.02k characters)
- id: string (12–14 characters)
- title: string (8–162 characters)
- summary: string (228–1.92k characters)
- source: string (31 characters)
- authors: string (7–6.97k characters)
- categories: string (5–107 characters)
- comment: string (4–398 characters)
- journal_ref: string (8–194 characters)
- primary_category: string (5–17 characters)
- published: string (8 characters)
- updated: string (8 characters)
- references: list
1707.06658
33
Table 3: Values of percentage relative tail risk measures and gains in reliability on using RAIL over GAIL for different continuous control tasks.

| Metric | Reacher-v1 | Hopper-v1 | HalfCheetah-v1 | Walker-v1 | Humanoid-v1 |
|---|---|---|---|---|---|
| VaR0.9(A\|E) (%), GAIL | -62.41 | -53.17 | -21.66 | -1.64 | -73.16 |
| VaR0.9(A\|E) (%), RAIL | -23.81 | -0.23 | -8.20 | 0.03 | -5.97 |
| GR-VaR (%) | 38.61 | 52.94 | 13.46 | 1.66 | 67.19 |
| CVaR0.9(A\|E) (%), GAIL | -108.99 | -49.62 | -33.84 | 45.39 | -71.71 |
| CVaR0.9(A\|E) (%), RAIL | -48.42 | 39.38 | -12.24 | 70.52 | 1.07 |
| GR-CVaR (%) | 60.57 | 89.00 | 21.60 | 25.13 | 72.78 |

# 6 Experimental Results and Discussion
1707.06658#33
RAIL: Risk-Averse Imitation Learning
Imitation learning algorithms learn viable policies by imitating an expert's behavior when reward signals are not available. Generative Adversarial Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies when the expert's behavior is available as a fixed set of trajectories. We evaluate in terms of the expert's cost function and observe that the distribution of trajectory-costs is often more heavy-tailed for GAIL-agents than the expert at a number of benchmark continuous-control tasks. Thus, high-cost trajectories, corresponding to tail-end events of catastrophic failure, are more likely to be encountered by the GAIL-agents than the expert. This makes the reliability of GAIL-agents questionable when it comes to deployment in risk-sensitive applications like robotic surgery and autonomous driving. In this work, we aim to minimize the occurrence of tail-end events by minimizing tail risk within the GAIL framework. We quantify tail risk by the Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse Imitation Learning (RAIL) algorithm. We observe that the policies learned with RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed RAIL algorithm appears as a potent alternative to GAIL for improved reliability in risk-sensitive applications.
http://arxiv.org/pdf/1707.06658
Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul
cs.LG, cs.AI
Accepted for presentation in Deep Reinforcement Learning Symposium at NIPS 2017
null
cs.LG
20170720
20171129
[ { "id": "1703.01703" }, { "id": "1704.07911" }, { "id": "1708.06374" }, { "id": "1604.07316" }, { "id": "1610.03295" }, { "id": "1606.01540" } ]
1707.06658
34
# 6 Experimental Results and Discussion

In this section, we present and discuss the results of the comparison between GAIL and RAIL, using the expert’s performance as a benchmark. Tables 2 and 3 present the values of our evaluation metrics for different continuous-control tasks. We set α = 0.9 for VaRα and CVaRα and estimate all metrics from N = 50 sampled trajectories (following [Ho and Ermon, 2016]). We make the following observations:

• RAIL outperforms GAIL on both tail risk measures – VaR0.9 and CVaR0.9 – without increasing sample complexity. This shows that RAIL is a better choice than GAIL for imitation learning in risk-sensitive applications.
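For context, a minimal sketch (not the authors' code) of how VaR0.9 and CVaR0.9 can be estimated from a batch of sampled trajectory costs; it assumes the standard empirical-quantile estimator, and the cost values below are hypothetical.

```python
import numpy as np

def empirical_var_cvar(costs, alpha=0.9):
    """Estimate VaR_alpha and CVaR_alpha from sampled trajectory costs.

    VaR_alpha is the alpha-quantile of the cost distribution; CVaR_alpha is
    the mean cost of the trajectories whose cost is at or above that quantile.
    """
    costs = np.asarray(costs, dtype=float)
    var = np.quantile(costs, alpha)      # Value-at-Risk at level alpha
    cvar = costs[costs >= var].mean()    # Conditional Value-at-Risk (tail mean)
    return var, cvar

# Example with N = 50 hypothetical trajectory costs, mirroring the evaluation protocol above.
rng = np.random.default_rng(0)
trajectory_costs = rng.normal(loc=-3000.0, scale=250.0, size=50)
var_09, cvar_09 = empirical_var_cvar(trajectory_costs, alpha=0.9)
print(f"VaR_0.9 = {var_09:.1f}, CVaR_0.9 = {cvar_09:.1f}")
```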
1707.06658#34
RAIL: Risk-Averse Imitation Learning
Imitation learning algorithms learn viable policies by imitating an expert's behavior when reward signals are not available. Generative Adversarial Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies when the expert's behavior is available as a fixed set of trajectories. We evaluate in terms of the expert's cost function and observe that the distribution of trajectory-costs is often more heavy-tailed for GAIL-agents than the expert at a number of benchmark continuous-control tasks. Thus, high-cost trajectories, corresponding to tail-end events of catastrophic failure, are more likely to be encountered by the GAIL-agents than the expert. This makes the reliability of GAIL-agents questionable when it comes to deployment in risk-sensitive applications like robotic surgery and autonomous driving. In this work, we aim to minimize the occurrence of tail-end events by minimizing tail risk within the GAIL framework. We quantify tail risk by the Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse Imitation Learning (RAIL) algorithm. We observe that the policies learned with RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed RAIL algorithm appears as a potent alternative to GAIL for improved reliability in risk-sensitive applications.
http://arxiv.org/pdf/1707.06658
Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul
cs.LG, cs.AI
Accepted for presentation in Deep Reinforcement Learning Symposium at NIPS 2017
null
cs.LG
20170720
20171129
[ { "id": "1703.01703" }, { "id": "1704.07911" }, { "id": "1708.06374" }, { "id": "1604.07316" }, { "id": "1610.03295" }, { "id": "1606.01540" } ]
1707.06342
35
Table 2. Comparison among several state-of-the-art pruning methods on the VGG-16 network. Some exact values are not reported in the original paper and cannot be computed, thus we use ≈ to denote the approximate value.

| Method | Top-1 Acc. | Top-5 Acc. | #Param. ↓ | #FLOPs ↓ |
|---|---|---|---|---|
| APoZ-1 [14] | -2.16% | -0.84% | 2.04× | ≈ 1× |
| APoZ-2 [14] | +1.81% | +1.25% | 2.70× | ≈ 1× |
| Taylor-1 [23] | – | -1.44% | ≈ 1× | 2.68× |
| Taylor-2 [23] | – | -3.94% | ≈ 1× | 3.86× |
| ThiNet-WS [21] | +1.01% | +0.69% | 1.05× | 3.23× |
| ThiNet-Conv | +1.46% | +1.09% | 1.05× | 3.23× |
| ThiNet-GAP | -1.00% | -0.52% | 16.63× | 3.31× |
1707.06342#35
ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$ compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.
http://arxiv.org/pdf/1707.06342
Jian-Hao Luo, Jianxin Wu, Weiyao Lin
cs.CV
To appear in ICCV 2017
null
cs.CV
20170720
20170720
[ { "id": "1602.07360" }, { "id": "1610.02391" }, { "id": "1607.03250" } ]
1707.06658
35
• The applicability of RAIL is not limited to environments in which the distribution of trajectory costs is heavy-tailed for GAIL. [Rockafellar and Uryasev, 2000] showed that if the risk variable Z is normally distributed, then CVaRα(Z) = µZ + a(α)σZ, where a(α) is a constant for a given α, and µZ and σZ are the mean and standard deviation of Z (a short numerical check of this identity is sketched below). Thus, even in the absence of a heavy tail, minimizing CVaRα of the trajectory cost aids learning better policies by also driving down the mean and standard deviation of the trajectory cost. The results on Reacher-v1 corroborate this claim: although the histogram does not show a heavy tail (Figure 3 in Appendix B), the mean converges well (Figure 2) and the tail risk scores improve (Table 2), indicating that the distribution of trajectory costs is more concentrated around the mean than for GAIL. We can therefore use RAIL instead of GAIL regardless of whether the distribution of trajectory costs under GAIL is heavy-tailed.

• Figure 2 shows the variation of mean trajectory cost over training iterations for GAIL and RAIL. We observe that RAIL converges almost as fast as GAIL at all the continuous-control tasks under discussion, and at times even faster.
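A minimal numerical sketch of the identity quoted above. The paper does not spell out a(α); this sketch assumes the standard closed form a(α) = φ(Φ⁻¹(α)) / (1 − α) for a normal risk variable, and the distribution parameters are arbitrary.

```python
import numpy as np
from scipy.stats import norm

def cvar_normal(mu, sigma, alpha=0.9):
    """Closed-form CVaR of Z ~ N(mu, sigma^2):
    CVaR_alpha(Z) = mu + a(alpha) * sigma, with a(alpha) = pdf(ppf(alpha)) / (1 - alpha)."""
    a = norm.pdf(norm.ppf(alpha)) / (1.0 - alpha)
    return mu + a * sigma

# Monte Carlo check: CVaR is the mean of the worst (1 - alpha) fraction of samples.
rng = np.random.default_rng(0)
z = rng.normal(2.0, 3.0, size=1_000_000)
var = np.quantile(z, 0.9)
print(cvar_normal(2.0, 3.0, 0.9), z[z >= var].mean())  # the two values should agree closely
```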
1707.06658#35
RAIL: Risk-Averse Imitation Learning
Imitation learning algorithms learn viable policies by imitating an expert's behavior when reward signals are not available. Generative Adversarial Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies when the expert's behavior is available as a fixed set of trajectories. We evaluate in terms of the expert's cost function and observe that the distribution of trajectory-costs is often more heavy-tailed for GAIL-agents than the expert at a number of benchmark continuous-control tasks. Thus, high-cost trajectories, corresponding to tail-end events of catastrophic failure, are more likely to be encountered by the GAIL-agents than the expert. This makes the reliability of GAIL-agents questionable when it comes to deployment in risk-sensitive applications like robotic surgery and autonomous driving. In this work, we aim to minimize the occurrence of tail-end events by minimizing tail risk within the GAIL framework. We quantify tail risk by the Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse Imitation Learning (RAIL) algorithm. We observe that the policies learned with RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed RAIL algorithm appears as a potent alternative to GAIL for improved reliability in risk-sensitive applications.
http://arxiv.org/pdf/1707.06658
Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul
cs.LG, cs.AI
Accepted for presentation in Deep Reinforcement Learning Symposium at NIPS 2017
null
cs.LG
20170720
20171129
[ { "id": "1703.01703" }, { "id": "1704.07911" }, { "id": "1708.06374" }, { "id": "1604.07316" }, { "id": "1610.03295" }, { "id": "1606.01540" } ]
1707.06342
36
conducted on one M40 GPU with batch size 32, accelerated by cuDNN v5.1. Since convolution operations dominate the computational cost of VGG-16, reducing FLOPs greatly accelerates inference, as shown in Table 1. We then compare our approach with several state-of-the-art pruning methods on the VGG-16 model in Table 2. These methods also focus on filter-level pruning, but with entirely different selection criteria. APoZ [14] aims to reduce the number of parameters, but its performance is limited. APoZ-1 prunes only a few layers (conv4, conv5, and the FC layers), yet leads to significant accuracy degradation. APoZ-2 then prunes only conv5-3 and the FC layers; its accuracy improves, but the model barely reduces the FLOPs. Hence, there is a great need for compressing the convolutional layers.
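As a back-of-the-envelope illustration of why pruning filters in the convolutional layers cuts FLOPs so effectively (hypothetical layer sizes, not the paper's exact accounting):

```python
def conv_flops(h_out, w_out, c_in, c_out, k=3):
    """Multiply-accumulate operations of one k x k convolutional layer (bias ignored)."""
    return h_out * w_out * c_in * c_out * k * k

# Two consecutive 3x3 layers on a 56x56 feature map. Pruning half of the filters in the
# first layer also halves the input channels of the second layer, so FLOPs drop in both.
before = conv_flops(56, 56, 256, 256) + conv_flops(56, 56, 256, 256)
after  = conv_flops(56, 56, 256, 128) + conv_flops(56, 56, 128, 256)
print(f"FLOPs reduction for this pair of layers: {before / after:.2f}x")
```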
1707.06342#36
ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$ compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.
http://arxiv.org/pdf/1707.06342
Jian-Hao Luo, Jianxin Wu, Weiyao Lin
cs.CV
To appear in ICCV 2017
null
cs.CV
20170720
20170720
[ { "id": "1602.07360" }, { "id": "1610.02391" }, { "id": "1607.03250" } ]
1707.06658
36
• The success of RAIL in learning a viable policy for Humanoid-v1 suggests that RAIL scales to large environments. Scalability is one of the salient features of GAIL, and RAIL preserves it while showing lower tail risk.

RAIL agents show lower tail risk than GAIL agents once training is complete. However, RAIL still requires the agent to act in the real world and sample trajectories (line 3 in Algorithm 1) during training. One way to rule out environmental interaction during training is to make the agent act in a simulator while learning from the expert’s real-world demonstrations; the setting then changes to that of third-person imitation learning [Stadie et al., 2017]. The RAIL formulation can easily be ported to this framework, but we do not evaluate that in this paper.

# 7 Conclusion

This paper presents the RAIL algorithm, which incorporates CVaR optimization within the original GAIL algorithm to minimize tail risk and thus improve the reliability of learned policies. We report significant improvements over GAIL on a number of evaluation metrics across five continuous-control tasks. The proposed algorithm is thus a viable step toward learning low-risk policies by imitation in complex environments, especially in risk-sensitive applications like robotic surgery and autonomous driving. We plan to test RAIL on fielded robotic applications in the future.
1707.06658#36
RAIL: Risk-Averse Imitation Learning
Imitation learning algorithms learn viable policies by imitating an expert's behavior when reward signals are not available. Generative Adversarial Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies when the expert's behavior is available as a fixed set of trajectories. We evaluate in terms of the expert's cost function and observe that the distribution of trajectory-costs is often more heavy-tailed for GAIL-agents than the expert at a number of benchmark continuous-control tasks. Thus, high-cost trajectories, corresponding to tail-end events of catastrophic failure, are more likely to be encountered by the GAIL-agents than the expert. This makes the reliability of GAIL-agents questionable when it comes to deployment in risk-sensitive applications like robotic surgery and autonomous driving. In this work, we aim to minimize the occurrence of tail-end events by minimizing tail risk within the GAIL framework. We quantify tail risk by the Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse Imitation Learning (RAIL) algorithm. We observe that the policies learned with RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed RAIL algorithm appears as a potent alternative to GAIL for improved reliability in risk-sensitive applications.
http://arxiv.org/pdf/1707.06658
Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul
cs.LG, cs.AI
Accepted for presentation in Deep Reinforcement Learning Symposium at NIPS 2017
null
cs.LG
20170720
20171129
[ { "id": "1703.01703" }, { "id": "1704.07911" }, { "id": "1708.06374" }, { "id": "1604.07316" }, { "id": "1610.03295" }, { "id": "1606.01540" } ]
1707.06342
37
In contrast, Molchanov et al. [23] focus on model acceleration and prune only the convolutional layers. They argue that a filter can be removed safely if it has little influence on the loss function; since computing this influence exactly is very time-consuming, they use a Taylor expansion to approximate the loss change. Their motivation and goals are similar to ours, but with an entirely different selection criterion and training framework. As shown in Table 2, the ThiNet-Conv model is significantly better than the Taylor method: our model can even improve classification accuracy with a larger FLOPs reduction. As for the weight-sum criterion [21], its performance on VGG-16 has not been explored, so we simply replace our selection method with weight sum in the ThiNet framework and report the resulting accuracy, denoted “ThiNet-WS” (a sketch of this criterion follows below). All other parameters are kept the same; only the selection criterion differs. Note that different fine-tuning frameworks may lead to very different results, so the accuracy might differ if Li et al. [21] had run this with their own framework. Because the rest of the setup is identical, the comparison between ThiNet-WS and ThiNet is fair, and ThiNet obtains better results.
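A minimal sketch of the weight-sum (l1-norm) filter-importance criterion referenced above, as it is commonly described. The array shapes and pruning ratio here are illustrative assumptions, and ThiNet itself ranks filters using statistics from the next layer instead.

```python
import numpy as np

def weight_sum_ranking(conv_weights):
    """Rank filters by the l1-norm of their kernel weights (the 'weight sum' criterion).

    conv_weights: array of shape (num_filters, in_channels, k, k).
    Returns filter indices sorted from least to most important.
    """
    importance = np.abs(conv_weights).sum(axis=(1, 2, 3))
    return np.argsort(importance)

# Example: discard the 50% least-important filters of a hypothetical 64-filter layer.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 128, 3, 3))
order = weight_sum_ranking(w)
pruned = order[: len(order) // 2]   # indices of filters to discard
print(sorted(pruned.tolist())[:10])
```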
1707.06342#37
ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$ compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.
http://arxiv.org/pdf/1707.06342
Jian-Hao Luo, Jianxin Wu, Weiyao Lin
cs.CV
To appear in ICCV 2017
null
cs.CV
20170720
20170720
[ { "id": "1602.07360" }, { "id": "1610.02391" }, { "id": "1607.03250" } ]
1707.06658
37
Acknowledgments The authors would like to thank Apoorv Vyas of Intel Labs and Sapana Chaudhary of IIT Madras for helpful discussions. Anirban Santara’s travel was supported by Google India under the Google India PhD Fellowship Award. # References Pieter Abbeel and Andrew Y Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the twenty-first international conference on Machine learning, page 1. ACM, 2004. Pieter Abbeel and Andrew Y Ng. Inverse reinforcement learning. In Encyclopedia of machine learning, pages 554–558. Springer, 2011. Pieter Abbeel, Adam Coates, Morgan Quigley, and Andrew Y Ng. An application of reinforcement learning to aerobatic helicopter flight. In Advances in neural information processing systems, pages 1–8, 2007. Brenna D. Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. A survey of robot learning from demonstration. Robotics and Autonomous Systems, 57(5):469–483, 2009. ISSN 0921-8890. doi: http://dx.doi.org/10.1016/j.robot.2008.10.024. URL http://www.sciencedirect.com/science/article/pii/S0921889008001772.
1707.06658#37
RAIL: Risk-Averse Imitation Learning
Imitation learning algorithms learn viable policies by imitating an expert's behavior when reward signals are not available. Generative Adversarial Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies when the expert's behavior is available as a fixed set of trajectories. We evaluate in terms of the expert's cost function and observe that the distribution of trajectory-costs is often more heavy-tailed for GAIL-agents than the expert at a number of benchmark continuous-control tasks. Thus, high-cost trajectories, corresponding to tail-end events of catastrophic failure, are more likely to be encountered by the GAIL-agents than the expert. This makes the reliability of GAIL-agents questionable when it comes to deployment in risk-sensitive applications like robotic surgery and autonomous driving. In this work, we aim to minimize the occurrence of tail-end events by minimizing tail risk within the GAIL framework. We quantify tail risk by the Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse Imitation Learning (RAIL) algorithm. We observe that the policies learned with RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed RAIL algorithm appears as a potent alternative to GAIL for improved reliability in risk-sensitive applications.
http://arxiv.org/pdf/1707.06658
Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul
cs.LG, cs.AI
Accepted for presentation in Deep Reinforcement Learning Symposium at NIPS 2017
null
cs.LG
20170720
20171129
[ { "id": "1703.01703" }, { "id": "1704.07911" }, { "id": "1708.06374" }, { "id": "1604.07316" }, { "id": "1610.03295" }, { "id": "1606.01540" } ]
1707.06658
38
Christopher G Atkeson and Stefan Schaal. Robot learning from demonstration. In ICML, volume 97, pages 12–20, 1997. Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, et al. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316, 2016. Mariusz Bojarski, Philip Yeres, Anna Choromanska, Krzysztof Choromanski, Bernhard Firner, Lawrence Jackel, and Urs Muller. Explaining how a deep neural network trained with end-to-end learning steers a car. arXiv preprint arXiv:1704.07911, 2017. Vivek S Borkar. Q-learning for risk-sensitive control. Mathematics of operations research, 27(2): 294–311, 2002. Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. arXiv preprint arXiv:1606.01540, 2016.
1707.06658#38
RAIL: Risk-Averse Imitation Learning
Imitation learning algorithms learn viable policies by imitating an expert's behavior when reward signals are not available. Generative Adversarial Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies when the expert's behavior is available as a fixed set of trajectories. We evaluate in terms of the expert's cost function and observe that the distribution of trajectory-costs is often more heavy-tailed for GAIL-agents than the expert at a number of benchmark continuous-control tasks. Thus, high-cost trajectories, corresponding to tail-end events of catastrophic failure, are more likely to be encountered by the GAIL-agents than the expert. This makes the reliability of GAIL-agents questionable when it comes to deployment in risk-sensitive applications like robotic surgery and autonomous driving. In this work, we aim to minimize the occurrence of tail-end events by minimizing tail risk within the GAIL framework. We quantify tail risk by the Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse Imitation Learning (RAIL) algorithm. We observe that the policies learned with RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed RAIL algorithm appears as a potent alternative to GAIL for improved reliability in risk-sensitive applications.
http://arxiv.org/pdf/1707.06658
Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul
cs.LG, cs.AI
Accepted for presentation in Deep Reinforcement Learning Symposium at NIPS 2017
null
cs.LG
20170720
20171129
[ { "id": "1703.01703" }, { "id": "1704.07911" }, { "id": "1708.06374" }, { "id": "1604.07316" }, { "id": "1610.03295" }, { "id": "1606.01540" } ]
1707.06342
39
Using these smaller compression ratios, we train a very small model. Denoted as “ThiNet-Tiny” in Table 1, it takes only 5.05MB of disk space (1MB = 2^20 bytes) but still has AlexNet-level accuracy (the top-1/top-5 accuracy of AlexNet is 57.2%/80.3%, respectively). ThiNet-Tiny has exactly the same level of model complexity as the recently proposed compact network SqueezeNet [15], but shows higher accuracy. Although ThiNet-Tiny needs more FLOPs, its actual speed is even faster than SqueezeNet because it has a much simpler network structure. SqueezeNet adopts a special structure, namely the Fire module, which is parameter-efficient but relies on manual network structure design. In contrast, ThiNet is a unified framework, and higher accuracy would be obtained if we started from a more accurate model. # 4.3. ResNet-50 on ImageNet We also explore the performance of ThiNet on the recently proposed powerful CNN architecture ResNet [11]. We select ResNet-50 as the representative of the ResNet family; it shares the same building blocks as the other members and differs little from them.
1707.06342#39
ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$ compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.
http://arxiv.org/pdf/1707.06342
Jian-Hao Luo, Jianxin Wu, Weiyao Lin
cs.CV
To appear in ICCV 2017
null
cs.CV
20170720
20170720
[ { "id": "1602.07360" }, { "id": "1610.02391" }, { "id": "1607.03250" } ]
1707.06658
39
Yinlam Chow and Mohammad Ghavamzadeh. Algorithms for CVaR optimization in MDPs. In Advances in neural information processing systems, pages 3509–3517, 2014. Nivine Dalleh. Why is CVaR superior to VaR? (c2009). PhD thesis, 2011. Hal Daumé, John Langford, and Daniel Marcu. Search-based structured prediction. Machine learning, 75(3):297–325, 2009. Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv:1412.6980 [cs.LG], 2015. Chelsea Finn, Sergey Levine, and Pieter Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. In International Conference on Machine Learning, pages 49–58, 2016. Javier García and Fernando Fernández. A comprehensive survey on safe reinforcement learning. Journal of Machine Learning Research, 16(1):1437–1480, 2015. Paul W Glimcher and Ernst Fehr. Neuroeconomics: Decision making and the brain. Academic Press, 2013.
1707.06658#39
RAIL: Risk-Averse Imitation Learning
Imitation learning algorithms learn viable policies by imitating an expert's behavior when reward signals are not available. Generative Adversarial Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies when the expert's behavior is available as a fixed set of trajectories. We evaluate in terms of the expert's cost function and observe that the distribution of trajectory-costs is often more heavy-tailed for GAIL-agents than the expert at a number of benchmark continuous-control tasks. Thus, high-cost trajectories, corresponding to tail-end events of catastrophic failure, are more likely to be encountered by the GAIL-agents than the expert. This makes the reliability of GAIL-agents questionable when it comes to deployment in risk-sensitive applications like robotic surgery and autonomous driving. In this work, we aim to minimize the occurrence of tail-end events by minimizing tail risk within the GAIL framework. We quantify tail risk by the Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse Imitation Learning (RAIL) algorithm. We observe that the policies learned with RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed RAIL algorithm appears as a potent alternative to GAIL for improved reliability in risk-sensitive applications.
http://arxiv.org/pdf/1707.06658
Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul
cs.LG, cs.AI
Accepted for presentation in Deep Reinforcement Learning Symposium at NIPS 2017
null
cs.LG
20170720
20171129
[ { "id": "1703.01703" }, { "id": "1704.07911" }, { "id": "1708.06374" }, { "id": "1604.07316" }, { "id": "1610.03295" }, { "id": "1606.01540" } ]
1707.06342
40
Similar to VGG-16, we prune ResNet-50 from block 2a to 5c iteratively. In addition to the filters, the corresponding channels in the batch-normalization layers are also discarded (see the sketch below). After pruning, the model is fine-tuned for one epoch with a fixed learning rate of 10^-4, and 9 epochs of fine-tuning with the learning rate decayed from 10^-3 to 10^-5 are performed in the last round to gain higher accuracy. Other parameters are kept the same as in our VGG-16 pruning experiment. Because ResNet is a recently proposed model, the literature lacks works that compress this network. We report the performance of ThiNet on pruning ResNet-50 in Table 3. We prune this model with 3 different compression rates (preserving 70%, 50%, and 30% of the filters in each block, respectively). Unlike VGG-16, ResNet is more compact: there is less redundancy, so pruning a large number of filters is more challenging. In spite of this, our method ThiNet-50 can still prune more than
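A minimal PyTorch sketch (not the authors' code) of what discarding filters together with the matching batch-normalization channels looks like. The layer sizes and the kept indices below are arbitrary placeholders; ThiNet chooses which filters to keep from next-layer statistics, and the caller must also slice the next layer's input channels.

```python
import torch
import torch.nn as nn

def prune_filters(conv: nn.Conv2d, bn: nn.BatchNorm2d, keep: torch.Tensor):
    """Return a new Conv2d/BatchNorm2d pair keeping only the output channels in `keep`."""
    new_conv = nn.Conv2d(conv.in_channels, len(keep), conv.kernel_size,
                         stride=conv.stride, padding=conv.padding,
                         bias=conv.bias is not None)
    new_conv.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        new_conv.bias.data = conv.bias.data[keep].clone()

    new_bn = nn.BatchNorm2d(len(keep))
    new_bn.weight.data = bn.weight.data[keep].clone()
    new_bn.bias.data = bn.bias.data[keep].clone()
    new_bn.running_mean.copy_(bn.running_mean[keep])
    new_bn.running_var.copy_(bn.running_var[keep])
    return new_conv, new_bn

# Keep 50% of the filters of a hypothetical 256-filter layer (indices chosen arbitrarily here).
conv, bn = nn.Conv2d(64, 256, 3, padding=1), nn.BatchNorm2d(256)
keep = torch.arange(128)
small_conv, small_bn = prune_filters(conv, bn, keep)
print(small_conv.weight.shape, small_bn.weight.shape)
```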
1707.06342#40
ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$ compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.
http://arxiv.org/pdf/1707.06342
Jian-Hao Luo, Jianxin Wu, Weiyao Lin
cs.CV
To appear in ICCV 2017
null
cs.CV
20170720
20170720
[ { "id": "1602.07360" }, { "id": "1610.02391" }, { "id": "1607.03250" } ]
1707.06658
40
Paul W Glimcher and Ernst Fehr. Neuroeconomics: Decision making and the brain. Academic Press, 2013. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014. Simon Haykin. Neural Networks: A Comprehensive Foundation. Prentice Hall PTR, Upper Saddle River, NJ, USA, 2nd edition, 1998. ISBN 0132733501. Matthias Heger. Consideration of risk in reinforcement learning. In Proceedings of the Eleventh International Conference on Machine Learning, pages 105–111, 1994. Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In Advances in Neural Information Processing Systems, pages 4565–4573, 2016. Ronald A Howard and James E Matheson. Risk-sensitive Markov decision processes. Management science, 18(7):356–369, 1972. Ming Hsu, Meghana Bhatt, Ralph Adolphs, Daniel Tranel, and Colin F Camerer. Neural systems responding to degrees of uncertainty in human decision-making. Science, 310(5754):1680–1683, 2005.
1707.06658#40
RAIL: Risk-Averse Imitation Learning
Imitation learning algorithms learn viable policies by imitating an expert's behavior when reward signals are not available. Generative Adversarial Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies when the expert's behavior is available as a fixed set of trajectories. We evaluate in terms of the expert's cost function and observe that the distribution of trajectory-costs is often more heavy-tailed for GAIL-agents than the expert at a number of benchmark continuous-control tasks. Thus, high-cost trajectories, corresponding to tail-end events of catastrophic failure, are more likely to be encountered by the GAIL-agents than the expert. This makes the reliability of GAIL-agents questionable when it comes to deployment in risk-sensitive applications like robotic surgery and autonomous driving. In this work, we aim to minimize the occurrence of tail-end events by minimizing tail risk within the GAIL framework. We quantify tail risk by the Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse Imitation Learning (RAIL) algorithm. We observe that the policies learned with RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed RAIL algorithm appears as a potent alternative to GAIL for improved reliability in risk-sensitive applications.
http://arxiv.org/pdf/1707.06658
Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul
cs.LG, cs.AI
Accepted for presentation in Deep Reinforcement Learning Symposium at NIPS 2017
null
cs.LG
20170720
20171129
[ { "id": "1703.01703" }, { "id": "1704.07911" }, { "id": "1708.06374" }, { "id": "1604.07316" }, { "id": "1610.03295" }, { "id": "1606.01540" } ]
1707.06342
41
Table 3. Overall performance of pruning ResNet-50 on ImageNet via ThiNet with different compression rates. Here, M/B means million/billion respectively; f./b. denotes the forward/backward speed tested on one M40 GPU with batch size 32.

| Model | #Param. | #FLOPs | Top-1 | Top-5 | f./b. (ms) |
|---|---|---|---|---|---|
| Original | 25.56M | 7.72B | 72.88% | 91.14% | 188.27/269.32 |
| ThiNet-70 | 16.94M | 4.88B | 72.04% | 90.67% | 169.38/243.37 |
| ThiNet-50 | 12.38M | 3.41B | 71.01% | 90.02% | 153.60/212.29 |
| ThiNet-30 | 8.66M | 2.20B | 68.42% | 88.30% | 144.45/200.67 |

half of the parameters with roughly 1% top-5 accuracy drop. Further pruning can also be carried out, leading to a much smaller model at the cost of more accuracy loss.
1707.06342#41
ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$ compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.
http://arxiv.org/pdf/1707.06342
Jian-Hao Luo, Jianxin Wu, Weiyao Lin
cs.CV
To appear in ICCV 2017
null
cs.CV
20170720
20170720
[ { "id": "1602.07360" }, { "id": "1610.02391" }, { "id": "1607.03250" } ]
1707.06658
41
Investopedia. Definition of tail risk. http://www.investopedia.com/terms/t/tailrisk.asp, 2017. Accessed: 2017-09-11. Sham Kakade and John Langford. Approximately optimal approximate reinforcement learning. In ICML, volume 2, pages 267–274, 2002. Sergey Levine and Vladlen Koltun. Continuous inverse optimal control with locally optimal examples. arXiv preprint arXiv:1206.4617, 2012. Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. CoRR, abs/1509.02971, 2015. URL http://arxiv.org/abs/1509.02971. Anirudha Majumdar, Sumeet Singh, Ajay Mandlekar, and Marco Pavone. Risk-sensitive inverse reinforcement learning via coherent risk models. 2017. Oliver Mihatsch and Ralph Neuneier. Risk-sensitive reinforcement learning. Machine learning, 49(2-3):267–290, 2002.
1707.06658#41
RAIL: Risk-Averse Imitation Learning
Imitation learning algorithms learn viable policies by imitating an expert's behavior when reward signals are not available. Generative Adversarial Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies when the expert's behavior is available as a fixed set of trajectories. We evaluate in terms of the expert's cost function and observe that the distribution of trajectory-costs is often more heavy-tailed for GAIL-agents than the expert at a number of benchmark continuous-control tasks. Thus, high-cost trajectories, corresponding to tail-end events of catastrophic failure, are more likely to be encountered by the GAIL-agents than the expert. This makes the reliability of GAIL-agents questionable when it comes to deployment in risk-sensitive applications like robotic surgery and autonomous driving. In this work, we aim to minimize the occurrence of tail-end events by minimizing tail risk within the GAIL framework. We quantify tail risk by the Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse Imitation Learning (RAIL) algorithm. We observe that the policies learned with RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed RAIL algorithm appears as a potent alternative to GAIL for improved reliability in risk-sensitive applications.
http://arxiv.org/pdf/1707.06658
Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul
cs.LG, cs.AI
Accepted for presentation in Deep Reinforcement Learning Symposium at NIPS 2017
null
cs.LG
20170720
20171129
[ { "id": "1703.01703" }, { "id": "1704.07911" }, { "id": "1708.06374" }, { "id": "1604.07316" }, { "id": "1610.03295" }, { "id": "1606.01540" } ]
1707.06342
42
half of the parameters with roughly 1% top-5 accuracy drop. Further pruning can also be carried out, leading to a much smaller model at the cost of more accuracy loss. However, reduced FLOPs cannot bring the same level of acceleration on ResNet. Due to the structural constraints of ResNet-50, non-tensor layers (e.g., batch normalization and pooling layers) take up more than 40% of the inference time on GPU. Hence, there is a great need to accelerate these non-tensor layers, which should be explored in the future. In this experiment, we prune only the first two layers of each block in ResNet for simplicity, leaving the block output and projection shortcuts unchanged. Pruning these parts would lead to further compression, but can be quite difficult, if not entirely impossible; this exploration seems a promising extension for future work. # 4.4. Domain adaptation ability of the pruned model One of the main advantages of ThiNet is that it does not change the network structure, so a model pruned on ImageNet can easily be transferred to other domains.
1707.06342#42
ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$ compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.
http://arxiv.org/pdf/1707.06342
Jian-Hao Luo, Jianxin Wu, Weiyao Lin
cs.CV
To appear in ICCV 2017
null
cs.CV
20170720
20170720
[ { "id": "1602.07360" }, { "id": "1610.02391" }, { "id": "1607.03250" } ]
1707.06658
42
Oliver Mihatsch and Ralph Neuneier. Risk-sensitive reinforcement learning. Machine learning, 49(2-3):267–290, 2002. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015. Arne J Nagengast, Daniel A Braun, and Daniel M Wolpert. Risk-sensitive optimal feedback control accounts for sensorimotor behavior under uncertainty. PLoS computational biology, 6(7):e1000857, 2010. Andrew Y Ng, Stuart J Russell, et al. Algorithms for inverse reinforcement learning. In ICML, pages 663–670, 2000. Yael Niv, Jeffrey A Edlund, Peter Dayan, and John P O’Doherty. Neural prediction errors reveal a risk-sensitive reinforcement-learning process in the human brain. Journal of Neuroscience, 32(2):551–562, 2012. OpenAI-GAIL. Imitation learning GitHub repository. https://github.com/openai/imitation.git, 2017. Accessed: 2017-06-27.
1707.06658#42
RAIL: Risk-Averse Imitation Learning
Imitation learning algorithms learn viable policies by imitating an expert's behavior when reward signals are not available. Generative Adversarial Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies when the expert's behavior is available as a fixed set of trajectories. We evaluate in terms of the expert's cost function and observe that the distribution of trajectory-costs is often more heavy-tailed for GAIL-agents than the expert at a number of benchmark continuous-control tasks. Thus, high-cost trajectories, corresponding to tail-end events of catastrophic failure, are more likely to be encountered by the GAIL-agents than the expert. This makes the reliability of GAIL-agents questionable when it comes to deployment in risk-sensitive applications like robotic surgery and autonomous driving. In this work, we aim to minimize the occurrence of tail-end events by minimizing tail risk within the GAIL framework. We quantify tail risk by the Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse Imitation Learning (RAIL) algorithm. We observe that the policies learned with RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed RAIL algorithm appears as a potent alternative to GAIL for improved reliability in risk-sensitive applications.
http://arxiv.org/pdf/1707.06658
Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul
cs.LG, cs.AI
Accepted for presentation in Deep Reinforcement Learning Symposium at NIPS 2017
null
cs.LG
20170720
20171129
[ { "id": "1703.01703" }, { "id": "1704.07911" }, { "id": "1708.06374" }, { "id": "1604.07316" }, { "id": "1610.03295" }, { "id": "1606.01540" } ]
1707.06342
43
One of the main advantages of ThiNet is that it does not change the network structure, so a model pruned on ImageNet can easily be transferred to other domains. To better understand this benefit, let us consider a more practical scenario: obtaining a small model on a domain-specific dataset. This is a very common requirement in real-world applications, since ImageNet models are rarely applied directly in a real application. To achieve this goal, there are two feasible strategies: start from a pre-trained ImageNet model and prune on the new dataset, or train a small model from scratch. In this section, we argue that fine-tuning a model that has already been pruned on ImageNet is the better choice.
1707.06342#43
ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$ compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.
http://arxiv.org/pdf/1707.06342
Jian-Hao Luo, Jianxin Wu, Weiyao Lin
cs.CV
To appear in ICCV 2017
null
cs.CV
20170720
20170720
[ { "id": "1602.07360" }, { "id": "1610.02391" }, { "id": "1607.03250" } ]
1707.06658
43
OpenAI-GAIL. Imitation learning GitHub repository. https://github.com/openai/imitation.git, 2017. Accessed: 2017-06-27. Dean A Pomerleau. ALVINN: An autonomous land vehicle in a neural network. In Advances in neural information processing systems, pages 305–313, 1989. Aravind Rajeswaran, Sarvjeet Ghotra, Sergey Levine, and Balaraman Ravindran. EPOpt: Learning robust neural network policies using model ensembles. 5th International Conference on Learning Representations, 2016. R Tyrrell Rockafellar and Stanislav Uryasev. Optimization of conditional value-at-risk. Journal of risk, 2:21–42, 2000. Stéphane Ross and Drew Bagnell. Efficient reductions for imitation learning. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 661–668, 2010. Stephane Ross and J Andrew Bagnell. Reinforcement and imitation learning via interactive no-regret learning. arXiv preprint arXiv:1406.5979, 2014.
1707.06658#43
RAIL: Risk-Averse Imitation Learning
Imitation learning algorithms learn viable policies by imitating an expert's behavior when reward signals are not available. Generative Adversarial Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies when the expert's behavior is available as a fixed set of trajectories. We evaluate in terms of the expert's cost function and observe that the distribution of trajectory-costs is often more heavy-tailed for GAIL-agents than the expert at a number of benchmark continuous-control tasks. Thus, high-cost trajectories, corresponding to tail-end events of catastrophic failure, are more likely to be encountered by the GAIL-agents than the expert. This makes the reliability of GAIL-agents questionable when it comes to deployment in risk-sensitive applications like robotic surgery and autonomous driving. In this work, we aim to minimize the occurrence of tail-end events by minimizing tail risk within the GAIL framework. We quantify tail risk by the Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse Imitation Learning (RAIL) algorithm. We observe that the policies learned with RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed RAIL algorithm appears as a potent alternative to GAIL for improved reliability in risk-sensitive applications.
http://arxiv.org/pdf/1707.06658
Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul
cs.LG, cs.AI
Accepted for presentation in Deep Reinforcement Learning Symposium at NIPS 2017
null
cs.LG
20170720
20171129
[ { "id": "1703.01703" }, { "id": "1704.07911" }, { "id": "1708.06374" }, { "id": "1604.07316" }, { "id": "1610.03295" }, { "id": "1606.01540" } ]
1707.06342
44
These strategies are compared on two different domain-specific datasets: CUB-200 [31] for fine-grained classification and Indoor-67 [25] for scene recognition. We introduced CUB-200 in section 4.1. As for Indoor-67, we follow the official train/test split (5360 training and 1340 test images) to organize this dataset. All the models are fine-tuned with the same hyper-parameters and epochs for a fair comparison. Their performance is shown in Table 4. We first fine-tune the pre-trained VGG-16 model on the new dataset, which is a popular strategy adopted in numerous recognition tasks. As we can see, the fine-tuned model has the highest accuracy, at the cost of huge model size and slow inference speed. Then, we use the proposed ThiNet approach to prune some unimportant filters (denoted by “FT & prune” in Table 4). Table 4. Comparison of different strategies to get a small model on CUB-200 and Indoor-67. “FT” stands for “Fine Tune”.
1707.06342#44
ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$ compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.
http://arxiv.org/pdf/1707.06342
Jian-Hao Luo, Jianxin Wu, Weiyao Lin
cs.CV
To appear in ICCV 2017
null
cs.CV
20170720
20170720
[ { "id": "1602.07360" }, { "id": "1610.02391" }, { "id": "1607.03250" } ]
1707.06658
44
Stéphane Ross, Geoffrey J Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In International Conference on Artificial Intelligence and Statistics, pages 627–635, 2011. Stuart Russell. Learning agents for uncertain environments. In Proceedings of the eleventh annual conference on Computational learning theory, pages 101–103. ACM, 1998. Andrzej Ruszczyński. Risk-averse dynamic programming for Markov decision processes. Mathematical programming, 125(2):235–261, 2010. Stefan Schaal. Learning from demonstration. In Advances in neural information processing systems, pages 1040–1046, 1997. John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, and Pieter Abbeel. Trust region policy optimization. CoRR, abs/1502.05477, 2015. URL http://arxiv.org/abs/1502.05477. Shai Shalev-Shwartz, Shaked Shammah, and Amnon Shashua. Safe, multi-agent, reinforcement learning for autonomous driving. arXiv preprint arXiv:1610.03295, 2016.
1707.06658#44
RAIL: Risk-Averse Imitation Learning
Imitation learning algorithms learn viable policies by imitating an expert's behavior when reward signals are not available. Generative Adversarial Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies when the expert's behavior is available as a fixed set of trajectories. We evaluate in terms of the expert's cost function and observe that the distribution of trajectory-costs is often more heavy-tailed for GAIL-agents than the expert at a number of benchmark continuous-control tasks. Thus, high-cost trajectories, corresponding to tail-end events of catastrophic failure, are more likely to be encountered by the GAIL-agents than the expert. This makes the reliability of GAIL-agents questionable when it comes to deployment in risk-sensitive applications like robotic surgery and autonomous driving. In this work, we aim to minimize the occurrence of tail-end events by minimizing tail risk within the GAIL framework. We quantify tail risk by the Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse Imitation Learning (RAIL) algorithm. We observe that the policies learned with RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed RAIL algorithm appears as a potent alternative to GAIL for improved reliability in risk-sensitive applications.
http://arxiv.org/pdf/1707.06658
Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul
cs.LG, cs.AI
Accepted for presentation in Deep Reinforcement Learning Symposium at NIPS 2017
null
cs.LG
20170720
20171129
[ { "id": "1703.01703" }, { "id": "1704.07911" }, { "id": "1708.06374" }, { "id": "1604.07316" }, { "id": "1610.03295" }, { "id": "1606.01540" } ]
1707.06342
45
Table 4. Comparison of different strategies to get a small model on CUB-200 and Indoor-67. “FT” stands for “Fine Tune”.

| Dataset | Strategy | #Param. | #FLOPs | Top-1 |
|---|---|---|---|---|
| CUB-200 | VGG-16 | 135.07M | 30.93B | 72.30% |
| CUB-200 | FT & prune | 7.91M | 9.34B | 66.90% |
| CUB-200 | Train from scratch | 7.91M | 9.34B | 44.27% |
| CUB-200 | ThiNet-Conv | 128.16M | 9.58B | 70.90% |
| CUB-200 | ThiNet-GAP | 7.91M | 9.34B | 69.43% |
| CUB-200 | ThiNet-Tiny | 1.12M | 2.01B | 65.45% |
| CUB-200 | AlexNet | 57.68M | 1.44B | 57.28% |
| Indoor-67 | VGG-16 | 134.52M | 30.93B | 72.46% |
| Indoor-67 | FT & prune | 7.84M | 9.34B | 64.70% |
| Indoor-67 | Train from scratch | 7.84M | 9.34B | 38.81% |
| Indoor-67 | ThiNet-Conv | 127.62M | 9.57B | 72.31% |
| Indoor-67 | ThiNet-GAP | 7.84M | 9.34B | 70.22% |
| Indoor-67 | ThiNet-Tiny | 1.08M | 2.01B | 62.84% |
| Indoor-67 | AlexNet | 57.68M | 1.44B | 59.55% |
1707.06342#45
ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$ compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.
http://arxiv.org/pdf/1707.06342
Jian-Hao Luo, Jianxin Wu, Weiyao Lin
cs.CV
To appear in ICCV 2017
null
cs.CV
20170720
20170720
[ { "id": "1602.07360" }, { "id": "1610.02391" }, { "id": "1607.03250" } ]
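As a quick check on the numbers reported in Table 4 above, the parameter and FLOP reduction factors can be computed directly from the table entries. The snippet below is a small illustrative calculation only; the values come from the table, while the dictionaries and the helper function are not part of the original paper.

```python
# Illustrative arithmetic on the Table 4 numbers (CUB-200 rows); not part of the original paper.
vgg16 = {"params_m": 135.07, "flops_b": 30.93, "top1": 72.30}
thinet_gap = {"params_m": 7.91, "flops_b": 9.34, "top1": 69.43}
thinet_tiny = {"params_m": 1.12, "flops_b": 2.01, "top1": 65.45}

def reduction(base, pruned):
    """Return (parameter compression factor, FLOPs reduction factor, top-1 drop in points)."""
    return (base["params_m"] / pruned["params_m"],
            base["flops_b"] / pruned["flops_b"],
            base["top1"] - pruned["top1"])

for name, model in [("ThiNet-GAP", thinet_gap), ("ThiNet-Tiny", thinet_tiny)]:
    p, f, d = reduction(vgg16, model)
    print(f"{name}: {p:.1f}x fewer params, {f:.1f}x fewer FLOPs, {d:.2f} point top-1 drop")
# ThiNet-GAP:  ~17.1x fewer params, ~3.3x fewer FLOPs, 2.87 point top-1 drop
# ThiNet-Tiny: ~120.6x fewer params, ~15.4x fewer FLOPs, 6.85 point top-1 drop
```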
1707.06658
45
Shai Shalev-Shwartz, Shaked Shammah, and Amnon Shashua. On a formal model of safe and scalable self-driving cars. arXiv preprint arXiv:1708.06374, 2017.
Yun Shen, Michael J Tobia, Tobias Sommer, and Klaus Obermayer. Risk-sensitive reinforcement learning. Neural computation, 26(7):1298–1328, 2014.
David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
Bradly C Stadie, Pieter Abbeel, and Ilya Sutskever. Third-person imitation learning. arXiv preprint arXiv:1703.01703, 2017.
R.S. Sutton and A.G. Barto. Reinforcement Learning: An Introduction. A Bradford Book, 1998. ISBN 9780262193986. URL https://books.google.co.in/books?id=CAFR6IBF4xYC.
1707.06658#45
RAIL: Risk-Averse Imitation Learning
Imitation learning algorithms learn viable policies by imitating an expert's behavior when reward signals are not available. Generative Adversarial Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies when the expert's behavior is available as a fixed set of trajectories. We evaluate in terms of the expert's cost function and observe that the distribution of trajectory-costs is often more heavy-tailed for GAIL-agents than the expert at a number of benchmark continuous-control tasks. Thus, high-cost trajectories, corresponding to tail-end events of catastrophic failure, are more likely to be encountered by the GAIL-agents than the expert. This makes the reliability of GAIL-agents questionable when it comes to deployment in risk-sensitive applications like robotic surgery and autonomous driving. In this work, we aim to minimize the occurrence of tail-end events by minimizing tail risk within the GAIL framework. We quantify tail risk by the Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse Imitation Learning (RAIL) algorithm. We observe that the policies learned with RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed RAIL algorithm appears as a potent alternative to GAIL for improved reliability in risk-sensitive applications.
http://arxiv.org/pdf/1707.06658
Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul
cs.LG, cs.AI
Accepted for presentation in Deep Reinforcement Learning Symposium at NIPS 2017
null
cs.LG
20170720
20171129
[ { "id": "1703.01703" }, { "id": "1704.07911" }, { "id": "1708.06374" }, { "id": "1604.07316" }, { "id": "1610.03295" }, { "id": "1606.01540" } ]
1707.06342
46
& prune”), converting the cumbersome model into a much smaller one. With small-scale training examples, the accuracy cannot be recovered completely, i.e., the pruned model can be easily trapped into bad local minima. However, if we train a network from scratch with the same structure, its accuracy can be much lower. We suggest fine-tuning the ThiNet model, which is first pruned using the ImageNet data. As shown in Table 4, this strategy gets the best trade-off between model size and classification accuracy. It is worth noting that the ThiNet-Conv model can even obtain a similar accuracy as the original VGG-16, but is smaller and much faster. We also report the performance of ThiNet-Tiny on these two datasets. Although ThiNet-Tiny has the same level of accuracy as AlexNet on ImageNet, it shows much stronger generalization ability. This tiny model can achieve 3% ∼ 8% higher classification accuracy than AlexNet when transferred into domain-specific tasks with 50× fewer parameters. And its model size is small enough to be deployed on resource constrained devices. # 5. Conclusion
1707.06342#46
ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$ compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.
http://arxiv.org/pdf/1707.06342
Jian-Hao Luo, Jianxin Wu, Weiyao Lin
cs.CV
To appear in ICCV 2017
null
cs.CV
20170720
20170720
[ { "id": "1602.07360" }, { "id": "1610.02391" }, { "id": "1607.03250" } ]
1707.06658
46
Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 5026–5033. IEEE, 2012.
Brian D Ziebart. Modeling Purposeful Adaptive Behavior with the Principle of Maximum Causal Entropy. PhD thesis, Carnegie Mellon University, 2010.
Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, and Anind K Dey. Maximum entropy inverse reinforcement learning. In AAAI, volume 8, pages 1433–1438. Chicago, IL, USA, 2008.
# Appendix
# A Calculation of Gradients of the CVaR term
In this section we derive expressions of gradients of the CVaR term in equation 9 w.r.t. π, D, and ν. Let us denote Hα(Dπ(ξ|c(D)), ν) by L_CVaR. Our derivations are inspired by those shown by Chow and Ghavamzadeh [2014].
• Gradient of L_CVaR w.r.t. D:
1707.06658#46
RAIL: Risk-Averse Imitation Learning
Imitation learning algorithms learn viable policies by imitating an expert's behavior when reward signals are not available. Generative Adversarial Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies when the expert's behavior is available as a fixed set of trajectories. We evaluate in terms of the expert's cost function and observe that the distribution of trajectory-costs is often more heavy-tailed for GAIL-agents than the expert at a number of benchmark continuous-control tasks. Thus, high-cost trajectories, corresponding to tail-end events of catastrophic failure, are more likely to be encountered by the GAIL-agents than the expert. This makes the reliability of GAIL-agents questionable when it comes to deployment in risk-sensitive applications like robotic surgery and autonomous driving. In this work, we aim to minimize the occurrence of tail-end events by minimizing tail risk within the GAIL framework. We quantify tail risk by the Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse Imitation Learning (RAIL) algorithm. We observe that the policies learned with RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed RAIL algorithm appears as a potent alternative to GAIL for improved reliability in risk-sensitive applications.
http://arxiv.org/pdf/1707.06658
Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul
cs.LG, cs.AI
Accepted for presentation in Deep Reinforcement Learning Symposium at NIPS 2017
null
cs.LG
20170720
20171129
[ { "id": "1703.01703" }, { "id": "1704.07911" }, { "id": "1708.06374" }, { "id": "1604.07316" }, { "id": "1610.03295" }, { "id": "1606.01540" } ]
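For readability, the quantity being differentiated in the appendix derivation introduced in the chunk above (and continued in the following chunks) is the CVaR surrogate of trajectory cost. Assuming the standard Rockafellar–Uryasev form used by Chow and Ghavamzadeh [2014], which the later equations (A.1) and (A.6) imply, it can be written as:

```latex
% CVaR surrogate implied by the appendix derivation (standard Rockafellar–Uryasev form);
% D^\pi(\xi|c(D)) is the discounted trajectory cost under the discriminator-derived cost c(D).
L_{CVaR} \;=\; H_\alpha\!\big(D^\pi(\xi|c(D)), \nu\big)
         \;=\; \nu \;+\; \frac{1}{1-\alpha}\,
               \mathbb{E}_{\xi \sim \pi}\!\Big[\big(D^\pi(\xi|c(D)) - \nu\big)^{+}\Big],
\qquad (x)^{+} = \max(x, 0).
```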
1707.06342
47
# 5. Conclusion In this paper, we proposed a unified framework, namely ThiNet, for CNN model acceleration and compression. The proposed filter level pruning method shows significant improvements over existing methods. In the future, we would like to prune the projection shortcuts of ResNet. An alternative method for better channel selection is also worth studying. In addition, extensive exploration on more vision tasks (such as object detection or semantic segmentation) with the pruned networks is an interesting direction too. The pruned networks will greatly accelerate these vision tasks. # Acknowledgements This work was supported in part by the National Natural Science Foundation of China under Grant No. 61422203. # References [1] Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. TPAMI, 35(8):1798–1828, 2013. 6 [2] G. Chechik, I. Meilijson, and E. Ruppin. Synaptic pruning in development: A computational account. Neural computation, 10(7):1759–1777, 1998. 1
1707.06342#47
ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$ compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.
http://arxiv.org/pdf/1707.06342
Jian-Hao Luo, Jianxin Wu, Weiyao Lin
cs.CV
To appear in ICCV 2017
null
cs.CV
20170720
20170720
[ { "id": "1602.07360" }, { "id": "1610.02391" }, { "id": "1607.03250" } ]
1707.06658
47
• Gradient of L_CVaR w.r.t. D:

\begin{align}
\nabla_D L_{CVaR} &= \nabla_D \left[ \nu + \frac{1}{1-\alpha}\, \mathbb{E}_{\xi\sim\pi}\!\left[ \left(D^\pi(\xi|c(D)) - \nu\right)^+ \right] \right] \nonumber\\
&= \frac{1}{1-\alpha}\, \mathbb{E}_{\xi\sim\pi}\!\left[ \mathbf{1}\!\left(D^\pi(\xi|c(D)) > \nu\right) \nabla_D D^\pi(\xi|c(D)) \right] \tag{A.1}
\end{align}

where $\mathbf{1}(\cdot)$ denotes the indicator function. Now,

\begin{equation}
\nabla_D D^\pi(\xi|c(D)) = \nabla_c D^\pi(\xi|c(D))\, \nabla_D c(D) \tag{A.2}
\end{equation}

\begin{equation}
\nabla_c D^\pi(\xi|c(D)) = \nabla_c \sum_{t=0}^{L_\xi-1} \gamma^t c(s_t, a_t) = \sum_{t=0}^{L_\xi-1} \gamma^t = \frac{1-\gamma^{L_\xi}}{1-\gamma} \tag{A.3}
\end{equation}

Substituting equation A.3 in A.2 and then A.2 in A.1, we have the following:

\begin{equation}
\nabla_D L_{CVaR} = \frac{1}{1-\alpha}\, \mathbb{E}_{\xi\sim\pi}\!\left[ \frac{1-\gamma^{L_\xi}}{1-\gamma}\, \mathbf{1}\!\left(D^\pi(\xi|c(D)) > \nu\right) \nabla_D c(D) \right] \tag{A.4}
\end{equation}

• Gradient of L_CVaR w.r.t. π:

\begin{align}
\nabla_\pi L_{CVaR} &= \nabla_\pi H_\alpha\!\left(D^\pi(\xi|c(D)), \nu\right) \nonumber\\
&= \nabla_\pi \frac{1}{1-\alpha}\, \mathbb{E}_{\xi\sim\pi}\!\left[ \left(D^\pi(\xi|c(D)) - \nu\right)^+ \right] \nonumber\\
&= \frac{1}{1-\alpha}\, \mathbb{E}_{\xi\sim\pi}\!\left[ \left(\nabla_\pi \log P(\xi|\pi)\right) \left(D^\pi(\xi|c(D)) - \nu\right)^+ \right] \tag{A.5}
\end{align}
1707.06658#47
RAIL: Risk-Averse Imitation Learning
Imitation learning algorithms learn viable policies by imitating an expert's behavior when reward signals are not available. Generative Adversarial Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies when the expert's behavior is available as a fixed set of trajectories. We evaluate in terms of the expert's cost function and observe that the distribution of trajectory-costs is often more heavy-tailed for GAIL-agents than the expert at a number of benchmark continuous-control tasks. Thus, high-cost trajectories, corresponding to tail-end events of catastrophic failure, are more likely to be encountered by the GAIL-agents than the expert. This makes the reliability of GAIL-agents questionable when it comes to deployment in risk-sensitive applications like robotic surgery and autonomous driving. In this work, we aim to minimize the occurrence of tail-end events by minimizing tail risk within the GAIL framework. We quantify tail risk by the Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse Imitation Learning (RAIL) algorithm. We observe that the policies learned with RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed RAIL algorithm appears as a potent alternative to GAIL for improved reliability in risk-sensitive applications.
http://arxiv.org/pdf/1707.06658
Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul
cs.LG, cs.AI
Accepted for presentation in Deep Reinforcement Learning Symposium at NIPS 2017
null
cs.LG
20170720
20171129
[ { "id": "1703.01703" }, { "id": "1704.07911" }, { "id": "1708.06374" }, { "id": "1604.07316" }, { "id": "1610.03295" }, { "id": "1606.01540" } ]
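The score-function form in (A.5) above lends itself to a straightforward Monte-Carlo estimate from a batch of sampled trajectories. Below is a minimal numpy sketch of that estimate; the function name, array layout, and the pre-computed score gradients are illustrative assumptions, not the paper's implementation (which plugs the CVaR term into TRPO-style policy updates).

```python
import numpy as np

def cvar_policy_grad_estimate(traj_costs, score_grads, nu, alpha=0.9):
    """Monte-Carlo estimate of the gradient in (A.5).

    traj_costs:  (N,) discounted costs D^pi(xi|c(D)) of N sampled trajectories.
    score_grads: (N, P) per-trajectory score-function gradients
                 grad_theta log P(xi | pi_theta), flattened to P policy parameters.
    nu:          current VaR estimate.
    Returns a (P,) gradient estimate of the CVaR term w.r.t. the policy parameters.
    """
    excess = np.maximum(traj_costs - nu, 0.0)        # (D - nu)^+ per trajectory
    weights = excess / (1.0 - alpha)                 # 1/(1-alpha) * (D - nu)^+
    return (weights[:, None] * score_grads).mean(axis=0)

# Illustrative usage with random placeholders for costs and score gradients.
rng = np.random.default_rng(0)
costs = rng.gamma(shape=2.0, scale=10.0, size=50)    # N = 50 trajectories
grads = rng.normal(size=(50, 8))                     # pretend the policy has 8 parameters
g = cvar_policy_grad_estimate(costs, grads, nu=np.quantile(costs, 0.9))
```

Only trajectories whose cost exceeds the current ν contribute, which is what focuses the update on the tail of the cost distribution.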
1707.06342
48
[3] W. Chen, J. Wilson, S. Tyree, K. Weinberger, and Y. Chen. Compressing neural networks with the hashing trick. In ICML, pages 2285–2294, 2015. 3 [4] M. Denil, B. Shakibi, L. Dinh, and N. de Freitas. Predicting parameters in deep learning. In NIPS, pages 2148–2156, 2013. 2 [5] E. L. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In NIPS, pages 1269–1277, 2014. 2, 3 [6] D. L. Donoho and Y. Tsaig. Fast solution of ℓ1-norm minimization problems when the solution may be sparse. IEEE Trans. Information Theory, 54(11):4789–4812, 2008. 4 [7] R. Girshick. Fast R-CNN. In ICCV, pages 1440–1448, 2015. 1 [8] Y. Gong, L. Liu, M. Yang, and L. Bourdev. Compressing deep convolutional networks using vector quantization. In arXiv preprint arXiv:1412.6115, pages 1–10, 2014. 3
1707.06342#48
ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$ compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.
http://arxiv.org/pdf/1707.06342
Jian-Hao Luo, Jianxin Wu, Weiyao Lin
cs.CV
To appear in ICCV 2017
null
cs.CV
20170720
20170720
[ { "id": "1602.07360" }, { "id": "1610.02391" }, { "id": "1607.03250" } ]
1707.06658
48
• Gradient of L_CVaR w.r.t. ν:

\begin{align}
\nabla_\nu L_{CVaR} &= \nabla_\nu \left[ \nu + \frac{1}{1-\alpha}\, \mathbb{E}_{\xi\sim\pi}\!\left[ \left(D^\pi(\xi|c(D)) - \nu\right)^+ \right] \right] \nonumber\\
&= 1 + \frac{1}{1-\alpha}\, \mathbb{E}_{\xi\sim\pi}\!\left[ \nabla_\nu \left(D^\pi(\xi|c(D)) - \nu\right)^+ \right] \nonumber\\
&= 1 - \frac{1}{1-\alpha}\, \mathbb{E}_{\xi\sim\pi}\!\left[ \mathbf{1}\!\left(D^\pi(\xi|c(D)) > \nu\right) \right] \tag{A.6}
\end{align}

# B Additional figures

[Figure 3: Histogram of costs of 250 trajectories generated by a GAIL-learned policy for Reacher-v1 (x-axis: trajectory-cost, y-axis: fraction of the population). The distribution shows no heavy tail.]

From Table 2 and Figure 2, we observe that RAIL performs as well as GAIL even in cases where the distribution of trajectory costs is not heavy-tailed.
1707.06658#48
RAIL: Risk-Averse Imitation Learning
Imitation learning algorithms learn viable policies by imitating an expert's behavior when reward signals are not available. Generative Adversarial Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies when the expert's behavior is available as a fixed set of trajectories. We evaluate in terms of the expert's cost function and observe that the distribution of trajectory-costs is often more heavy-tailed for GAIL-agents than the expert at a number of benchmark continuous-control tasks. Thus, high-cost trajectories, corresponding to tail-end events of catastrophic failure, are more likely to be encountered by the GAIL-agents than the expert. This makes the reliability of GAIL-agents questionable when it comes to deployment in risk-sensitive applications like robotic surgery and autonomous driving. In this work, we aim to minimize the occurrence of tail-end events by minimizing tail risk within the GAIL framework. We quantify tail risk by the Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse Imitation Learning (RAIL) algorithm. We observe that the policies learned with RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed RAIL algorithm appears as a potent alternative to GAIL for improved reliability in risk-sensitive applications.
http://arxiv.org/pdf/1707.06658
Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul
cs.LG, cs.AI
Accepted for presentation in Deep Reinforcement Learning Symposium at NIPS 2017
null
cs.LG
20170720
20171129
[ { "id": "1703.01703" }, { "id": "1704.07911" }, { "id": "1708.06374" }, { "id": "1604.07316" }, { "id": "1610.03295" }, { "id": "1606.01540" } ]
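Equation (A.6) above gives a simple subgradient for ν: at the α-quantile of trajectory cost it vanishes, so a stochastic update on ν tracks VaR_α. The sketch below shows that update together with the empirical VaR/CVaR estimates used as tail-risk metrics; the helper names and the gamma-distributed placeholder costs are assumptions for illustration only.

```python
import numpy as np

def nu_subgradient(traj_costs, nu, alpha=0.9):
    """Sample estimate of (A.6): d/d_nu [nu + 1/(1-alpha) E[(D - nu)^+]]
       = 1 - 1/(1-alpha) * P(D > nu)."""
    exceed = (traj_costs > nu).astype(float)         # indicator 1(D > nu)
    return 1.0 - exceed.mean() / (1.0 - alpha)

def empirical_var_cvar(traj_costs, alpha=0.9):
    """Empirical VaR_alpha and CVaR_alpha: the alpha-quantile and the mean of the tail above it."""
    var = np.quantile(traj_costs, alpha)
    cvar = traj_costs[traj_costs >= var].mean()
    return var, cvar

# Illustrative usage on placeholder trajectory costs (N = 50, as in the paper's evaluation setup).
costs = np.random.default_rng(1).gamma(shape=2.0, scale=10.0, size=50)
var09, cvar09 = empirical_var_cvar(costs, alpha=0.9)
print(var09, cvar09, nu_subgradient(costs, nu=var09, alpha=0.9))  # subgradient ~ 0 near VaR_0.9
```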
1707.06342
49
[9] S. Han, H. Mao, and W. J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. In ICLR, pages 1–14, 2016. 3 [10] S. Han, J. Pool, J. Tran, and W. Dally. Learning both weights and connections for efficient neural network. In NIPS, pages 1135–1143, 2015. 1, 2, 7 [11] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, pages 770–778, 2016. 1, 2, 5, 7 [12] G. Hinton. Learning distributed representations of concepts. In CogSci, pages 1–12, 1986. 6 [13] G. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. In arXiv preprint arXiv:1207.0580, pages 1–18, 2012. 2
1707.06342#49
ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$ compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.
http://arxiv.org/pdf/1707.06342
Jian-Hao Luo, Jianxin Wu, Weiyao Lin
cs.CV
To appear in ICCV 2017
null
cs.CV
20170720
20170720
[ { "id": "1602.07360" }, { "id": "1610.02391" }, { "id": "1607.03250" } ]
1707.06342
50
[14] H. Hu, R. Peng, Y. W. Tai, and C. K. Tang. Network trimming: A data-driven neuron pruning approach towards efficient deep architectures. In arXiv preprint arXiv:1607.03250, pages 1–9, 2016. 2, 5, 7 [15] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer. SqueezeNet: AlexNet-level accuracy with 50× fewer parameters and <0.5 MB model size. In arXiv preprint arXiv:1602.07360, pages 1–13, 2016. 7 [16] X. Jia, E. Gavves, B. Fernando, and T. Tuytelaars. Guiding the long-short term memory model for image caption generation. In ICCV, pages 2407–2415, 2015. 1 [17] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. In ACM MM, pages 675–678, 2014. 5
1707.06342#50
ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$ compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.
http://arxiv.org/pdf/1707.06342
Jian-Hao Luo, Jianxin Wu, Weiyao Lin
cs.CV
To appear in ICCV 2017
null
cs.CV
20170720
20170720
[ { "id": "1602.07360" }, { "id": "1610.02391" }, { "id": "1607.03250" } ]
1707.06342
51
[18] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, pages 1097–1105, 2012. 1, 5 [19] V. Lebedev and V. Lempitsky. Fast convnets using group-wise brain damage. In CVPR, pages 2554–2564, 2016. 2 [20] Y. LeCun, J. S. Denker, and S. A. Solla. Optimal brain damage. In NIPS, pages 598–605, 1990. 1 [21] H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf. Pruning filters for efficient ConvNets. In ICLR, pages 1–13, 2017. 2, 5, 7 [22] M. Lin, Q. Chen, and S. Yan. Network in network. In arXiv preprint arXiv:1312.4400, pages 1–10, 2013. 5 [23] P. Molchanov, S. Tyree, T. Karras, T. Aila, and J. Kautz. Pruning convolutional neural networks for resource efficient transfer learning. In ICLR, pages 1–17, 2017. 2, 7
1707.06342#51
ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$ compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.
http://arxiv.org/pdf/1707.06342
Jian-Hao Luo, Jianxin Wu, Weiyao Lin
cs.CV
To appear in ICCV 2017
null
cs.CV
20170720
20170720
[ { "id": "1602.07360" }, { "id": "1610.02391" }, { "id": "1607.03250" } ]
1707.06342
52
[24] H. Noh, S. Hong, and B. Han. Learning deconvolution network for semantic segmentation. In ICCV, pages 1520–1528, 2015. 1 [25] A. Quattoni and A. Torralba. Recognizing indoor scenes. In CVPR, pages 413–420, 2009. 8 [26] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and F.-F. Li. ImageNet large scale visual recognition challenge. IJCV, 115(3):211–252, 2015. 5, 6 [27] R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In arXiv preprint arXiv:1610.02391, pages 1–24, 2016. 2 [28] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, pages 1–14, 2015. 1, 2, 5, 6
1707.06342#52
ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$ compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.
http://arxiv.org/pdf/1707.06342
Jian-Hao Luo, Jianxin Wu, Weiyao Lin
cs.CV
To appear in ICCV 2017
null
cs.CV
20170720
20170720
[ { "id": "1602.07360" }, { "id": "1610.02391" }, { "id": "1607.03250" } ]
1707.06342
53
[29] V. Sindhwani, T. Sainath, and S. Kumar. Structured transforms for small-footprint deep learning. In NIPS, pages 3088–3096, 2015. 3 [30] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, pages 1–9, 2015. 5 [31] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD birds-200-2011 dataset. Technical Report CNS-TR-2011-001, California Institute of Technology, 2011. 5, 8 [32] W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li. Learning structured sparsity in deep neural networks. In NIPS, pages 2074–2082, 2016. 1, 2 [33] J. Wu, C. Leng, Y. Wang, Q. Hu, and J. Cheng. Quantized convolutional neural networks for mobile devices. In CVPR, pages 4820–4828, 2016. 2, 3
1707.06342#53
ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$ compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.
http://arxiv.org/pdf/1707.06342
Jian-Hao Luo, Jianxin Wu, Weiyao Lin
cs.CV
To appear in ICCV 2017
null
cs.CV
20170720
20170720
[ { "id": "1602.07360" }, { "id": "1610.02391" }, { "id": "1607.03250" } ]
1707.06203
0
# Imagination-Augmented Agents for Deep Reinforcement Learning Théophane Weber∗ Sébastien Racanière∗ David P. Reichert∗ Lars Buesing Arthur Guez Danilo Rezende Adria Puigdomènech Badia Oriol Vinyals Nicolas Heess Yujia Li Razvan Pascanu Peter Battaglia Demis Hassabis David Silver Daan Wierstra DeepMind # Abstract We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines. # Introduction
1707.06203#0
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06209
0
# Crowdsourcing Multiple Choice Science Questions # Johannes Welbl∗ Computer Science Department University College London [email protected] Nelson F. Liu∗ Paul G. Allen School of Computer Science & Engineering University of Washington [email protected] # Matt Gardner Allen Institute for Artificial Intelligence [email protected] # Abstract
1707.06209#0
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
1707.06203
1
# Introduction A hallmark of an intelligent agent is its ability to rapidly adapt to new circumstances and "achieve goals in a wide range of environments" [1]. Progress has been made in developing capable agents for numerous domains using deep neural networks in conjunction with model-free reinforcement learning (RL) [2–4], where raw observations directly map to values or actions. However, this approach usually requires large amounts of training data and the resulting policies do not readily generalize to novel tasks in the same environment, as it lacks the behavioral flexibility constitutive of general intelligence. Model-based RL aims to address these shortcomings by endowing agents with a model of the world, synthesized from past experience. By using an internal model to reason about the future, here also referred to as imagining, the agent can seek positive outcomes while avoiding the adverse consequences of trial-and-error in the real environment – including making irreversible, poor decisions. Even if the model needs to be learned first, it can enable better generalization across states, remain valid across tasks in the same environment, and exploit additional unsupervised learning signals, thus ultimately leading to greater data efficiency. Another appeal of model-based methods is their ability to scale performance with more computation by increasing the amount of internal simulation.
1707.06203#1
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06209
1
# Matt Gardner Allen Institute for Artificial Intelligence [email protected] # Abstract We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions.1 We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams. 2016; Dhingra et al., 2016; Sordoni et al., 2016; Seo et al., 2016). These recent datasets cover broad and general domains, but progress on these datasets has not translated into similar improvements in more targeted domains, such as science exam QA.
1707.06209#1
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
1707.06203
2
The neural basis for imagination, model-based reasoning and decision making has generated a lot of interest in neuroscience [5–7]; at the cognitive level, model learning and mental simulation have been hypothesized and demonstrated in animal and human learning [8–11]. Its successful deployment in artificial model-based agents however has hitherto been limited to settings where an exact transition model is available [12] or in domains where models are easy to learn – e.g. symbolic environments or low-dimensional systems [13–16]. In complex domains for which a simulator is not available to the agent, recent successes are dominated by model-free methods [2, 17]. In such domains, the performance of model-based agents employing standard planning methods usually suffers from model errors resulting from function approximation [18, 19]. These errors compound during planning, causing over-optimism and poor agent performance. There are currently no planning or model-based methods that are robust against model imperfections which are inevitable in complex domains, thereby preventing them from matching the success of their model-free counterparts.
∗Equal contribution, corresponding authors: {theophane, sracaniere, reichert}@google.com.
1707.06203#2
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06209
2
Science exam QA is a high-level NLP task which requires the mastery and integration of information extraction, reading comprehension and common sense reasoning (Clark et al., 2013; Clark, 2015). Consider, for example, the question “With which force does the moon affect tidal movements of the oceans?”. To solve it, a model must possess an abstract understanding of natural phenomena and apply it to new questions. This transfer of general and domain-specific background knowledge into new scenarios poses a formidable challenge, one which modern statistical techniques currently struggle with. In a recent Kaggle competition addressing 8th grade science questions (Schoenick et al., 2016), the highest scoring systems achieved only 60% on a multiple choice test, with retrieval-based systems far outperforming neural systems. # Introduction
1707.06209#2
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
1707.06203
3
or model-based methods that are robust against model imperfections which are inevitable in complex domains, thereby preventing them from matching the success of their model-free counterparts. We seek to address this shortcoming by proposing Imagination-Augmented Agents, which use approximate environment models by "learning to interpret" their imperfect predictions. Our algorithm can be trained directly on low-level observations with little domain knowledge, similarly to recent model-free successes. Without making any assumptions about the structure of the environment model and its possible imperfections, our approach learns in an end-to-end way to extract useful knowledge gathered from model simulations – in particular not relying exclusively on simulated returns. This allows the agent to benefit from model-based imagination without the pitfalls of conventional model-based planning. We demonstrate that our approach performs better than model-free baselines in various domains including Sokoban. It achieves better performance with less data, even with imperfect models, a significant step towards delivering the promises of model-based RL. # 2 The I2A architecture
1707.06203#3
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06209
3
# Introduction The construction of large, high-quality datasets has been one of the main drivers of progress in NLP. The recent proliferation of datasets for textual entailment, reading comprehension and Question Answering (QA) (Bowman et al., 2015; Hermann et al., 2015; Rajpurkar et al., 2016; Hill et al., 2015; Hewlett et al., 2016; Nguyen et al., 2016) has allowed for advances on these tasks, particularly with neural models (Kadlec et al.,
*Work done while at the Allen Institute for Artificial Intelligence.
1Dataset available at http://allenai.org/data.html
A major bottleneck for applying sophisticated statistical techniques to science QA is the lack of large in-domain training sets. Creating a large, multiple choice science QA dataset is challenging, since crowd workers cannot be expected to have domain expertise, and questions can lack relevance and diversity in structure and content. Furthermore, poorly chosen answer distractors in a multiple choice setting can make questions almost trivial to solve.
1707.06209#3
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
1707.06203
4
# 2 The I2A architecture

[Figure 1: I2A architecture (panels: a) Imagination core, b) Single imagination rollout, c) Full I2A architecture). ˆ· notation indicates imagined quantities. a): the imagination core (IC) predicts the next time step conditioned on an action sampled from the rollout policy ˆπ. b): the IC imagines trajectories of features ˆf = (ˆo, ˆr), encoded by the rollout encoder. c): in the full I2A, aggregated rollout encodings and input from a model-free path determine the output policy π.]
1707.06203#4
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06203
5
In order to augment model-free agents with imagination, we rely on environment models – models that, given information from the present, can be queried to make predictions about the future. We use these environment models to simulate imagined trajectories, which are interpreted by a neural network and provided as additional context to a policy network. In general, an environment model is any recurrent architecture which can be trained in an unsupervised fashion from agent trajectories: given a past state and current action, the environment model predicts the next state and any number of signals from the environment. In this work, we will consider in particular environment models that build on recent successes of action-conditional next-step predictors [20–22], which receive as input the current observation (or history of observations) and current action, and predict the next observation, and potentially the next reward. We roll out the environment model over multiple time steps into the future, by initializing the imagined trajectory with the present time real observation, and subsequently feeding simulated observations into the model.
1707.06203#5
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
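The rollout procedure described in the chunk above (initialize with the current real observation, pick actions with a rollout policy, and feed the model's own predictions back into the model) can be summarized in a few lines. The sketch below is illustrative only; the function and argument names are assumptions, not the actual DeepMind implementation.

```python
def imagine_rollout(env_model, rollout_policy, obs, horizon):
    """Unroll a learned environment model for `horizon` steps.

    env_model(obs, action) -> (predicted_obs, predicted_reward)
    rollout_policy(obs)    -> action
    Returns the list of imagined features [(o_hat, r_hat), ...] for one trajectory.
    """
    features = []
    current = obs                                # start from the real present observation
    for _ in range(horizon):
        action = rollout_policy(current)         # action sampled from the rollout policy pi_hat
        next_obs, reward = env_model(current, action)
        features.append((next_obs, reward))      # f_hat = (o_hat, r_hat)
        current = next_obs                       # feed the prediction back into the model
    return features

def imagine_n_rollouts(env_model, rollout_policy, obs, horizon, n):
    """Produce n imagined trajectories T_hat_1..T_hat_n from the same starting observation."""
    return [imagine_rollout(env_model, rollout_policy, obs, horizon) for _ in range(n)]
```

In the full agent, each imagined trajectory is then passed to a rollout encoder rather than being used directly for planning, which is what lets the policy learn how much to trust the model's predictions.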
1707.06209
5
Example 1
Q: What type of organism is commonly used in preparation of foods such as cheese and yogurt?
1) mesophilic organisms 2) protozoa 3) gymnosperms 4) viruses
Support: Mesophiles grow best in moderate temperature, typically between 25°C and 40°C (77°F and 104°F). Mesophiles are often found living in or on the bodies of humans or other animals. The optimal growth temperature of many pathogenic mesophiles is 37°C (98°F), the normal human body temperature.

Example 2
Q: What phenomenon makes global winds blow northeast to southwest or the reverse in the northern hemisphere and northwest to southeast or the reverse in the southern hemisphere?
1) coriolis effect 2) muon effect 3) centrifugal effect 4) tropical effect

Example 3
Q: Changes from a less-ordered state to a more-ordered state (such as a liquid to a solid) are always what?
1) exothermic 2) unbalanced 3) reactive 4) endothermic

Example 4
Q: What is the least dangerous radioactive decay?
1) alpha decay 2) beta decay 3) gamma decay 4) zeta decay
1707.06209#5
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
1707.06203
6
The actions chosen in each rollout result from a rollout policy ˆπ (explained in Section 3.1). The environment model together with ˆπ constitute the imagination core module, which predicts next time steps (Fig 1a). The imagination core is used to produce n trajectories ˆT1, . . . , ˆTn. Each imagined trajectory ˆT is a sequence of features ( ˆft+1, . . . , ˆft+τ ), where t is the current time, τ the length of the rollout, and ˆft+i the output of the environment model (i.e. the predicted observation and/or reward). Despite recent progress in training better environment models, a key issue addressed by I2As is that a learned model cannot be assumed to be perfect; it might sometimes make erroneous or nonsensical predictions. We therefore do not want to rely solely on predicted rewards (or values predicted [Figure 2: Environment model. The input action is broadcast and concatenated to the observation. A convolutional network transforms this into a pixel-wise probability distribution for the output image, and a distribution for the reward.]
1707.06203#6
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06209
6
bodies of humans or other animals. The optimal growth temperature of many pathogenic mesophiles is 37°C (98°F), the normal human body temperature. Mesophilic organisms have important uses in food preparation, including cheese, yogurt, beer and wine. Supporting passage for Example 2: Without the Coriolis Effect the global winds would blow north to south or south to north. But Coriolis makes them blow northeast to southwest or the reverse in the Northern Hemisphere. The winds blow northwest to southeast or the reverse in the southern hemisphere. Supporting passage for Example 3: Summary: Changes of state are examples of phase changes, or phase transitions. All phase changes are accompanied by changes in the energy of a system. Changes from a more-ordered state to a less-ordered state (such as a liquid to a gas) are endothermic. Changes from a less-ordered state to a more-ordered state (such as a liquid to a solid) are always exothermic. The conversion ... Supporting passage for Example 4: All radioactive decay is dangerous to living things, but alpha decay is the least dangerous.
1707.06209#6
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
1707.06203
7
from predicted states), as is often done in classical planning. Additionally, trajectories may contain information beyond the reward sequence (a trajectory could contain an informative subsequence – for instance solving a subproblem – which did not result in higher reward). For these reasons, we use a rollout encoder E that processes the imagined rollout as a whole and learns to interpret it, i.e. by extracting any information useful for the agent’s decision, or even ignoring it when necessary (Fig 1b). Each trajectory is encoded separately as a rollout embedding ei = E( ˆTi). Finally, an aggregator A converts the different rollout embeddings into a single imagination code cia = A(e1, . . . , en).
1707.06203#7
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06209
7
Figure 1: The first four SciQ training set examples. An instance consists of a question and 4 answer options (the correct one in green). Most instances come with the document used to formulate the question. workers a passage of text and having them ask a question about it. However, unlike previous dataset construction tasks, we (1) need domain-relevant passages and questions, and (2) seek to create multiple choice questions, not direct-answer questions.
1707.06209#7
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
1707.06203
8
The final component of the I2A is the policy module, which is a network that takes the information cia from model-based predictions, as well as the output cmf of a model-free path (a network which only takes the real observation as input; see Fig 1c, right), and outputs the imagination-augmented policy vector π and estimated value V . The I2A therefore learns to combine information from its model-free and imagination-augmented paths; note that without the model-based path, I2As reduce to a standard model-free network [3]. I2As can thus be thought of as augmenting model-free agents by providing additional information from model-based planning, and as having strictly more expressive power than the underlying model-free agent. # 3 Architectural choices and experimental setup # 3.1 Rollout strategy
1707.06203#8
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06209
8
We use a two-step process to solve these problems, first using a noisy classifier to find relevant passages and showing several options to workers to select from when generating a question. Second, we use a model trained on real science exam questions to predict good answer distractors given a question and a correct answer. We use these predictions to aid crowd workers in transforming the question produced from the first step into a multiple choice question. Thus, with our methodology we leverage existing study texts and science questions to obtain new, relevant questions and plausible answer distractors. Consequently, the human intelligence task is shifted away from a purely generative task (which is slow, difficult, expensive and can lack diversity in the outcomes when repeated) and reframed in terms of a selection, modification and validation task (being faster, easier, cheaper and with content variability induced by the suggestions provided).
1707.06209#8
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
1707.06203
9
# 3 Architectural choices and experimental setup # 3.1 Rollout strategy For our experiments, we perform one rollout for each possible action in the environment. The first action in the ith rollout is the ith action of the action set A, and subsequent actions for all rollouts are produced by a shared rollout policy ˆπ. We investigated several types of rollout policies (random, pretrained) and found that a particularly efficient strategy was to distill the imagination-augmented policy into a model-free policy. This distillation strategy consists in creating a small model-free network ˆπ(ot), and adding to the total loss a cross entropy auxiliary loss between the imagination-augmented policy π(ot) as computed on the current observation, and the policy ˆπ(ot) as computed on the same observation. By imitating the imagination-augmented policy, the internal rollouts will be similar to the trajectories of the agent in the real environment; this also ensures that the rollout corresponds to trajectories with high reward. At the same time, the imperfect approximation results in a rollout policy with higher entropy, potentially striking a balance between exploration and exploitation. # I2A components and environment models
1707.06203#9
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06209
9
we call SciQ. Figure 1 shows the first four training examples in SciQ. This dataset has a multiple choice version, where the task is to select the correct answer using whatever background information a system can find given a question and several answer options, and a direct answer version, where given a passage and a question a system must predict the span within the passage that answers the question. With experiments using recent state-of-the-art reading comprehension methods, we show that this is a useful dataset for further research. Interestingly, neural models do not beat simple information retrieval baselines on the multiple choice version of this dataset, leaving room for research on applying neural models in settings where training examples number in the tens of thousands, instead of hundreds of thousands. We also show that using SciQ as an additional source of training data improves performance on real 4th and 8th grade exam questions, proving that our method successfully produces useful in-domain training data. # 2 Related Work The second contribution of this paper is a dataset constructed by following this methodology. With a total budget of $10,415, we collected 13,679 multiple choice science questions, which
1707.06209#9
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
1707.06203
10
# I2A components and environment models In our experiments, the encoder is an LSTM with convolutional encoder which sequentially processes a trajectory T . The features ˆft are fed to the LSTM in reverse order, from ˆft+τ to ˆft+1, to mimic Bellman type backup operations.2 The aggregator simply concatenates the summaries. For the model-free path of the I2A, we chose a standard network of convolutional layers plus one fully connected one [e.g. 3]. We also use this architecture on its own as a baseline agent. Our environment model (Fig. 2) defines a distribution which is optimized by using a negative log-likelihood loss lmodel. We can either pretrain the environment model before embedding it (with frozen weights) within the I2A architecture, or jointly train it with the agent by adding lmodel to the total loss as an auxiliary loss. In practice we found that pre-training the environment model led to faster runtime of the I2A architecture, so we adopted this strategy. 2The choice of forward, backward or bi-directional processing seems to have relatively little impact on the performance of the I2A, however, and should not preclude investigating different strategies.
1707.06203#10
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06209
10
The second contribution of this paper is a dataset constructed by following this methodology. With a total budget of $10,415, we collected 13,679 multiple choice science questions, which Dataset Construction. A lot of recent work has focused on constructing large datasets suitable for training neural models. QA datasets have been assembled based on Freebase (Berant et al., 2013; Bordes et al., 2015), Wikipedia articles (Yang et al., 2015; Rajpurkar et al., 2016; Hewlett et al., 2016) and web search user queries (Nguyen et al., 2016); for reading comprehension (RC) based on news (Hermann et al., 2015; Onishi et al., 2016), children books (Hill et al., 2015) and novels (Paperno et al., 2016), and for recognizing textual entailment based on image captions (Bowman et al., 2015). We continue this line of work and construct a dataset for science exam QA. Our dataset differs from some of the aforementioned datasets in that it consists of natural language questions produced by people, instead of cloze-style questions. It also differs from prior work in that we aim at the narrower domain of science exams and in that we produce multiple choice questions, which are more difficult to generate.
1707.06209#10
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
1707.06203
11
2The choice of forward, backward or bi-directional processing seems to have relatively little impact on the performance of the I2A, however, and should not preclude investigating different strategies. 3 For all environments, training data for our environment model was generated from trajectories of a partially trained standard model-free agent (defined below). We use partially pre-trained agents because random agents see few rewards in some of our domains. However, this means we have to account for the budget (in terms of real environment steps) required to pretrain the data-generating agent, as well as to then generate the data. In the experiments, we address this concern in two ways: by explicitly accounting for the number of steps used in pretraining (for Sokoban), or by demonstrating how the same pretrained model can be reused for many tasks (for MiniPacman). # 3.3 Agent training and baseline agents
1707.06203#11
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06209
11
Science Exam Question Answering. Existing models for multiple-choice science exam QA vary in their reasoning framework and training methodology. A set of sub-problems and solution strategies are outlined in Clark et al. (2013). The method described by Li and Clark (2015) evaluates the coherence of a scene constructed from the question enriched with background KB information, while Sachan et al. (2016) train an entailment model that derives the correct answer from background knowledge aligned with a max-margin ranker. Probabilistic reasoning approaches include Markov logic networks (Khot et al., 2015) and an integer linear program-based model that assembles proof chains over structured knowledge (Khashabi et al., 2016). The Aristo ensemble (Clark et al., 2016) combines multiple reasoning strategies with shallow statistical methods based on lexical co-occurrence and IR, which by themselves provide surprisingly strong baselines. There has not been much work applying neural networks to this task, likely because of the paucity of training data; this paper is an attempt to address this issue by constructing a much larger dataset than was previously available, and we present results of experiments using state-of-the-art reading comprehension techniques on our datasets.
1707.06209#11
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
1707.06203
12
# 3.3 Agent training and baseline agents Using a fixed pretrained environment model, we trained the remaining I2A parameters with asynchronous advantage actor-critic (A3C) [3]. We added an entropy regularizer on the policy π to encourage exploration and the auxiliary loss to distill π into the rollout policy ˆπ as explained above. We distributed asynchronous training over 32 to 64 workers; we used the RMSprop optimizer [23]. We report results after an initial round of hyperparameter exploration (details in Appendix A). Learning curves are averaged over the top three agents unless noted otherwise. A separate hyperparameter search was carried out for each agent architecture in order to ensure optimal performance. In addition to the I2A, we ran the following baseline agents (see Appendix B for architecture details for all agents).
1707.06203#12
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06209
12
Automatic Question Generation. Transforming text into questions has been tackled before, mostly for didactic purposes. Some approaches rely on syntactic transformation templates (Mitkov and Ha, 2003; Heilman and Smith, 2010), while most others generate cloze-style questions. Our first attempts at constructing a science question dataset followed these techniques. We found the methods did not produce high-quality science questions, as there were problems with selecting relevant text, generating reasonable distractors, and formulating coherent questions.
1707.06209#12
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
1707.06203
13
Standard model-free agent. For our main baseline agent, we chose a model-free standard architecture similar to [3], consisting of convolutional layers (2 for MiniPacman, and 3 for Sokoban) followed by a fully connected layer. The final layer, again fully connected, outputs the policy logits and the value function. For Sokoban, we also tested a ‘large’ standard architecture, where we double the number of all feature maps (for convolutional layers) and hidden units (for fully connected layers). The resulting architecture has a slightly larger number of parameters than I2A.
1707.06203#13
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06209
13
Several similarity measures have been employed for selecting answer distractors (Mitkov et al., 2009), including measures derived from WordNet (Mitkov and Ha, 2003), thesauri (Sumita et al., 2005) and distributional context (Pino et al., 2008; Aldabe and Maritxalar, 2010). Domain-specific ontologies (Papasalouros et al., 2008), phonetic or morphological similarity (Pino and Esknazi, 2009; Correia et al., 2010), probability scores for the question context (Mostow and Jang, 2012) and context-sensitive lexical inference (Zesch and Melamud, 2014) have also been used. In contrast to the aforementioned similarity-based selection strategies, our method uses a feature-based ranker to learn plausible distractors from original questions. Several of the above heuristics are used as features in this ranking model. Feature-based distractor generation models (Sakaguchi et al., 2013) have been used in the past by Agarwal and Mannem (2011) for creating biology questions. Our model uses a random forest to rank candidates; it is agnostic towards taking cloze or humanly-generated
1707.06209#13
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
1707.06203
14
Copy-model agent. Aside from having an internal environment model, the I2A architecture is very different from the one of the standard agent. To verify that the information contained in the environment model rollouts contributed to an increase in performance, we implemented a baseline where we replaced the environment model in the I2A with a ‘copy’ model that simply returns the input observation. Lacking a model, this agent does not use imagination, but uses the same architecture, has the same number of learnable parameters (the environment model is kept constant in the I2A), and benefits from the same amount of computation (which in both cases increases linearly with the length of the rollouts). This model effectively corresponds to an architecture where policy logits and value are the final output of an LSTM network with skip connections. # 4 Sokoban experiments We now demonstrate the performance of I2A over baselines in a puzzle environment, Sokoban. We address the issue of dealing with imperfect models, highlighting the strengths of our approach over planning baselines. We also analyze the importance of the various components of the I2A.
1707.06203#14
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06203
15
Sokoban is a classic planning problem, where the agent has to push a number of boxes onto given target locations. Because boxes can only be pushed (as opposed to pulled), many moves are irreversible, and mistakes can render the puzzle unsolvable. A human player is thus forced to plan moves ahead of time. We expect that artificial agents will similarly benefit from internal simulation. Our implementation of Sokoban procedurally generates a new level each episode (see Appendix D.4 for details, Fig. 3 for examples). This means an agent cannot memorize specific puzzles.3 Together with the planning aspect, this makes for a very challenging environment for our model-free baseline agents, which solve less than 60% of the levels after a billion steps of training (details below). We provide videos of agents playing our version of Sokoban online [24]. While the underlying game logic operates in a 10 × 10 grid world, our agents were trained directly on RGB sprite graphics as shown in Fig. 4 (image size 80 × 80 pixels). There are no aspects of I2As that make them specific to grid world games. 3Out of 40 million levels generated, less than 0.7% were repeated. Training an agent on 1 billion frames requires less than 20 million episodes. 4
1707.06203#15
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06209
15
# 3 Creating a science exam QA dataset In this section we present our method for crowdsourcing science exam questions. The method is a two-step process: first we present a set of candidate passages to a crowd worker, letting the worker choose one of the passages and ask a question about it. Second, another worker takes the question and answer generated in the first step and produces three distractors, aided by a model trained to predict good answer distractors. The end result is a multiple choice science question, consisting of a question q, a passage p, a correct answer a*, and a set of distractors, or incorrect answer options, {a’}. Some example questions are shown in Figure 1. The remainder of this section elaborates on the two steps in our question generation process. # 3.1 First task: producing in-domain questions Conceiving an original question from scratch in a specialized domain is surprisingly difficult; performing the task repeatedly involves the danger of
1707.06209#15
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
1707.06203
16
Figure 3: Random examples of procedurally generated Sokoban levels. The player (green sprite) needs to push all 4 boxes onto the red target squares to solve a level, while avoiding irreversible mistakes. Our agents receive sprite graphics (shown above) as observations. # I2A performance vs. baselines on Sokoban Figure 4 (left) shows the learning curves of the I2A architecture and various baselines explained throughout this section. First, we compare I2A (with rollouts of length 5) against the standard model-free agent. I2A clearly outperforms the latter, reaching a performance of 85% of levels solved vs. a maximum of under 60% for the baseline. The baseline with increased capacity reaches 70% - still significantly below I2A. Similarly, for Sokoban, I2A far outperforms the copy-model. [Figure 4: Left: Sokoban performance, fraction of levels solved vs. environment steps (1e9), for I2A, standard, standard (large), copy-model and no-reward agents. Right: unroll depth analysis, fraction of levels solved vs. environment steps for different unroll depths.]
1707.06203#16
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06209
16
Conceiving an original question from scratch in a specialized domain is surprisingly difficult; performing the task repeatedly involves the danger of falling into specific lexical and structural patterns. To enforce diversity in question content and lexical expression, and to inspire relevant in-domain questions, we rely on a corpus of in-domain text about which crowd workers ask questions. However, not all text in a large in-domain corpus, such as a textbook, is suitable for generating questions. We use a simple filter to narrow down the selection to paragraphs likely to produce reasonable questions.
1707.06209#16
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
1707.06209
17
Base Corpus. Choosing a relevant, in-domain base corpus to inspire the questions is of crucial importance for the overall characteristics of the dataset. For science questions, the corpus should consist of topics covered in school exams, but not be too linguistically complex, specific, or loaded with technical detail (e.g., scientific papers). We observed that articles retrieved from web searches for science exam keywords (e.g. “animal” and “food”) yield a significant proportion of commercial or otherwise irrelevant documents and did not consider this further. Articles from science-related categories in Simple Wikipedia are more targeted and factual, but often state highly specific knowledge (e.g., “Hoatzin can reach 25 inches in length and 1.78 pounds of weight.”).
1707.06209#17
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
1707.06203
18
Since using imagined rollouts is helpful for this task, we investigate how the length of individual rollouts affects performance. The latter was one of the hyperparameters we searched over. A breakdown by number of unrolling/imagination steps in Fig. 4 (right) shows that using longer rollouts, while not increasing the number of parameters, increases performance: 3 unrolling steps improves speed of learning and top performance significantly over 1 unrolling step, 5 outperforms 3, and as a test for significantly longer rollouts, 15 outperforms 5, reaching above 90% of levels solved. However, in general we found diminishing returns with using I2A with longer rollouts. It is noteworthy that 5 steps is relatively small compared to the number of steps taken to solve a level, for which our best agents need about 50 steps on average. This implies that even such short rollouts can be highly informative. For example, they allow the agent to learn about moves it cannot recover from (such as pushing boxes against walls, in certain contexts). Because I2A with rollouts of length 15 are significantly slower, in the rest of this section, we choose rollouts of length 5 to be our canonical I2A architecture.
1707.06203#18
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06209
18
We chose science study textbooks as our base corpus because they are directly relevant and linguistically tailored towards a student audience. They contain verbal descriptions of general natural principles instead of highly specific example features of particular species. While the number of resources is limited, we compiled a list of 28 books from various online learning resources, including CK-12 and OpenStax, who share this material under a Creative Commons License. The books are about biology, chemistry, earth science and physics and span elementary level to college introductory material. A full list of the books we used can be found in the appendix. Document Filter. We designed a rule-based document filter model into which individual paragraphs of the base corpus are fed. The system classifies individual sentences and accepts a paragraph if a minimum number of sentences is accepted. With a small manually annotated dataset of sentences labelled as either relevant or irrelevant, the filter was designed iteratively by adding filter rules to first improve precision and then re- [Footnotes: 2 www.ck12.org; 3 www.openstax.org]
1707.06209#18
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
1707.06203
19
In terms of data efficiency, it should be noted that the environment model in the I2A was pretrained (see Section 3.2). We conservatively measured the total number of frames needed for pretraining to be lower than 1e8. Thus, even taking pretraining into account, I2A outperforms the baselines after seeing about 3e8 frames in total (compare again Fig. 4 (left)). Of course, data efficiency is even better if the environment model can be reused to solve multiple tasks in the same environment (Section 5). # 4.2 Learning with imperfect models One of the key strengths of I2As is being able to handle learned and thus potentially imperfect environment models. However, for the Sokoban task, our learned environment models actually perform quite well when rolling out imagined trajectories. To demonstrate that I2As can deal with less reliable predictions, we ran another experiment where the I2A used an environment model that had shown much worse performance (due to a smaller number of parameters), with strong artifacts accumulating over iterated rollout predictions (Fig. 5, left). As Fig. 5 (right) shows, even with such a
1707.06203#19
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06209
19
call on a held-out validation set. The final filter included lexical, grammatical, pragmatical and complexity based rules. Specifically, sentences were filtered out if they i) were a question or exclamation ii) had no verb phrase iii) contained modal verbs iv) contained imperative phrases v) contained demonstrative pronouns vi) contained personal pronouns other than third-person vii) began with a pronoun viii) contained first names ix) had less than 6 or more than 18 tokens or more than 2 commas x) contained special characters other than punctuation xi) had more than three tokens beginning uppercase xii) mentioned a graph, table or web link xiii) began with a discourse marker (e.g. ‘Nonetheless’) xiv) contained absolute wording (e.g. ‘never’, ‘nothing’, ‘definitely’) xv) contained instructional vocabulary (‘teacher’, ‘worksheet’, . . . ). Besides the last, these rules are all generally applicable in other domains to identify simple declarative statements in a corpus.
1707.06209#19
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
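A minimal sketch of how a few of the sentence-level filter rules listed in the chunk above could be implemented. The word lists, the acceptance threshold and the regex-based tokenization are illustrative assumptions, not the authors' actual code.

```python
import re

# Illustrative word lists; the paper's actual lists are not published in this excerpt.
MODALS = {"can", "could", "may", "might", "must", "shall", "should", "will", "would"}
DEMONSTRATIVES = {"this", "that", "these", "those"}
ABSOLUTES = {"never", "nothing", "definitely", "always", "nobody"}
INSTRUCTIONAL = {"teacher", "worksheet", "quiz", "homework"}

def accept_sentence(sentence: str) -> bool:
    """Return True if a sentence looks like a simple declarative statement."""
    tokens = re.findall(r"[A-Za-z0-9']+", sentence)
    lowered = set(t.lower() for t in tokens)

    if sentence.rstrip().endswith(("?", "!")):      # rule i): question or exclamation
        return False
    if not 6 <= len(tokens) <= 18:                  # rule ix): token-length bounds
        return False
    if sentence.count(",") > 2:                     # rule ix): too many commas
        return False
    if MODALS & lowered:                            # rule iii): modal verbs
        return False
    if DEMONSTRATIVES & lowered:                    # rule v): demonstrative pronouns
        return False
    if ABSOLUTES & lowered:                         # rule xiv): absolute wording
        return False
    if INSTRUCTIONAL & lowered:                     # rule xv): instructional vocabulary
        return False
    if sum(t[0].isupper() for t in tokens) > 3:     # rule xi): too many capitalised tokens
        return False
    if re.search(r"https?://|www\.", sentence):     # rule xii): web links
        return False
    return True

def accept_paragraph(paragraph: str, min_accepted: int = 2) -> bool:
    """Accept a paragraph if a minimum number of its sentences pass the sentence filter
    (the threshold of 2 is an assumption)."""
    sentences = re.split(r"(?<=[.!?])\s+", paragraph.strip())
    return sum(accept_sentence(s) for s in sentences) >= min_accepted
```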
1707.06203
20
clearly flawed environment model, I2A performs similarly well. This implies that I2As can learn to ignore the latter parts of the rollout as errors accumulate, but still use initial predictions when errors are less severe. Finally, note that in our experiments, surprisingly, the I2A agent with poor model ended up outperforming the I2A agent with good model. We posit this was due to random initialization, though we cannot exclude the noisy model providing some form of regularization — more work will be required to investigate this effect. [Figure 5 plot: Sokoban good vs. bad models; fraction of levels solved vs. environment steps for I2A and MC search with the good and the poor model, for different numbers of rollout steps.] Figure 5: Experiments with a noisy environment model. Left: each row shows an example 5-step rollout after conditioning on an environment observation. Errors accumulate and lead to various artefacts, including missing or duplicate sprites. Right: comparison of Monte-Carlo (MC) search and I2A when using either the accurate or the noisy model for rollouts.
1707.06203#20
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06209
20
Question Formulation Task. To actually generate in-domain QA pairs, we presented the filtered, in-domain text to crowd workers and had them ask a question that could be answered by the presented passage. Although most undesirable paragraphs had been filtered out beforehand, a non-negligible proportion of irrelevant documents remained. To circumvent this problem, we showed each worker three textbook paragraphs and gave them the freedom to choose one or to reject all of them if irrelevant. Once a paragraph had been chosen, it was not reused to formulate more questions about it. We further specified desirable characteristics of science exam questions: no yes/no questions, not requiring further context, querying general principles rather than highly specific facts, question length between 6-30 words, answer length up to 3 words (preferring shorter), no ambiguous questions, answers clear from paragraph chosen. Examples for both desirable and undesirable questions were given, with explanations for why they were good or bad examples. Furthermore we encouraged workers to give feedback, and a contact email was provided to address upcoming questions
1707.06209#20
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
1707.06203
21
Learning a rollout encoder is what enables I2As to deal with imperfect model predictions. We can further demonstrate this point by comparing them to a setup without a rollout encoder: as in the classic Monte-Carlo search algorithm of Tesauro and Galperin [25], we now explicitly estimate the value of each action from rollouts, rather than learning an arbitrary encoding of the rollouts, as in I2A. We then select actions according to those values. Specifically, we learn a value function V from states, and, using a rollout policy π̂, sample a trajectory rollout for each initial action, and compute the corresponding estimated Monte Carlo return Σ_{t=0..τ} γ^t r_t^a + γ^τ V(x_τ^a), where ((x_t^a, r_t^a))_{t=0..τ} comes from a trajectory initialized with action a. Action a is chosen with probability proportional to exp((Σ_{t=0..τ} γ^t r_t^a + γ^τ V(x_τ^a))/T), where T is a learned temperature. This can be thought of as a form of I2A with a fixed summarizer (which computes returns), no model-free path, and very simple policy head. In this architecture, only V, π̂ and T are learned.⁴
1707.06203#21
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
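A small sketch of the encoder-free baseline described in the chunk above: estimate each action's value with a model rollout return and sample the action from a softmax with a temperature. The model, value function and rollout-policy arguments are stand-in stubs, and treating the temperature as a fixed constant (rather than a learned parameter) is a simplifying assumption.

```python
import numpy as np

def rollout_return(model, value_fn, rollout_policy, state, action, tau=5, gamma=0.99):
    """Estimate the return of taking `action` in `state` by unrolling the (learned)
    environment model for `tau` steps and bootstrapping with the value function."""
    total, discount = 0.0, 1.0
    x, a = state, action
    for _ in range(tau):
        x, r = model(x, a)          # imagined next state and reward
        total += discount * r
        discount *= gamma
        a = rollout_policy(x)       # continue the rollout with the rollout policy
    return total + discount * value_fn(x)

def select_action(model, value_fn, rollout_policy, state, n_actions, temperature=1.0, rng=np.random):
    """Sample an action with probability proportional to exp(return / temperature)."""
    returns = np.array([rollout_return(model, value_fn, rollout_policy, state, a)
                        for a in range(n_actions)])
    logits = returns / temperature
    probs = np.exp(logits - logits.max())   # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(n_actions, p=probs)
```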
1707.06209
21
with explanations for why they were good or bad examples. Furthermore we encouraged workers to give feedback, and a contact email was provided to address upcoming questions directly; multiple crowdworkers made use of this opportunity. The task was advertised on Amazon Mechanical Turk, requiring Master’s status for the crowdworkers, and paying a compensation of 0.30$ per HIT. A total of 175 workers participated in the whole crowdsourcing
1707.06209#21
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
1707.06203
22
We ran this rollout encoder-free agent on Sokoban with both the accurate and the noisy environment model. We chose the length of the rollout to be optimal for each environment model (from the same range as for I2A, i.e. from 1 to 5). As can be seen in Fig. 5 (right),⁵ when using the high accuracy environment model, the performance of the encoder-free agent is similar to that of the baseline standard agent. However, unlike I2A, its performance degrades catastrophically when using the poor model, showcasing the susceptibility to model misspecification. # 4.3 Further insights into the workings of the I2A architecture
1707.06203#22
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06209
22
project. In 12.1% of the cases all three documents were rejected, much fewer than if a single document had been presented (assuming the same proportion of relevant documents). Thus, besides being more economical, proposing several documents reduces the risk of generating irrelevant questions and in the best case helps match a crowdworker’s individual preferences. # 3.2 Second task: selecting distractors Generating convincing answer distractors is of great importance, since bad distractors can make a question trivial to solve. When writing science questions ourselves, we found that finding reasonable distractors was the most time-consuming part overall. Thus, we support the process in our crowdsourcing task with model-generated answer distractor suggestions. This primed the workers with relevant examples, and we allowed them to use the suggested distractors directly if they were good enough. We next discuss characteristics of good answer distractors, propose and evaluate a model for suggesting such distractors, and describe the crowdsourcing task that uses them.
1707.06209#22
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
1707.06203
23
# 4.3 Further insights into the workings of the I2A architecture So far, we have studied the role of the rollout encoder. To show the importance of various other components of the I2A, we performed additional control experiments. Results are plotted in Fig. 4 (left) for comparison. First, I2A with the copy model (Section 3.3) performs far worse, demonstrating that the environment model is indeed crucial. Second, we trained an I2A where the environment model was predicting no rewards, only observations. This also performed worse. However, after much longer training (3e9 steps), these agents did recover performance close to that of the original I2A (see Appendix D.2), which was never the case for the baseline agent even with that many steps. Hence, reward prediction is helpful but not absolutely necessary in this task, and imagined observations alone are informative enough to obtain high performance on Sokoban. Note this is in contrast to many classical planning and model-based reinforcement learning methods, which often rely on reward prediction. ⁴The rollout policy is still learned by distillation from the output policy. ⁵Note: the MC curves in Fig. 5 only used a single agent rather than averages. # 4.4 Imagination efficiency and comparison with perfect-model planning methods
1707.06203#23
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06209
23
Distractor Characteristics. Multiple choice science questions with nonsensical incorrect answer options are not interesting as a task to study, nor are they useful for training a model to do well on real science exams, as the model would not need to do any kind of science reasoning to answer the training questions correctly. The difficulty in generating a good multiple choice question, then, lies not in identifying expressions which are false answers to q, but in generating expressions which are plausible false answers. Concretely, besides being false answers, good distractors should thus: • be grammatically consistent: for the question “When animals use energy, what is always produced?” a noun phrase is expected. • be consistent with respect to abstract properties: if the correct answer belongs to a certain category (e.g., chemical elements) good distractors likely should as well. • be consistent with the semantic context of the question: a question about animals and energy should not have newspaper or bingo as distractors. Distractor Model Overview. We now introduce a model which generates plausible answer distractors and takes into account the above criteria. On a basic level, it ranks candidates from a large collection C of possible distractors and selects the highest scoring items. Its ranking function r : (q, a*, a′) ↦ s_{a′} ∈ [0, 1] (1)
1707.06209#23
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
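A minimal sketch of the ranking step described above: score every candidate in C with a trained classifier and keep the top-scoring ones as distractor suggestions. The feature function and the fitted classifier are assumed to exist; their names here are illustrative, and k=6 matches the number of suggestions later shown to workers.

```python
import heapq

def suggest_distractors(question, answer, candidates, classifier, featurize, k=6):
    """Rank candidate expressions by the classifier's 'good distractor' probability
    and return the k highest-scoring (score, candidate) pairs."""
    scored = []
    for cand in candidates:
        features = featurize(question, answer, cand)        # phi(q, a*, a')
        score = classifier.predict_proba([features])[0][1]  # P(a' is good | q, a*)
        scored.append((score, cand))
    return heapq.nlargest(k, scored)
```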
1707.06203
24
# 4.4 Imagination efficiency and comparison with perfect-model planning methods
Table 1: Imagination efficiency of various architectures (average number of environment model simulation steps needed to solve a level).
I2A@87: ∼ 1400; I2A MC search@95: ∼ 4000; MCTS@87: ∼ 25000; MCTS@95: ∼ 100000; Random search: ∼ millions.
Table 2: Generalization of I2A to environments with different number of boxes.
Boxes:        1     2   3   4   5   6   7
I2A (%):      99.5  97  92  87  77  66  53
Standard (%): 97    87  72  60  47  32  23
1707.06203#24
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06209
24
r : (q, a*, a′) ↦ s_{a′} ∈ [0, 1] (1) produces a confidence score s_{a′} for whether a′ ∈ C is a good distractor in the context of question q and correct answer a*. For r we use the scoring function s_{a′} = P(a′ is good | q, a*) of a binary classifier which distinguishes plausible (good) distractors from random (bad) distractors based on features φ(q, a*, a′). For classification, we train r on actual in-domain questions with observed false answers as the plausible (good) distractors, and random expressions as negative examples, sampled in equal proportion from C. As classifier we chose a random forest (Breiman, 2001), because of its robust performance in small and mid-sized data settings and its power to incorporate nonlinear feature interactions, in contrast, e.g., to logistic regression.
1707.06209#24
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
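A sketch of how the classifier's training data could be assembled as described above: observed distractors from real exam questions serve as positive examples, and expressions drawn at random from the candidate pool C as negatives, in equal proportion. Variable names and the feature function are illustrative stand-ins.

```python
import random

def build_training_set(questions, candidate_pool, featurize, rng=random.Random(0)):
    """questions: iterable of (q, correct_answer, observed_distractors) triples."""
    X, y = [], []
    for q, a_star, distractors in questions:
        for d in distractors:                 # real distractors -> positive examples
            X.append(featurize(q, a_star, d))
            y.append(1)
        for _ in distractors:                 # equally many random negatives from C
            neg = rng.choice(candidate_pool)
            X.append(featurize(q, a_star, neg))
            y.append(0)
    return X, y
```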
1707.06203
25
Table 1: Imagination efficiency of various architectures. Table 2: Generalization of I2A to environments with different number of boxes. In previous sections, we illustrated that I2As can be used to efficiently solve planning problems and can be robust in the face of model misspecification. Here, we ask a different question – if we do assume a nearly perfect model, how does I2A compare to competitive planning methods? Beyond raw performance we focus particularly on the efficiency of planning, i.e. the number of imagination steps required to solve a fixed ratio of levels. We compare our regular I2A agent to a variant of Monte Carlo Tree Search (MCTS), which is a modern guided tree search algorithm [12, 26]. For our MCTS implementation, we aimed to have a strong baseline by using recent ideas: we include transposition tables [27], and evaluate the returns of leaf nodes by using a value network (in this case, a deep residual value network trained with the same total amount of data as I2A; see appendix D.3 for further details).
1707.06203#25
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
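A generic, simplified sketch of the kind of MCTS baseline described above: a UCT-style search with a transposition table and value-network evaluation at leaf states. This is not the paper's exact implementation; the deterministic model interface, the hashable state assumption and all hyperparameters are illustrative.

```python
import math
from collections import defaultdict

class SimpleMCTS:
    """UCT-style search; statistics are stored in a transposition table keyed by state,
    and unexpanded leaves are evaluated with a value network instead of random rollouts."""

    def __init__(self, model, value_net, n_actions, c_uct=1.0, gamma=0.99, max_depth=10):
        self.model = model            # model(state, action) -> (next_state, reward), assumed deterministic
        self.value_net = value_net    # value_net(state) -> scalar value estimate
        self.n_actions = n_actions
        self.c_uct, self.gamma, self.max_depth = c_uct, gamma, max_depth
        self.visited = set()
        self.N = defaultdict(lambda: [0] * n_actions)    # visit counts per (state, action)
        self.Q = defaultdict(lambda: [0.0] * n_actions)  # mean returns per (state, action)

    def _simulate(self, state, depth):
        key = hash(state)                                # assumes a hashable state representation
        if depth == 0 or key not in self.visited:
            self.visited.add(key)
            return self.value_net(state)                 # bootstrap leaf with the value network
        total_visits = sum(self.N[key]) + 1
        def uct(a):
            bonus = self.c_uct * math.sqrt(math.log(total_visits) / (1 + self.N[key][a]))
            return self.Q[key][a] + bonus
        a = max(range(self.n_actions), key=uct)
        next_state, reward = self.model(state, a)        # one step of the environment model
        ret = reward + self.gamma * self._simulate(next_state, depth - 1)
        self.N[key][a] += 1
        self.Q[key][a] += (ret - self.Q[key][a]) / self.N[key][a]
        return ret

    def plan(self, root_state, n_simulations=100):
        for _ in range(n_simulations):
            self._simulate(root_state, self.max_depth)
        return max(range(self.n_actions), key=lambda a: self.N[hash(root_state)][a])
```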
1707.06209
25
Distractor Model Features. This section describes the features φ(q, a*, a′) used by the distractor ranking model. With these features, the distractor model can learn characteristics of real distractors from original questions and will suggest those distractors that it deems the most realistic for a question. The following features of question q, correct answer a* and a tentative distractor expression a′ were used:
• bags of GloVe embeddings for q, a* and a′;
• an indicator for POS-tag consistency of a* and a′;
• singular/plural consistency of a* and a′;
• log. avg. word frequency in a* and a′;
• Levenshtein string edit distance between a* and a′;
• suffix consistency of a* and a′ (firing e.g. for (regeneration, exhaustion));
• token overlap indicators for q, a* and a′;
• token and character length for a* and a′ and similarity therein;
• indicators for numerical content in q, a* and a′ and consistency therein;
• indicators for units of measure in q, a* and a′, and for co-occurrence of the same unit;
1707.06209#25
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
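A sketch of how a few of the listed features φ(q, a*, a′) could be computed. Only a small subset of the paper's feature set is shown; the embedding lookup and word-frequency function are stand-ins, and the plural-consistency heuristic is a crude simplification.

```python
import numpy as np

def edit_distance(s, t):
    """Levenshtein string edit distance via dynamic programming."""
    d = np.zeros((len(s) + 1, len(t) + 1), dtype=int)
    d[:, 0] = np.arange(len(s) + 1)
    d[0, :] = np.arange(len(t) + 1)
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1,
                          d[i - 1, j - 1] + (s[i - 1] != t[j - 1]))
    return int(d[len(s), len(t)])

def featurize(q, a_star, a_prime, embed, word_freq):
    """embed(text) -> averaged word vector; word_freq(token) -> corpus frequency."""
    feats = []
    feats.extend(embed(q))                                  # bag of embeddings for q
    feats.extend(embed(a_star))                             # ... for a*
    feats.extend(embed(a_prime))                            # ... for a'
    feats.append(float(a_star.endswith("s") == a_prime.endswith("s")))  # crude plural consistency
    feats.append(np.log(1 + np.mean([word_freq(t) for t in a_prime.split()])))  # log avg. frequency
    feats.append(edit_distance(a_star, a_prime))            # Levenshtein distance
    feats.append(float(len(set(q.split()) & set(a_prime.split())) > 0))  # token overlap with q
    feats.append(abs(len(a_star) - len(a_prime)))           # character-length difference
    return np.array(feats, dtype=float)
```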
1707.06203
26
Running MCTS on Sokoban, we find that it can achieve high performance, but at a cost of a much higher number of necessary environment model simulation steps: MCTS reaches the I2A performance of 87% of levels solved when using 25k model simulation steps on average to solve a level, compared to 1.4k environment model calls for I2A. Using even more simulation steps, MCTS performance increases further, e.g. reaching 95% with 100k steps.
1707.06203#26
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06209
26
• indicators for units of measure in q, a* and a′, and for co-occurrence of the same unit;
• WordNet-based hypernymy indicators between tokens in q, a* and a′, in both directions and potentially via two steps;
• indicators for 2-step connections between entities in a* and a′ via a KB based on OpenIE triples (Mausam et al., 2012) extracted from pages in Simple Wikipedia about anatomical structures;
• indicators for shared WordNet-hyponymy of a* and a′ to one of the concepts most frequently generalising all three question distractors in the training set (e.g. element, organ, organism).
The intuition for the knowledge-base link and hypernymy indicator features is that they can reveal sibling structures of a* and a′ with respect to a shared property or hypernym. For example, if the correct answer a* is heart, then a plausible distractor a′ like liver would share with a* the hyponymy relation to organ in WordNet.
1707.06209#26
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
1707.06203
27
If we assume access to a high-accuracy environment model (including the reward prediction), we can also push I2A performance further, by performing basic Monte-Carlo search with a trained I2A for the rollout policy: we let the agent play whole episodes in simulation (where I2A itself uses the environment model for short-term rollouts, hence corresponding to using a model-within-a-model), and execute a successful action sequence if found, up to a maximum number of retries; this is reminiscent of nested rollouts [28]. With a fixed maximum of 10 retries, we obtain a score of 95% (up from 87% for the I2A itself). The total average number of model simulation steps needed to solve a level, including running the model in the outer loop, is now 4k, again much lower than the corresponding MCTS run with 100k steps. Note again, this approach requires a nearly perfect model; we don’t expect I2A with MC search to perform well with approximate models. See Table 1 for a summary of the imagination efficiency for the different methods. # 4.5 Generalization experiments
1707.06203#27
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
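A small sketch of the outer-loop Monte-Carlo search described above: whole episodes are played inside the environment model with the trained I2A as the acting policy, and a simulated action sequence is returned as soon as one solves the level, up to a fixed number of retries. The model and policy interfaces are stand-in stubs, and the episode-length cap is an assumption.

```python
def mc_search_with_i2a(env_model, i2a_policy, initial_state, max_retries=10, max_steps=120):
    """Search for a winning action sequence by simulating whole episodes in the model."""
    for _ in range(max_retries):
        state, actions = initial_state, []
        for _ in range(max_steps):
            action = i2a_policy(state)              # I2A acts (using its own short rollouts internally)
            state, _, solved = env_model(state, action)
            actions.append(action)
            if solved:
                return actions                      # execute this sequence in the real environment
    return None                                     # fall back to acting with the I2A directly
```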
1707.06209
27
Model Training. We first constructed a large candidate distractor set C whose items were to be ranked by the model. C contained 488,819 expressions, consisting of (1) the 400K items in the GloVe vocabulary (Pennington et al., 2014); (2) answer distractors observed in training questions; (3) a list of noun phrases from Simple Wikipedia articles about body parts; (4) a noun vocabulary of ∼6000 expressions extracted from primary school science texts. In examples where a∗ consisted of multiple tokens, we added to C any expression that could be obtained by exchanging one unigram in a∗ with another unigram from C. The model was then trained on a set of 3705 science exam questions (4th and 8th grade), separated into 80% training questions and 20% validation questions. Each question came with four answer options, providing three good distractor examples. We used scikit-learn’s implementation of random forests with default parameters. We used 500 trees and enforced at least 4 samples per tree leaf. Distractor Model Evaluation. Our model achieved 99.4% training and 94.2% validation accuracy overall. Example predictions of the distractor model are shown in Table 1. Qualitatively, the predictions appear acceptable in most cases, though the quality is not high enough to use
1707.06209#27
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
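A sketch of the classifier setup with the hyperparameters quoted above (500 trees, at least 4 samples per leaf, an 80/20 train/validation split). scikit-learn is the library named in the chunk; the rest of the wiring is illustrative, and splitting by example rather than by question is a simplification of the paper's setup.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def train_distractor_classifier(X, y):
    """X: feature vectors phi(q, a*, a'); y: 1 for observed distractors, 0 for random negatives."""
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=500, min_samples_leaf=4, random_state=0)
    clf.fit(X_tr, y_tr)
    print("train acc:", accuracy_score(y_tr, clf.predict(X_tr)))
    print("valid acc:", accuracy_score(y_va, clf.predict(X_va)))
    return clf
```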
1707.06203
28
# 4.5 Generalization experiments Lastly, we probe the generalization capabilities of I2As, beyond handling random level layouts in Sokoban. Our agents were trained on levels with 4 boxes. Table 2 shows the performance of I2A when such an agent was tested on levels with different numbers of boxes, and that of the standard model-free agent for comparison. We found that I2A generalizes well; at 7 boxes, the I2A agent is still able to solve more than half of the levels, nearly as many as the standard agent on 4 boxes. # 5 Learning one model for many tasks in MiniPacman In our final set of experiments, we demonstrate how a single model, which provides the I2A with a general understanding of the dynamics governing an environment, can be used to solve a collection of different tasks. We designed a simple, light-weight domain called MiniPacman, which allows us to easily define multiple tasks in an environment with shared state transitions and which enables us to do rapid experimentation. In MiniPacman (Fig. 6, left), the player explores a maze that contains food while being chased by ghosts. The maze also contains power pills; when eaten, for a fixed number of steps, the player moves faster, and the ghosts run away and can be eaten. These dynamics are common to all tasks. Each task
1707.06203#28
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06209
28
them directly without additional filtering by crowd workers. In many cases the distractor is semantically related, but does not have the correct type (e.g., in column 1, “nutrient” and “soil” are not elements). Some predictions are misaligned in their level of specificity (e.g. “frogs” in column 3), and multiword expressions were more likely to be unrelated or ungrammatical despite the inclusion of part of speech features. Even where the predicted distractors are not fully coherent, showing them to a crowd worker still has a positive priming effect, helping the worker generate good distractors either by providing nearly-good-enough candidates, or by forcing the worker to think why a suggestion is not a good distractor for the question. Distractor Selection Task. To actually generate a multiple choice science question, we show the result of the first task, a (q, a∗) pair, to a crowd worker, along with the top six distractors suggested from the previously described model. The goal of this task is two-fold: (1) quality control (validating a previously generated (q, a∗) pair), and (2) validating the predicted distractors or writing new ones if necessary.
1707.06209#28
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
1707.06203
29
is defined by a vector w_rew ∈ R^5, associating a reward to each of the following five events: moving, eating food, eating a power pill, eating a ghost, and being eaten by a ghost. We consider five different reward vectors inducing five different tasks. Empirically we found that the reward schemes were sufficiently different to lead to very different high-performing policies⁶ (for more details on the game and tasks, see appendix C). To illustrate the benefits of model-based methods in this multi-task setting, we train a single environment model to predict both observations (frames) and events (as defined above, e.g. "eating a ghost"). Note that the environment model is effectively shared across all tasks, so that the marginal cost of learning the model is nil. During training and testing, the I2As have access to the frame and reward predictions generated by the model; the latter was computed from model event predictions and the task reward vector w_rew. As such, the reward vector w_rew can be interpreted as an ‘instruction’ about which task to solve in the same environment [cf. the Frostbite challenge of 11]. For a fair comparison, we also provide all baseline agents with the event variable as input.⁷
1707.06203#29
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
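A small sketch of how a task-specific reward can be derived from the model's event predictions and a reward vector w_rew over the five event types, as described above. The example reward vectors are purely illustrative assumptions; the actual task definitions are in the paper's appendix C.

```python
import numpy as np

# The five MiniPacman event types, in a fixed order (as listed in the chunk above).
EVENTS = ["move", "eat_food", "eat_power_pill", "eat_ghost", "was_eaten"]

def task_reward(event_indicators, w_rew):
    """Scalar reward for one transition: dot product of the (predicted) event
    indicators or probabilities with the task's reward vector w_rew in R^5."""
    return float(np.dot(event_indicators, w_rew))

# Purely illustrative reward vectors, not the paper's actual values.
w_regular = np.array([0.0, 1.0, 2.0, 5.0, -10.0])
w_avoid = np.array([0.1, -0.1, -5.0, -10.0, -20.0])

print(task_reward(np.array([1, 1, 0, 0, 0]), w_regular))  # moved and ate food -> 1.0
```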
1707.06209
29
The first instruction was to judge whether the question could appear in a school science exam; questions could be marked as ungrammatical, having a false answer, being unrelated to science or requiring very specific background knowledge. The total proportion of questions passing was 92.8%. The second instruction was to select up to two of the six suggested distractors, and to write at least one distractor by themselves such that there is a total of three. The requirement for the worker to generate one of their own distractors, instead of being allowed to select three predicted distractors, was added after an initial pilot of the task, as we found that it forced workers to engage more with the task and resulted in higher quality distractors. We gave examples of desirable and undesirable distractors and the opportunity to provide feedback, as before. We advertised the task on Amazon Mechanical Turk, paying 0.2$ per HIT, again requiring AMT Master’s status. On average, crowd workers found the predicted distractors good enough to include in the final question around half of the time, resulting in 36.1% of the distractors in the final
1707.06209#29
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
1707.06203
30
We trained baseline agents and I2As separately on each task. Results in Fig. 6 (right) indicate the benefit of the I2A architecture, outperforming the standard agent in all tasks, and the copy-model baseline in all but one task. Moreover, we found that the performance gap between I2As and baselines is particularly high for tasks 4 & 5, where rewards are particularly sparse, and where the anticipation of ghost dynamics is especially important. We posit that the I2A agent can leverage its environment and reward model to explore the environment much more effectively.
Task Name:           Regular  Avoid  Hunt  Ambush  Rush
Standard model-free: 192      -16    -35   -40     1.3
Copy-model:          919      3      33    -30     178
I2A:                 859      23     334   294     214
Figure 6: Minipacman environment. Left: Two frames from a minipacman game. Frames are 15 × 19 RGB images. The player is green, dangerous ghosts red, food dark blue, empty corridors black, power pills in cyan. After eating a power pill (right frame), the player can eat the 4 weak ghosts (yellow). Right: Performance after 300 million environment steps for different agents and all tasks. Note I2A clearly outperforms the other two agents on all tasks with sparse rewards. # 6 Related work
1707.06203#30
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06209
30
tors good enough to include in the final question around half of the time, resulting in 36.1% of the distractors in the final dataset being generated by the model (because workers were only allowed to pick two predicted distractors, the theoretical max-
Table 1 (selected distractor prediction model outputs):
Q: Compounds containing an atom of what element, bonded in a hydrocarbon framework, are classified as amines? A: nitrogen. Predictions: oxygen (0.982), hydrogen (0.962), nutrient (0.942), calcium (0.938), silicon (0.938), soil (0.9365)
Q: Elements have orbitals that are filled with what? A: electrons. Predictions: ions (0.975), atoms (0.959), crystals (0.952), protons (0.951), neutrons (0.946), photons (0.912)
Q: Many species use their body shape and coloration to avoid being detected by what? A: predators. Predictions: viruses (0.912), ecosystems (0.896), frogs (0.896), distances (0.8952), males (0.877), crocodiles (0.869)
Q: The small amount of energy
1707.06209#30
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
1707.06203
31
# 6 Related work Some recent work has focused on applying deep learning to model-based RL. A common approach is to learn a neural model of the environment, including from raw observations, and use it in classical planning algorithms such as trajectory optimization [29–31]. These studies however do not address a possible mismatch between the learned model and the true environment. Model imperfection has attracted particular attention in robotics, when transferring policies from simulation to real environments [32–34]. There, the environment model is given, not learned, and used for pretraining, not planning at test time. Liu et al. [35] also learn to extract information from trajectories, but in the context of imitation learning. Bansal et al. [36] take a Bayesian approach to model imperfection, by selecting environment models on the basis of their actual control performance. The problem of making use of imperfect models was also approached in simplified environments in Talvitie [18, 19] by using techniques similar to scheduled sampling [37]; however these techniques break down in stochastic environments; they mostly address the compounding error issue but do not address fundamental model imperfections.
1707.06203#31
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06203
32
A principled way to deal with imperfect models is to capture model uncertainty, e.g. by using Gaussian Process models of the environment; see Deisenroth and Rasmussen [15]. The disadvantage of this method is its high computational cost; it also assumes that the model uncertainty is well calibrated, and it lacks a mechanism that can learn to compensate for possible miscalibration of uncertainty. Cutler et al. [38] consider RL with a hierarchy of models of increasing (known) fidelity. A recent multi-task GP extension of this study can further help to mitigate the impact of model misspecification, but again suffers from a high computational burden in large domains; see Marco et al. [39]. A number of approaches use models to create additional synthetic training data, from Dyna [40] to more recent work, e.g. Gu et al. [41] and Venkatraman et al. [42]; these models increase data efficiency, but are not used by the agent at test time.
6 For example, in the ‘avoid’ game, any event is negatively rewarded, and the optimal strategy is for the agent to clear a small space from food and use it to continuously escape the ghosts.
7 It is not necessary to provide the reward vector w_rew to the baseline agents, as it is equivalent to a constant bias.
1707.06203#32
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06209
32
Table 1: Selected distractor prediction model outputs. For each QA pair, the top six predictions are listed in row 3 (ranking score in parentheses). Boldfaced candidates were accepted by crowd workers. imum is 66%). Acceptance rates were higher in the case of short answers, with almost none accepted for the few cases with very long answers. The remainder of this paper will investigate properties of SciQ, the dataset we generated by following the methodology described in this section. We present system and human performance, and we show that SciQ can be used as additional training data to improve model performance on real science exams.

# 3.3 Dataset properties

SciQ has a total of 13,679 multiple choice questions. We randomly shuffled this dataset and split it into training, validation and test portions, with 1000 questions in each of the validation and test portions, and the remainder in train. In Figure 2 we show the distribution of question and answer lengths in the data. For the most part, questions and answers in the dataset are relatively short, though there are some longer questions.

[Figure 2 plots absolute frequency on a log scale against length in tokens (0–60), with separate series for question length, answer length, and distractor length.]

Figure 2: Total counts of question, answer and distractor length, measured in number of tokens, calculated across the training set.
1707.06209#32
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
1707.06203
33
Tamar et al. [43], Silver et al. [44], and Oh et al. [45] all present neural networks whose architectures mimic classical iterative planning algorithms, and which are trained by reinforcement learning or to predict user-defined, high-level features; in these, there is no explicit environment model. In our case, we use explicit environment models that are trained to predict low-level observations, which allows us to exploit additional unsupervised learning signals for training. This procedure is expected to be beneficial in environments with sparse rewards, where unsupervised modelling losses can complement return maximization as a learning target, as recently explored in Jaderberg et al. [46] and Mirowski et al. [47]. Internal models can also be used to improve credit assignment in reinforcement learning: Henaff et al. [48] learn models of discrete-action environments, and exploit the effective differentiability of the model with respect to the actions by applying continuous-control planning algorithms to derive a plan; Schmidhuber [49] uses an environment model to turn environment cost minimization into network activity minimization. Kansky et al. [50] learn symbolic network models of the environment and use them for planning, but are given the relevant abstractions by a hand-crafted vision system.
1707.06203#33
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06209
33
Figure 2: Total counts of question, answer and distractor length, measured in number of tokens, calculated across the training set. Each question also has an associated passage used when generating the question. Because the multiple choice question is trivial to answer when given the correct passage, the multiple choice version of SciQ does not include the passage; systems must retrieve their own background knowledge when answering the question. Because we have the associated passage, we additionally created a direct-answer version of SciQ, which has the passage and the question, but no answer options. A small percentage of the passages were obtained from unreleasable texts, so the direct answer version of SciQ is slightly smaller, with 10481 questions in train, 887 in dev, and 884 in test.

Model      Accuracy
Aristo     77.4
Lucene     80.0
TableILP   31.8
AS Reader  74.1
GA Reader  73.8
Humans     87.8 ± 0.045
1707.06209#33
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]
1707.06203
34
Kansky et al. [50] learn symbolic network models of the environment and use them for planning, but are given the relevant abstractions by a hand-crafted vision system. Close to our work is a study by Hamrick et al. [51]: they present a neural architecture that queries learned expert models, but focus on meta-control for continuous contextual bandit problems. Pascanu et al. [52] extend this work by focusing on explicit planning in sequential environments, and learn how to construct a plan iteratively. The general idea of learning to leverage an internal model in arbitrary ways was also discussed by Schmidhuber [53].

# 7 Discussion

We presented I2A, an approach combining model-free and model-based ideas to implement imagination-augmented RL: learning to interpret environment models in order to augment model-free decisions. I2A outperforms model-free baselines on MiniPacman and on the challenging, combinatorial domain of Sokoban. We demonstrated that, unlike classical model-based RL and planning methods, I2A is able to successfully use imperfect models (including models without reward predictions), hence significantly broadening the applicability of model-based RL concepts and ideas.
1707.06203#34
Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
http://arxiv.org/pdf/1707.06203
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra
cs.LG, cs.AI, stat.ML
null
null
cs.LG
20170719
20180214
[ { "id": "1707.03374" }, { "id": "1703.01250" }, { "id": "1511.09249" }, { "id": "1611.03673" }, { "id": "1610.03518" }, { "id": "1705.07177" }, { "id": "1603.08983" }, { "id": "1703.09260" }, { "id": "1611.05397" }, { "id": "1707.03497" }, { "id": "1511.07111" }, { "id": "1604.00289" }, { "id": "1612.08810" } ]
1707.06209
34
Model      Accuracy
Aristo     77.4
Lucene     80.0
TableILP   31.8
AS Reader  74.1
GA Reader  73.8
Humans     87.8 ± 0.045

Table 2: Test set accuracy of existing models on the multiple choice version of SciQ.

Qualitative Evaluation. We created a crowdsourcing task with the following setup: a person was presented with an original science exam question and a crowdsourced question. The instructions were to choose which of the two questions was more likely to be the real exam question. We randomly drew 100 original questions and 100 instances from the SciQ training set and presented the two options in random order. People identified the science exam question in 55% of the cases, a rate whose deviation from random guessing (the null hypothesis) is not statistically significant at the p=0.05 level4.

4 Using the normal approximation.

# 4 SciQ Experiments

# 4.1 System performance

We evaluated several state-of-the-art science QA systems, reading comprehension models, and human performance on SciQ.
1707.06209#34
Crowdsourcing Multiple Choice Science Questions
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
http://arxiv.org/pdf/1707.06209
Johannes Welbl, Nelson F. Liu, Matt Gardner
cs.HC, cs.AI, cs.CL, stat.ML
accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
null
cs.HC
20170719
20170719
[ { "id": "1606.06031" }, { "id": "1604.04315" } ]