As with all model-based RL methods, I2As trade off environment interactions for computation by pondering before acting. This is essential in irreversible domains, where actions can have catastrophic outcomes, such as in Sokoban. In our experiments, the I2A was always less than an order of magnitude slower per interaction than the model-free baselines. The amount of computation can be varied (it grows linearly with the number and depth of rollouts); we therefore expect I2As to greatly benefit from advances in dynamic compute resource allocation (e.g. Graves [54]). Another avenue for future research is abstract environment models: learning predictive models at the "right" level of complexity that can be evaluated efficiently at test time will help scale I2As to richer domains.
Remarkably, on Sokoban I2As compare favourably to a strong planning baseline (MCTS) with a perfect environment model: at comparable performance, I2As require far fewer function calls to the model than MCTS, because their model rollouts are guided towards relevant parts of the state space by a learned rollout policy. This points to further potential improvement by training rollout policies that "learn to query" imperfect models in a task-relevant way.
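As a back-of-envelope illustration of the scaling noted above (our own sketch, not from the paper, with purely illustrative numbers): the number of environment-model calls an I2A makes per action grows linearly with the number and depth of rollouts, whereas a search such as MCTS spends roughly one model call per node expansion across its simulations.

```python
def i2a_model_calls(n_rollouts: int, depth: int) -> int:
    # One model call per imagined step: linear in both factors.
    return n_rollouts * depth

def mcts_model_calls(n_simulations: int) -> int:
    # Roughly one model call (node expansion) per simulation.
    return n_simulations

# Illustrative numbers only: a handful of shallow guided rollouts
# versus the many simulations an unguided search might need.
print(i2a_model_calls(n_rollouts=5, depth=5))   # 25 calls per action
print(mcts_model_calls(n_simulations=500))      # 500 calls per action
```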
[1707.06203] Imagination-Augmented Agents for Deep Reinforcement Learning
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra. cs.LG, cs.AI, stat.ML. Published 2017-07-19, updated 2018-02-14. http://arxiv.org/pdf/1707.06203

Abstract: We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
Multiple Choice Setting. We used the Aristo ensemble (Clark et al., 2016), and two of its individual components, a simple information retrieval baseline (Lucene) and a table-based integer linear programming model (TableILP), to evaluate SciQ. We also evaluate two competitive neural reading comprehension models: the Attention Sum Reader (AS Reader, a GRU with a pointer-attention mechanism; Kadlec et al. (2016)) and the Gated Attention Reader (GA Reader, an AS Reader with additional gated attention layers; Dhingra et al. (2016)). These reading comprehension methods require a supporting text passage to answer a question. We use the same corpus as Aristo's Lucene component to retrieve a text passage, by formulating five queries based on the question and answer (footnote 5) and then concatenating the top three results from each query into a passage. We train the reading comprehension models on the training set with hyperparameters recommended by prior work (Onishi et al. (2016) for the AS Reader and Dhingra et al. (2016) for the GA Reader), with early stopping on the validation data (footnote 6).

[1707.06209] Crowdsourcing Multiple Choice Science Questions
Johannes Welbl, Nelson F. Liu, Matt Gardner. cs.HC, cs.AI, cs.CL, stat.ML. Accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017. Published 2017-07-19. http://arxiv.org/pdf/1707.06209

Abstract: We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
1707.06203 | 36 | # Acknowledgements
We thank Victor Valdes for designing and implementing the Sokoban environment, Joseph Modayil for reviewing an early version of this paper, and Ali Eslami, Hado van Hasselt, Neil Rabinowitz, Tom Schaul, and Yori Zwols for various help and feedback.
# References
[1] Shane Legg and Marcus Hutter. Universal intelligence: A definition of machine intelligence. Minds and Machines, 17(4):391–444, 2007.
[2] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
[3] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pages 1928–1937, 2016.
Human accuracy is estimated using a sampled subset of 650 questions, with 13 different people each answering 50 questions. When answering the questions, people were allowed to query the web, just as the systems were. Table 2 shows the results of this evaluation. Aristo performance is slightly better on this set than on real science exams, where Aristo achieves 71.3% accuracy (Clark et al., 2016) (footnote 7). Because TableILP uses a hand-collected set of background knowledge that does not cover the topics in SciQ, its performance is substantially worse here than on its original test set. Neural models perform reasonably well on this dataset, though, interestingly, they are not able to outperform a very simple information retrieval baseline, even when using exactly the same background information. This suggests that SciQ is a useful dataset for studying reading comprehension models in medium-data settings.
[4] John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1889–1897, 2015.

[5] Demis Hassabis, Dharshan Kumaran, and Eleanor A Maguire. Using imagination to understand the neural basis of episodic memory. Journal of Neuroscience, 27(52):14365–14374, 2007.

[6] Daniel L Schacter, Donna Rose Addis, Demis Hassabis, Victoria C Martin, R Nathan Spreng, and Karl K Szpunar. The future of memory: remembering, imagining, and the brain. Neuron, 76(4):677–694, 2012.

[7] Demis Hassabis, Dharshan Kumaran, Seralynne D Vann, and Eleanor A Maguire. Patients with hippocampal amnesia cannot imagine new experiences. Proceedings of the National Academy of Sciences, 104(5):1726–1731, 2007.

[8] Edward C Tolman. Cognitive maps in rats and men. Psychological Review, 55(4):189, 1948.
[9] Anthony Dickinson and Bernard Balleine. The Role of Learning in the Operation of Motivational Systems. John Wiley & Sons, Inc., 2002.
Footnote 5: The question text itself, plus each of the four answer options appended to the question text.

Footnote 6: For training and hyperparameter details, see the Appendix.

Footnote 7: We did not retrain the Aristo ensemble for SciQ; it might overly rely on TableILP, which does not perform well here.
| Dataset | AS Reader | GA Reader |
|---|---|---|
| 4th grade | 40.7% | 37.6% |
| 4th grade + SciQ | 45.0% | 45.4% |
| Difference | +4.3% | +7.8% |
| 8th grade | 41.2% | 41.0% |
| 8th grade + SciQ | 43.0% | 44.3% |
| Difference | +1.8% | +3.3% |
Table 3: Model accuracies on the real science questions validation set when trained on 4th / 8th grade exam questions alone, and when adding SciQ.
[10] Brad E Pfeiffer and David J Foster. Hippocampal place-cell sequences depict future paths to remembered goals. Nature, 497(7447):74–79, 2013.

[11] Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. Building machines that learn and think like people. arXiv preprint arXiv:1604.00289, 2016.

[12] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[13] Jing Peng and Ronald J Williams. Efficient learning and planning within the Dyna framework. Adaptive Behavior, 1(4):437–454, 1993.
Direct Answer Setting. We additionally present a baseline on the direct answer version of SciQ. We use the Bidirectional Attention Flow model (BiDAF; Seo et al. (2016)), which recently achieved state-of-the-art results on SQuAD (Rajpurkar et al., 2016). We trained BiDAF on the training portion of SciQ and evaluated on the test set. BiDAF achieves a 66.7% exact match and 75.7 F1 score, which is 1.3% and 1.6% below the model's performance on SQuAD.
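The exact-match and F1 figures quoted above are the standard SQuAD-style answer-span metrics. A minimal sketch of their computation (our own illustration; the official scoring script additionally strips punctuation and articles before comparing):

```python
from collections import Counter

def exact_match(prediction: str, gold: str) -> bool:
    # Exact match after lowercasing and whitespace normalization.
    return " ".join(prediction.lower().split()) == " ".join(gold.lower().split())

def f1(prediction: str, gold: str) -> float:
    # Token-level F1: harmonic mean of precision and recall over the
    # multiset of tokens shared by prediction and gold answer.
    pred_toks, gold_toks = prediction.lower().split(), gold.lower().split()
    overlap = sum((Counter(pred_toks) & Counter(gold_toks)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

print(f1("the cell membrane", "cell membrane"))  # ≈ 0.8
```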
# 4.2 Using SciQ to answer exam questions
[14] Pieter Abbeel and Andrew Y Ng. Exploration and apprenticeship learning in reinforcement learning. In Proceedings of the 22nd International Conference on Machine Learning, pages 1–8. ACM, 2005.

[15] Marc Deisenroth and Carl E Rasmussen. PILCO: A model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 465–472, 2011.

[16] Sergey Levine and Pieter Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems, pages 1071–1079, 2014.

[17] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. ICLR, 2016.
[18] Erik Talvitie. Model regularization for stable sample rollouts. In UAI, pages 780–789, 2014.
Our last experiment with SciQ shows its usefulness as training data for models that answer real science questions. We collected a corpus of 4th and 8th grade science exam questions and used the AS Reader and GA Reader to answer these questions (footnote 8). Table 3 shows model performances when only using real science questions as training data, and when augmenting the training data with SciQ. By adding SciQ, performance for both the AS Reader and the GA Reader improves on both grade levels, in a few cases substantially. This contrasts with our earlier attempts using purely synthetic data, where we saw models overfit the synthetic data and an overall performance decrease. Our successful transfer of information from SciQ to real science exam questions shows that the question distribution is similar to that of real science questions.
# 5 Conclusion
We have presented a method for crowdsourcing the creation of multiple choice QA data, with
Footnote 8: There are approx. 3200 8th grade training questions and 1200 4th grade training questions. Some of the questions come from www.allenai.org/data, some are proprietary.
[19] Erik Talvitie. Agnostic system identification for Monte Carlo planning. In AAAI, pages 2986–2992, 2015.

[20] Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L Lewis, and Satinder Singh. Action-conditional video prediction using deep networks in Atari games. In Advances in Neural Information Processing Systems, pages 2863–2871, 2015.

[21] Silvia Chiappa, Sébastien Racanière, Daan Wierstra, and Shakir Mohamed. Recurrent environment simulators. In 5th International Conference on Learning Representations, 2017.
[22] Felix Leibfried, Nate Kushman, and Katja Hofmann. A deep learning approach for joint video frame and reward prediction in Atari games. CoRR, abs/1611.07078, 2016. URL http://arxiv.org/abs/1611.07078.
[23] Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-RMSprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2), 2012.
a particular focus on science questions. Using this methodology, we have constructed a dataset of 13.7K science questions, called SciQ, which we release for future research. We have shown through baseline evaluations that this dataset is a useful research resource, both to investigate neural model performance in medium-sized data settings, and to augment training data for answering real science exam questions.

There are multiple strands for possible future work. One direction is a systematic exploration of multitask settings to best exploit this new dataset. Possible extensions for the direction of generating answer distractors could lie in the adaptation of this idea for negative sampling, e.g. in KB population. Another direction is to further bootstrap the data we obtained to improve automatic document selection, question generation, and distractor prediction, in order to generate questions fully automatically.
# References
Manish Agarwal and Prashanth Mannem. 2011. Automatic gap-fill question generation from text books. In Proceedings of the 6th Workshop on Innovative Use of NLP for Building Educational Applications. Association for Computational Linguistics, Stroudsburg, PA, USA, IUNLPBEA '11, pages 56–64. http://dl.acm.org/citation.cfm?id=2043132.2043139.
1707.06203 | 41 | [24] https://drive.google.com/open?id=0B4tKsKnCCZtQY2tTOThucHVxUTQ, 2017.
[25] Gerald Tesauro and Gregory R Galperin. On-line policy improvement using Monte-Carlo search. In NIPS, volume 96, pages 1068–1074, 1996.

[26] Rémi Coulom. Efficient selectivity and backup operators in Monte-Carlo tree search. In International Conference on Computers and Games, pages 72–83. Springer, 2006.

[27] Benjamin E Childs, James H Brodeur, and Levente Kocsis. Transpositions and move groups in Monte Carlo tree search. In Computational Intelligence and Games, 2008. CIG'08. IEEE Symposium On, pages 389–395. IEEE, 2008.

[28] Christopher D Rosin. Nested rollout policy adaptation for Monte Carlo tree search. In IJCAI, pages 649–654, 2011.
[29] Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems, pages 2746–2754, 2015.
Itziar Aldabe and Montse Maritxalar. 2010. Automatic Distractor Generation for Domain Specific Texts. Springer Berlin Heidelberg, Berlin, Heidelberg, pages 27–38.

Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, 18-21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1533–1544. http://aclweb.org/anthology/D/D13/D13-1160.pdf.

Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. CoRR abs/1506.02075. http://arxiv.org/abs/1506.02075.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics.
{
"id": "1606.06031"
},
{
"id": "1604.04315"
}
] |
1707.06203 | 42 | [30] Ian Lenz, Ross A Knepper, and Ashutosh Saxena. DeepMPC: Learning deep latent features for model predictive control. In Robotics: Science and Systems, 2015.
[31] Chelsea Finn and Sergey Levine. Deep visual foresight for planning robot motion. In IEEE International Conference on Robotics and Automation (ICRA), 2017.
[32] Matthew E Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10(Jul):1633–1685, 2009.
[33] Eric Tzeng, Coline Devin, Judy Hoffman, Chelsea Finn, Xingchao Peng, Sergey Levine, Kate Saenko, and Trevor Darrell. Towards adapting deep visuomotor representations from simulated to real environments. arXiv preprint arXiv:1511.07111, 2015.
[34] Paul Christiano, Zain Shah, Igor Mordatch, Jonas Schneider, Trevor Blackwell, Joshua Tobin, Pieter Abbeel, and Wojciech Zaremba. Transfer from simulation to real world through learning deep inverse dynamics model. arXiv preprint arXiv:1610.03518, 2016. | 1707.06203#42 | Imagination-Augmented Agents for Deep Reinforcement Learning | We introduce Imagination-Augmented Agents (I2As), a novel architecture for
deep reinforcement learning combining model-free and model-based aspects. In
contrast to most existing model-based reinforcement learning and planning
methods, which prescribe how a model should be used to arrive at a policy, I2As
learn to interpret predictions from a learned environment model to construct
implicit plans in arbitrary ways, by using the predictions as additional
context in deep policy networks. I2As show improved data efficiency,
performance, and robustness to model misspecification compared to several
baselines. | http://arxiv.org/pdf/1707.06203 | Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170719 | 20180214 | [
{
"id": "1707.03374"
},
{
"id": "1703.01250"
},
{
"id": "1511.09249"
},
{
"id": "1611.03673"
},
{
"id": "1610.03518"
},
{
"id": "1705.07177"
},
{
"id": "1603.08983"
},
{
"id": "1703.09260"
},
{
"id": "1611.05397"
},
{
"id": "1707.03497"
},
{
"id": "1511.07111"
},
{
"id": "1604.00289"
},
{
"id": "1612.08810"
}
] |
1707.06209 | 42 | Leo Breiman. 2001. Random forests. Machine Learning 45(1):5–32.
Peter Clark. 2015. Elementary school science and math tests as a driver for ai: Take the aristo challenge! In the Twenty-Ninth AAAI Conference on Artificial Intelligence. AAAI Press, AAAI'15, pages 4019–4021. http://dl.acm.org/citation.cfm?id=2888116.2888274.
Peter Clark, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Turney, and Daniel Khashabi. 2016. Combining retrieval, statistics, and inference to answer elementary science questions. In the Thirtieth AAAI Conference on Artificial Intelligence. AAAI Press, AAAI'16, pages 2580–2586. http://dl.acm.org/citation.cfm?id=3016100.3016262. | 1707.06209#42 | Crowdsourcing Multiple Choice Science Questions | We present a novel method for obtaining high-quality, domain-targeted
multiple choice questions from crowd workers. Generating these questions can be
difficult without trading away originality, relevance or diversity in the
answer options. Our method addresses these problems by leveraging a large
corpus of domain-specific text and a small set of existing questions. It
produces model suggestions for document selection and answer distractor choice
which aid the human question generation process. With this method we have
assembled SciQ, a dataset of 13.7K multiple choice science exam questions
(Dataset available at http://allenai.org/data.html). We demonstrate that the
method produces in-domain questions by providing an analysis of this new
dataset and by showing that humans cannot distinguish the crowdsourced
questions from original questions. When using SciQ as additional training data
to existing questions, we observe accuracy improvements on real science exams. | http://arxiv.org/pdf/1707.06209 | Johannes Welbl, Nelson F. Liu, Matt Gardner | cs.HC, cs.AI, cs.CL, stat.ML | accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017 | null | cs.HC | 20170719 | 20170719 | [
{
"id": "1606.06031"
},
{
"id": "1604.04315"
}
] |
1707.06203 | 43 | [35] YuXuan Liu, Abhishek Gupta, Pieter Abbeel, and Sergey Levine. Imitation from observation: Learning to imitate behaviors from raw video via context translation. arXiv preprint arXiv:1707.03374, 2017.
[36] Somil Bansal, Roberto Calandra, Ted Xiao, Sergey Levine, and Claire J Tomlin. Goal-driven dynamics learning via bayesian optimization. arXiv preprint arXiv:1703.09260, 2017.
[37] Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems, pages 1171–1179, 2015.
[38] Mark Cutler, Thomas J Walsh, and Jonathan P How. Real-world reinforcement learning via multifidelity simulators. IEEE Transactions on Robotics, 31(3):655–671, 2015.
[39] Alonso Marco, Felix Berkenkamp, Philipp Hennig, Angela P Schoellig, Andreas Krause, Stefan Schaal, and Sebastian Trimpe. Virtual vs. real: Trading off simulations and physical experiments in reinforcement learning with bayesian optimization. arXiv preprint arXiv:1703.01250, 2017. | 1707.06203#43 | Imagination-Augmented Agents for Deep Reinforcement Learning | We introduce Imagination-Augmented Agents (I2As), a novel architecture for
deep reinforcement learning combining model-free and model-based aspects. In
contrast to most existing model-based reinforcement learning and planning
methods, which prescribe how a model should be used to arrive at a policy, I2As
learn to interpret predictions from a learned environment model to construct
implicit plans in arbitrary ways, by using the predictions as additional
context in deep policy networks. I2As show improved data efficiency,
performance, and robustness to model misspecification compared to several
baselines. | http://arxiv.org/pdf/1707.06203 | Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170719 | 20180214 | [
{
"id": "1707.03374"
},
{
"id": "1703.01250"
},
{
"id": "1511.09249"
},
{
"id": "1611.03673"
},
{
"id": "1610.03518"
},
{
"id": "1705.07177"
},
{
"id": "1603.08983"
},
{
"id": "1703.09260"
},
{
"id": "1611.05397"
},
{
"id": "1707.03497"
},
{
"id": "1511.07111"
},
{
"id": "1604.00289"
},
{
"id": "1612.08810"
}
] |
1707.06209 | 43 | Peter Clark, Philip Harrison, and Niranjan Balasubramanian. 2013. A study of the knowledge base requirements for passing an elementary science test. In Proceedings of the 2013 Workshop on Automated Knowledge Base Construction. ACM, New York, NY, USA, AKBC '13, pages 37–42. https://doi.org/10.1145/2509558.2509565.
Rui Correia, Jorge Baptista, Nuno Mamede, Isabel Trancoso, and Maxine Eskenazi. 2010. Automatic generation of cloze question distractors. In Proceedings of the Interspeech 2010 Satellite Workshop on Second Language Studies: Acquisition, Learning, Education and Technology, Waseda University, Tokyo, Japan.
Bhuwan Dhingra, Hanxiao Liu, William W. Cohen, and Ruslan Salakhutdinov. 2016. Gated-attention readers for text comprehension. CoRR abs/1606.01549. http://arxiv.org/abs/1606.01549. | 1707.06209#43 | Crowdsourcing Multiple Choice Science Questions | We present a novel method for obtaining high-quality, domain-targeted
multiple choice questions from crowd workers. Generating these questions can be
difficult without trading away originality, relevance or diversity in the
answer options. Our method addresses these problems by leveraging a large
corpus of domain-specific text and a small set of existing questions. It
produces model suggestions for document selection and answer distractor choice
which aid the human question generation process. With this method we have
assembled SciQ, a dataset of 13.7K multiple choice science exam questions
(Dataset available at http://allenai.org/data.html). We demonstrate that the
method produces in-domain questions by providing an analysis of this new
dataset and by showing that humans cannot distinguish the crowdsourced
questions from original questions. When using SciQ as additional training data
to existing questions, we observe accuracy improvements on real science exams. | http://arxiv.org/pdf/1707.06209 | Johannes Welbl, Nelson F. Liu, Matt Gardner | cs.HC, cs.AI, cs.CL, stat.ML | accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017 | null | cs.HC | 20170719 | 20170719 | [
{
"id": "1606.06031"
},
{
"id": "1604.04315"
}
] |
1707.06203 | 44 | [40] Richard S Sutton. Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the seventh international conference on machine learning, pages 216–224, 1990.
[41] Shixiang Gu, Timothy Lillicrap, Ilya Sutskever, and Sergey Levine. Continuous deep q-learning with model-based acceleration. In International Conference on Machine Learning, pages 2829–2838, 2016.
[42] Arun Venkatraman, Roberto Capobianco, Lerrel Pinto, Martial Hebert, Daniele Nardi, and J Andrew Bagnell. Improved learning of dynamics models for control. In International Symposium on Experimental Robotics, pages 703–713. Springer, 2016.
[43] Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, and Pieter Abbeel. Value iteration networks. In Advances in Neural Information Processing Systems, pages 2154–2162, 2016.
[44] David Silver, Hado van Hasselt, Matteo Hessel, Tom Schaul, Arthur Guez, Tim Harley, Gabriel Dulac-Arnold, David Reichert, Neil Rabinowitz, Andre Barreto, et al. The predictron: End-to-end learning and planning. arXiv preprint arXiv:1612.08810, 2016. | 1707.06203#44 | Imagination-Augmented Agents for Deep Reinforcement Learning | We introduce Imagination-Augmented Agents (I2As), a novel architecture for
deep reinforcement learning combining model-free and model-based aspects. In
contrast to most existing model-based reinforcement learning and planning
methods, which prescribe how a model should be used to arrive at a policy, I2As
learn to interpret predictions from a learned environment model to construct
implicit plans in arbitrary ways, by using the predictions as additional
context in deep policy networks. I2As show improved data efficiency,
performance, and robustness to model misspecification compared to several
baselines. | http://arxiv.org/pdf/1707.06203 | Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170719 | 20180214 | [
{
"id": "1707.03374"
},
{
"id": "1703.01250"
},
{
"id": "1511.09249"
},
{
"id": "1611.03673"
},
{
"id": "1610.03518"
},
{
"id": "1705.07177"
},
{
"id": "1603.08983"
},
{
"id": "1703.09260"
},
{
"id": "1611.05397"
},
{
"id": "1707.03497"
},
{
"id": "1511.07111"
},
{
"id": "1604.00289"
},
{
"id": "1612.08810"
}
] |
1707.06209 | 44 | Michael Heilman and Noah A. Smith. 2010. Good question! statistical ranking for question generation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Stroudsburg, PA, USA, HLT '10, pages 609–617. http://dl.acm.org/citation.cfm?id=1857999.1858085.
Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems (NIPS). http://arxiv.org/abs/1506.03340.
Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey, and David Berthelot. 2016. Wikireading: A novel large-scale language understanding task over wikipedia. CoRR abs/1608.03542. http://arxiv.org/abs/1608.03542. | 1707.06209#44 | Crowdsourcing Multiple Choice Science Questions | We present a novel method for obtaining high-quality, domain-targeted
multiple choice questions from crowd workers. Generating these questions can be
difficult without trading away originality, relevance or diversity in the
answer options. Our method addresses these problems by leveraging a large
corpus of domain-specific text and a small set of existing questions. It
produces model suggestions for document selection and answer distractor choice
which aid the human question generation process. With this method we have
assembled SciQ, a dataset of 13.7K multiple choice science exam questions
(Dataset available at http://allenai.org/data.html). We demonstrate that the
method produces in-domain questions by providing an analysis of this new
dataset and by showing that humans cannot distinguish the crowdsourced
questions from original questions. When using SciQ as additional training data
to existing questions, we observe accuracy improvements on real science exams. | http://arxiv.org/pdf/1707.06209 | Johannes Welbl, Nelson F. Liu, Matt Gardner | cs.HC, cs.AI, cs.CL, stat.ML | accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017 | null | cs.HC | 20170719 | 20170719 | [
{
"id": "1606.06031"
},
{
"id": "1604.04315"
}
] |
1707.06203 | 45 | [45] Junhyuk Oh, Satinder Singh, and Honglak Lee. Value prediction network. arXiv preprint arXiv:1707.03497, 2017.
[46] Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. arXiv preprint arXiv:1611.05397, 2016.
[47] Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andy Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, et al. Learning to navigate in complex environments. arXiv preprint arXiv:1611.03673, 2016.
[48] Mikael Henaff, William F Whitney, and Yann LeCun. Model-based planning in discrete action spaces. arXiv preprint arXiv:1705.07177, 2017.
[49] Jürgen Schmidhuber. An on-line algorithm for dynamic reinforcement learning and planning in reactive environments. In Neural Networks, 1990., 1990 IJCNN International Joint Conference on, pages 253â258. IEEE, 1990. | 1707.06203#45 | Imagination-Augmented Agents for Deep Reinforcement Learning | We introduce Imagination-Augmented Agents (I2As), a novel architecture for
deep reinforcement learning combining model-free and model-based aspects. In
contrast to most existing model-based reinforcement learning and planning
methods, which prescribe how a model should be used to arrive at a policy, I2As
learn to interpret predictions from a learned environment model to construct
implicit plans in arbitrary ways, by using the predictions as additional
context in deep policy networks. I2As show improved data efficiency,
performance, and robustness to model misspecification compared to several
baselines. | http://arxiv.org/pdf/1707.06203 | Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170719 | 20180214 | [
{
"id": "1707.03374"
},
{
"id": "1703.01250"
},
{
"id": "1511.09249"
},
{
"id": "1611.03673"
},
{
"id": "1610.03518"
},
{
"id": "1705.07177"
},
{
"id": "1603.08983"
},
{
"id": "1703.09260"
},
{
"id": "1611.05397"
},
{
"id": "1707.03497"
},
{
"id": "1511.07111"
},
{
"id": "1604.00289"
},
{
"id": "1612.08810"
}
] |
1707.06209 | 45 | Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading children's books with explicit memory representations. CoRR abs/1511.02301. http://arxiv.org/abs/1511.02301.
Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. 2016. Text understanding with the attention sum reader network. CoRR abs/1603.01547. http://arxiv.org/abs/1603.01547.
Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Peter Clark, Oren Etzioni, and Dan Roth. 2016. Question answering via integer programming over semi-structured knowledge. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, NY, USA, 9-15 July 2016. pages 1145–1152. http://www.ijcai.org/Abstract/16/166. | 1707.06209#45 | Crowdsourcing Multiple Choice Science Questions | We present a novel method for obtaining high-quality, domain-targeted
multiple choice questions from crowd workers. Generating these questions can be
difficult without trading away originality, relevance or diversity in the
answer options. Our method addresses these problems by leveraging a large
corpus of domain-specific text and a small set of existing questions. It
produces model suggestions for document selection and answer distractor choice
which aid the human question generation process. With this method we have
assembled SciQ, a dataset of 13.7K multiple choice science exam questions
(Dataset available at http://allenai.org/data.html). We demonstrate that the
method produces in-domain questions by providing an analysis of this new
dataset and by showing that humans cannot distinguish the crowdsourced
questions from original questions. When using SciQ as additional training data
to existing questions, we observe accuracy improvements on real science exams. | http://arxiv.org/pdf/1707.06209 | Johannes Welbl, Nelson F. Liu, Matt Gardner | cs.HC, cs.AI, cs.CL, stat.ML | accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017 | null | cs.HC | 20170719 | 20170719 | [
{
"id": "1606.06031"
},
{
"id": "1604.04315"
}
] |
1707.06203 | 46 | [50] Ken Kansky, Tom Silver, David A Mély, Mohamed Eldawy, Miguel Lázaro-Gredilla, Xinghua Lou, Nimrod Dorfman, Szymon Sidor, Scott Phoenix, and Dileep George. Schema networks: Zero-shot transfer with a generative causal model of intuitive physics. Accepted at International Conference for Machine Learning, 2017, 2017.
[51] Jessica B. Hamrick, Andy J. Ballard, Razvan Pascanu, Oriol Vinyals, Nicolas Heess, and Peter W. Battaglia. Metacontrol for adaptive imagination-based optimization. In Proceedings of the 5th International Conference on Learning Representations (ICLR 2017), 2017.
[52] Razvan Pascanu, Yujia Li, Oriol Vinyals, Nicolas Heess, David Reichert, Theophane Weber, Sebastien Racaniere, Lars Buesing, Daan Wierstra, and Peter Battaglia. Learning model-based planning from scratch. arXiv preprint, 2017.
[53] Jürgen Schmidhuber. On learning to think: Algorithmic information theory for novel combinations of reinforcement learning controllers and recurrent neural world models. arXiv preprint arXiv:1511.09249, 2015. | 1707.06203#46 | Imagination-Augmented Agents for Deep Reinforcement Learning | We introduce Imagination-Augmented Agents (I2As), a novel architecture for
deep reinforcement learning combining model-free and model-based aspects. In
contrast to most existing model-based reinforcement learning and planning
methods, which prescribe how a model should be used to arrive at a policy, I2As
learn to interpret predictions from a learned environment model to construct
implicit plans in arbitrary ways, by using the predictions as additional
context in deep policy networks. I2As show improved data efficiency,
performance, and robustness to model misspecification compared to several
baselines. | http://arxiv.org/pdf/1707.06203 | Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170719 | 20180214 | [
{
"id": "1707.03374"
},
{
"id": "1703.01250"
},
{
"id": "1511.09249"
},
{
"id": "1611.03673"
},
{
"id": "1610.03518"
},
{
"id": "1705.07177"
},
{
"id": "1603.08983"
},
{
"id": "1703.09260"
},
{
"id": "1611.05397"
},
{
"id": "1707.03497"
},
{
"id": "1511.07111"
},
{
"id": "1604.00289"
},
{
"id": "1612.08810"
}
] |
1707.06209 | 46 | Eric Gribkoff, Ashish Sabharwal, Peter Clark, and Oren Etzioni. 2015. Exploring markov logic networks for question answering. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015. pages 685–694. http://aclweb.org/anthology/D/D15/D15-1080.pdf.
Yang Li and Peter Clark. 2015. Answering elementary science questions by constructing coherent scenes using background knowledge. In EMNLP. pages 2007–2012.
Mausam, Michael Schmitz, Robert Bart, Stephen Soderland, and Oren Etzioni. 2012. Open language learning for information extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics, Stroudsburg, PA, USA, EMNLP-CoNLL '12, pages 523–534. http://dl.acm.org/citation.cfm?id=2390948.2391009. | 1707.06209#46 | Crowdsourcing Multiple Choice Science Questions | We present a novel method for obtaining high-quality, domain-targeted
multiple choice questions from crowd workers. Generating these questions can be
difficult without trading away originality, relevance or diversity in the
answer options. Our method addresses these problems by leveraging a large
corpus of domain-specific text and a small set of existing questions. It
produces model suggestions for document selection and answer distractor choice
which aid the human question generation process. With this method we have
assembled SciQ, a dataset of 13.7K multiple choice science exam questions
(Dataset available at http://allenai.org/data.html). We demonstrate that the
method produces in-domain questions by providing an analysis of this new
dataset and by showing that humans cannot distinguish the crowdsourced
questions from original questions. When using SciQ as additional training data
to existing questions, we observe accuracy improvements on real science exams. | http://arxiv.org/pdf/1707.06209 | Johannes Welbl, Nelson F. Liu, Matt Gardner | cs.HC, cs.AI, cs.CL, stat.ML | accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017 | null | cs.HC | 20170719 | 20170719 | [
{
"id": "1606.06031"
},
{
"id": "1604.04315"
}
] |
1707.06203 | 47 | [54] Alex Graves. Adaptive computation time for recurrent neural networks. arXiv preprint arXiv:1603.08983, 2016.
[55] Leemon C Baird III. Advantage updating. Technical report, Wright Lab. Technical Report WL-TR-93-1146, 1993.
[56] John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. Gradient estimation using stochastic computation graphs. In Advances in Neural Information Processing Systems, pages 3528–3536, 2015.
[57] Levente Kocsis and Csaba Szepesvári. Bandit based monte-carlo planning. In European conference on machine learning, pages 282–293. Springer, 2006.
[58] Sylvain Gelly and David Silver. Combining online and offline knowledge in uct. In Proceedings of the 24th international conference on Machine learning, pages 273–280. ACM, 2007.
[59] Joshua Taylor and Ian Parberry. Procedural generation of sokoban levels. In Proceedings of the International North American Conference on Intelligent Games and Simulation, pages 5–12, 2011. | 1707.06203#47 | Imagination-Augmented Agents for Deep Reinforcement Learning | We introduce Imagination-Augmented Agents (I2As), a novel architecture for
deep reinforcement learning combining model-free and model-based aspects. In
contrast to most existing model-based reinforcement learning and planning
methods, which prescribe how a model should be used to arrive at a policy, I2As
learn to interpret predictions from a learned environment model to construct
implicit plans in arbitrary ways, by using the predictions as additional
context in deep policy networks. I2As show improved data efficiency,
performance, and robustness to model misspecification compared to several
baselines. | http://arxiv.org/pdf/1707.06203 | Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170719 | 20180214 | [
{
"id": "1707.03374"
},
{
"id": "1703.01250"
},
{
"id": "1511.09249"
},
{
"id": "1611.03673"
},
{
"id": "1610.03518"
},
{
"id": "1705.07177"
},
{
"id": "1603.08983"
},
{
"id": "1703.09260"
},
{
"id": "1611.05397"
},
{
"id": "1707.03497"
},
{
"id": "1511.07111"
},
{
"id": "1604.00289"
},
{
"id": "1612.08810"
}
] |
1707.06209 | 47 | Ruslan Mitkov and Le An Ha. 2003. Computer-aided generation of multiple-choice tests. In Proceedings of the HLT-NAACL 03 Workshop on Building Educational Applications Using Natural Language Processing - Volume 2. Association for Computational Linguistics, Stroudsburg, PA, USA, HLT-NAACL-EDUC '03, pages 17–22. https://doi.org/10.3115/1118894.1118897.
Ruslan Mitkov, Le An Ha, Andrea Varga, and Luz Rello. 2009. Semantic similarity of distractors in multiple-choice tests: Extrinsic evaluation. In Proceedings of the Workshop on Geometrical Models of Natural Language Semantics. Association for Computational Linguistics, Stroudsburg, PA, USA, GEMS '09, pages 49–56. http://dl.acm.org/citation.cfm?id=1705415.1705422.
Generating diagnostic multiple choice comprehension cloze questions. In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP. Association for Computational Linguistics, Stroudsburg, PA, USA, pages 136–146. http://dl.acm.org/citation.cfm?id=2390384.2390401. | 1707.06209#47 | Crowdsourcing Multiple Choice Science Questions | We present a novel method for obtaining high-quality, domain-targeted
multiple choice questions from crowd workers. Generating these questions can be
difficult without trading away originality, relevance or diversity in the
answer options. Our method addresses these problems by leveraging a large
corpus of domain-specific text and a small set of existing questions. It
produces model suggestions for document selection and answer distractor choice
which aid the human question generation process. With this method we have
assembled SciQ, a dataset of 13.7K multiple choice science exam questions
(Dataset available at http://allenai.org/data.html). We demonstrate that the
method produces in-domain questions by providing an analysis of this new
dataset and by showing that humans cannot distinguish the crowdsourced
questions from original questions. When using SciQ as additional training data
to existing questions, we observe accuracy improvements on real science exams. | http://arxiv.org/pdf/1707.06209 | Johannes Welbl, Nelson F. Liu, Matt Gardner | cs.HC, cs.AI, cs.CL, stat.ML | accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017 | null | cs.HC | 20170719 | 20170719 | [
{
"id": "1606.06031"
},
{
"id": "1604.04315"
}
] |
1707.06203 | 48 | [60] Yoshio Murase, Hitoshi Matsubara, and Yuzuru Hiraga. Automatic making of sokoban problems. PRICAI'96: Topics in Artificial Intelligence, pages 592–600, 1996.
# Supplementary material for: Imagination-Augmented Agents for Deep Reinforcement Learning
# A Training and rollout policy distillation details
Each agent used in the paper defines a stochastic policy, i.e. a categorical distribution π(a_t|o_t; θ) over discrete actions a. The logits of π(a_t|o_t; θ) are computed by a neural network with parameters θ, taking observation o_t at timestep t as input. During training, to increase the probability of rewarding actions being taken, A3C applies an update Δθ to the parameters θ using policy gradient g(θ):
g(θ) = ∇_θ log π(a_t|o_t; θ) A(o_t, a_t)
where A(o_t, a_t) is an estimate of the advantage function [55]. In practice, we learn a value function V(o_t; θ_v) and use it to compute the advantage as the difference of the bootstrapped k-step return and the current value estimate:
deep reinforcement learning combining model-free and model-based aspects. In
contrast to most existing model-based reinforcement learning and planning
methods, which prescribe how a model should be used to arrive at a policy, I2As
learn to interpret predictions from a learned environment model to construct
implicit plans in arbitrary ways, by using the predictions as additional
context in deep policy networks. I2As show improved data efficiency,
performance, and robustness to model misspecification compared to several
baselines. | http://arxiv.org/pdf/1707.06203 | Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170719 | 20180214 | [
{
"id": "1707.03374"
},
{
"id": "1703.01250"
},
{
"id": "1511.09249"
},
{
"id": "1611.03673"
},
{
"id": "1610.03518"
},
{
"id": "1705.07177"
},
{
"id": "1603.08983"
},
{
"id": "1703.09260"
},
{
"id": "1611.05397"
},
{
"id": "1707.03497"
},
{
"id": "1511.07111"
},
{
"id": "1604.00289"
},
{
"id": "1612.08810"
}
] |
1707.06209 | 48 | Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and
Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. CoRR abs/1611.09268. http://arxiv.org/abs/1611.09268.
Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David A. McAllester. 2016. Who did what: A large-scale person-centered cloze dataset. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016. pages 2230–2235. http://aclweb.org/anthology/D/D16/D16-1241.pdf.
Andreas Papasalouros, Konstantinos Kanaris, and Konstantinos Kotis. 2008. Automatic generation of multiple choice questions from domain ontologies. In Miguel Baptista Nunes and Maggie McPherson, editors, e-Learning. IADIS, pages 427–434. | 1707.06209#48 | Crowdsourcing Multiple Choice Science Questions | We present a novel method for obtaining high-quality, domain-targeted
multiple choice questions from crowd workers. Generating these questions can be
difficult without trading away originality, relevance or diversity in the
answer options. Our method addresses these problems by leveraging a large
corpus of domain-specific text and a small set of existing questions. It
produces model suggestions for document selection and answer distractor choice
which aid the human question generation process. With this method we have
assembled SciQ, a dataset of 13.7K multiple choice science exam questions
(Dataset available at http://allenai.org/data.html). We demonstrate that the
method produces in-domain questions by providing an analysis of this new
dataset and by showing that humans cannot distinguish the crowdsourced
questions from original questions. When using SciQ as additional training data
to existing questions, we observe accuracy improvements on real science exams. | http://arxiv.org/pdf/1707.06209 | Johannes Welbl, Nelson F. Liu, Matt Gardner | cs.HC, cs.AI, cs.CL, stat.ML | accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017 | null | cs.HC | 20170719 | 20170719 | [
{
"id": "1606.06031"
},
{
"id": "1604.04315"
}
] |
1707.06203 | 49 | A(ot, at) = Σ_{t≤t'<t+k} γ^(t'−t) rt' + γ^k V (ot+k; θv) − V (ot; θv)
The value function V (ot; θv) is also computed as the output of a neural network with parameters θv. The input to the value function network was chosen to be the second-to-last layer of the policy network that computes π. The parameters θv are updated with ∇θv towards the bootstrapped k-step return:
g(θv) = −A(ot, at) ∇θv V (ot; θv)
In our numerical implementation, we express the above updates as gradients of a corresponding surrogate loss [56]. To this surrogate loss, we add an entropy regularizer of λent Σa π(a|ot; θ) log π(a|ot; θ) to encourage exploration, with λent = 10−2 throughout all experiments. Where applicable, we add a loss for policy distillation consisting of the cross-entropy between π and π̂:
ldist(π, π̂)(ot) = λdist Σa π(a|ot) log π̂(a|ot), | 1707.06203#49 | Imagination-Augmented Agents for Deep Reinforcement Learning | We introduce Imagination-Augmented Agents (I2As), a novel architecture for
deep reinforcement learning combining model-free and model-based aspects. In
contrast to most existing model-based reinforcement learning and planning
methods, which prescribe how a model should be used to arrive at a policy, I2As
learn to interpret predictions from a learned environment model to construct
implicit plans in arbitrary ways, by using the predictions as additional
context in deep policy networks. I2As show improved data efficiency,
performance, and robustness to model misspecification compared to several
baselines. | http://arxiv.org/pdf/1707.06203 | Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170719 | 20180214 | [
{
"id": "1707.03374"
},
{
"id": "1703.01250"
},
{
"id": "1511.09249"
},
{
"id": "1611.03673"
},
{
"id": "1610.03518"
},
{
"id": "1705.07177"
},
{
"id": "1603.08983"
},
{
"id": "1703.09260"
},
{
"id": "1611.05397"
},
{
"id": "1707.03497"
},
{
"id": "1511.07111"
},
{
"id": "1604.00289"
},
{
"id": "1612.08810"
}
] |
1707.06209 | 49 | Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. 2016. The LAMBADA dataset: Word prediction requiring a broad discourse context. arXiv preprint arXiv:1606.06031.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP). pages 1532–1543. http://www.aclweb.org/anthology/D14-1162.
Juan Pino and Maxine Eskenazi. 2009. Semi-automatic generation of cloze question distractors: effect of students' L1. In SLaTE. ISCA, pages 65–68.
Juan Pino, Michael Heilman, and Maxine Eskenazi. 2008. A Selection Strategy to Improve Cloze Question Quality. In Proceedings of the Workshop on Intelligent Tutoring Systems for Ill-Defined Domains, 9th International Conference on Intelligent Tutoring Systems. | 1707.06209#49 | Crowdsourcing Multiple Choice Science Questions | We present a novel method for obtaining high-quality, domain-targeted
multiple choice questions from crowd workers. Generating these questions can be
difficult without trading away originality, relevance or diversity in the
answer options. Our method addresses these problems by leveraging a large
corpus of domain-specific text and a small set of existing questions. It
produces model suggestions for document selection and answer distractor choice
which aid the human question generation process. With this method we have
assembled SciQ, a dataset of 13.7K multiple choice science exam questions
(Dataset available at http://allenai.org/data.html). We demonstrate that the
method produces in-domain questions by providing an analysis of this new
dataset and by showing that humans cannot distinguish the crowdsourced
questions from original questions. When using SciQ as additional training data
to existing questions, we observe accuracy improvements on real science exams. | http://arxiv.org/pdf/1707.06209 | Johannes Welbl, Nelson F. Liu, Matt Gardner | cs.HC, cs.AI, cs.CL, stat.ML | accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017 | null | cs.HC | 20170719 | 20170719 | [
{
"id": "1606.06031"
},
{
"id": "1604.04315"
}
] |
1707.06203 | 50 | ldist(Ï, ËÏ)(ot) = λdist Ï(a|ot) log ËÏ(a|ot),
with scaling parameter λdist. Here π̄ denotes that we do not backpropagate gradients of ldist with respect to the parameters of the rollout policy through the behavioral policy π. Finally, even though we pre-trained our environment models, in principle we could also learn them jointly with the I2A agent by adding an appropriate log-likelihood term of observations under the model. We will investigate this in future research. We optimize hyperparameters (learning rate and momentum of the RMSprop optimizer, gradient clipping parameter, distillation loss scaling λdist where applicable) separately for each agent (I2A and baselines).
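For concreteness, the k-step advantage and entropy terms above can be computed as follows (a minimal numpy sketch with made-up toy values, not the authors' implementation):

```python
import numpy as np

def k_step_advantage(rewards, v_t, v_tk, gamma):
    """A(o_t, a_t) = sum_{t<=t'<t+k} gamma^(t'-t) r_{t'} + gamma^k V(o_{t+k}) - V(o_t)."""
    k = len(rewards)
    bootstrapped = sum(gamma ** i * r for i, r in enumerate(rewards)) + gamma ** k * v_tk
    return bootstrapped - v_t

def entropy_term(pi, lam_ent=1e-2):
    """lam_ent * sum_a pi(a|o) log pi(a|o), the regularizer added to the surrogate loss."""
    return lam_ent * float(np.sum(pi * np.log(pi)))

adv = k_step_advantage([1.0, 0.0, 1.0], v_t=2.0, v_tk=4.0, gamma=0.5)  # -> -0.25
```

With gamma = 0.5 the bootstrapped return is 1 + 0 + 0.25 + 0.5 = 1.75, so subtracting V(o_t) = 2 gives an advantage of −0.25.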
# B Agent and model architecture details
We used rectified linear units (ReLUs) between all hidden layers of all our agents. For the environment models, we used leaky ReLUs with a slope of 0.01.
# B.1 Agents
# Standard model-free baseline agent
The standard model-free baseline agent, taken from [3], is a multi-layer convolutional neural network (CNN), taking the current observation ot as input, followed by a fully connected (FC) hidden layer.
| 1707.06203#50 | Imagination-Augmented Agents for Deep Reinforcement Learning | We introduce Imagination-Augmented Agents (I2As), a novel architecture for
deep reinforcement learning combining model-free and model-based aspects. In
contrast to most existing model-based reinforcement learning and planning
methods, which prescribe how a model should be used to arrive at a policy, I2As
learn to interpret predictions from a learned environment model to construct
implicit plans in arbitrary ways, by using the predictions as additional
context in deep policy networks. I2As show improved data efficiency,
performance, and robustness to model misspecification compared to several
baselines. | http://arxiv.org/pdf/1707.06203 | Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170719 | 20180214 | [
{
"id": "1707.03374"
},
{
"id": "1703.01250"
},
{
"id": "1511.09249"
},
{
"id": "1611.03673"
},
{
"id": "1610.03518"
},
{
"id": "1705.07177"
},
{
"id": "1603.08983"
},
{
"id": "1703.09260"
},
{
"id": "1611.05397"
},
{
"id": "1707.03497"
},
{
"id": "1511.07111"
},
{
"id": "1604.00289"
},
{
"id": "1612.08810"
}
] |
1707.06209 | 50 | P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Empirical Methods in Natural Language Processing (EMNLP).
Mrinmaya Sachan, Avinava Dubey, and Eric P. Xing. 2016. Science question answering using instructional materials. CoRR abs/1602.04375. http://arxiv.org/abs/1602.04375.
Keisuke Sakaguchi, Yuki Arase, and Mamoru Komachi. 2013. Discriminative approach to fill-in-the-blank quiz generation for language learners. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL 2013, 4-9 August 2013, Sofia, Bulgaria, Volume 2: Short Papers. pages 238–242. http://aclweb.org/anthology/P/P13/P13-2043.pdf.
Carissa Schoenick, Peter Clark, Oyvind Tafjord, Peter Turney, and Oren Etzioni. 2016. Moving beyond the Turing test with the Allen AI science challenge. arXiv preprint arXiv:1604.04315. | 1707.06209#50 | Crowdsourcing Multiple Choice Science Questions | We present a novel method for obtaining high-quality, domain-targeted
multiple choice questions from crowd workers. Generating these questions can be
difficult without trading away originality, relevance or diversity in the
answer options. Our method addresses these problems by leveraging a large
corpus of domain-specific text and a small set of existing questions. It
produces model suggestions for document selection and answer distractor choice
which aid the human question generation process. With this method we have
assembled SciQ, a dataset of 13.7K multiple choice science exam questions
(Dataset available at http://allenai.org/data.html). We demonstrate that the
method produces in-domain questions by providing an analysis of this new
dataset and by showing that humans cannot distinguish the crowdsourced
questions from original questions. When using SciQ as additional training data
to existing questions, we observe accuracy improvements on real science exams. | http://arxiv.org/pdf/1707.06209 | Johannes Welbl, Nelson F. Liu, Matt Gardner | cs.HC, cs.AI, cs.CL, stat.ML | accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017 | null | cs.HC | 20170719 | 20170719 | [
{
"id": "1606.06031"
},
{
"id": "1604.04315"
}
] |
1707.06203 | 51 | 1
This FC layer feeds into two heads: into an FC layer with one output per action computing the policy logits log π(at|ot; θ); and into another FC layer with a single output that computes the value function V (ot; θv). The sizes of the layers were chosen as follows:
• for MiniPacman: the CNN has two layers, both with 3x3 kernels, 16 output channels and strides 1 and 2; the following FC layer has 256 units
• for Sokoban: the CNN has three layers with kernel sizes 8x8, 4x4, 3x3, strides of 4, 2, 1 and numbers of output channels 32, 64, 64; the following FC layer has 512 units
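As a quick sanity check on these shapes, the spatial size after each Sokoban convolution can be computed with a small helper (this assumes convolutions without padding, which the text does not state):

```python
def conv_out(size, kernel, stride):
    # spatial output size of a convolution with no padding (an assumption here)
    return (size - kernel) // stride + 1

size = 80  # Sokoban observations are 80x80
for kernel, stride in [(8, 4), (4, 2), (3, 1)]:
    size = conv_out(size, kernel, stride)
print(size)  # 80 -> 19 -> 8 -> 6 under this padding assumption
```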
# I2A | 1707.06203#51 | Imagination-Augmented Agents for Deep Reinforcement Learning | We introduce Imagination-Augmented Agents (I2As), a novel architecture for
deep reinforcement learning combining model-free and model-based aspects. In
contrast to most existing model-based reinforcement learning and planning
methods, which prescribe how a model should be used to arrive at a policy, I2As
learn to interpret predictions from a learned environment model to construct
implicit plans in arbitrary ways, by using the predictions as additional
context in deep policy networks. I2As show improved data efficiency,
performance, and robustness to model misspecification compared to several
baselines. | http://arxiv.org/pdf/1707.06203 | Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170719 | 20180214 | [
{
"id": "1707.03374"
},
{
"id": "1703.01250"
},
{
"id": "1511.09249"
},
{
"id": "1611.03673"
},
{
"id": "1610.03518"
},
{
"id": "1705.07177"
},
{
"id": "1603.08983"
},
{
"id": "1703.09260"
},
{
"id": "1611.05397"
},
{
"id": "1707.03497"
},
{
"id": "1511.07111"
},
{
"id": "1604.00289"
},
{
"id": "1612.08810"
}
] |
1707.06209 | 51 | Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. CoRR abs/1611.01603. http://arxiv.org/abs/1611.01603.
Alessandro Sordoni, Phillip Bachman, and Yoshua Bengio. 2016. Iterative alternating neural attention for machine reading. CoRR abs/1606.02245. http://arxiv.org/abs/1606.02245.
and Seiichi Yamamoto. 2005. Measuring non-native speakers' proficiency of English by using a test with automatically-generated questions. In Proceedings of the Second Workshop on Building Educational Applications Using NLP. Association for Computational Linguistics, Stroudsburg, PA, USA, EdAppsNLP 05, pages 61–68. http://dl.acm.org/citation.cfm?id=1609829.1609839. | 1707.06209#51 | Crowdsourcing Multiple Choice Science Questions | We present a novel method for obtaining high-quality, domain-targeted
multiple choice questions from crowd workers. Generating these questions can be
difficult without trading away originality, relevance or diversity in the
answer options. Our method addresses these problems by leveraging a large
corpus of domain-specific text and a small set of existing questions. It
produces model suggestions for document selection and answer distractor choice
which aid the human question generation process. With this method we have
assembled SciQ, a dataset of 13.7K multiple choice science exam questions
(Dataset available at http://allenai.org/data.html). We demonstrate that the
method produces in-domain questions by providing an analysis of this new
dataset and by showing that humans cannot distinguish the crowdsourced
questions from original questions. When using SciQ as additional training data
to existing questions, we observe accuracy improvements on real science exams. | http://arxiv.org/pdf/1707.06209 | Johannes Welbl, Nelson F. Liu, Matt Gardner | cs.HC, cs.AI, cs.CL, stat.ML | accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017 | null | cs.HC | 20170719 | 20170719 | [
{
"id": "1606.06031"
},
{
"id": "1604.04315"
}
] |
1707.06203 | 52 | # I2A
The model-free path of the I2A consists of a CNN identical to that of the standard model-free baseline (without the FC layers). The rollout encoder processes each frame generated by the environment model with another identically sized CNN. The output of this CNN is then concatenated with the reward prediction (a single scalar broadcast into frame shape). This feature is the input to an LSTM with 512 (for Sokoban) or 256 (for MiniPacman) units. The same LSTM is used to process all 5 rollouts (one per action); the last outputs of the LSTM for all rollouts are concatenated into a single vector cia of length 2560 for Sokoban, and 1280 for MiniPacman. This vector is concatenated with the output cmf of the model-free CNN path and fed into the fully connected layers computing policy logits and value function as in the baseline agent described above.
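The concatenation of rollout encodings described here can be sketched with placeholder arrays (the LSTM outputs are replaced by zeros, and the model-free feature size of 512 is an illustrative assumption):

```python
import numpy as np

n_actions, lstm_dim = 5, 512                     # Sokoban: one rollout per action
rollout_codes = [np.zeros(lstm_dim) for _ in range(n_actions)]
c_ia = np.concatenate(rollout_codes)             # imagination code: 5 * 512 = 2560
c_mf = np.zeros(512)                             # stand-in for the model-free CNN features
joint = np.concatenate([c_ia, c_mf])             # input to the policy/value FC layers
print(c_ia.shape, joint.shape)                   # (2560,) (3072,)
```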
# Copy-model
The copy-model agent has the exact same architecture as the I2A, with the exception of the environment model being replaced by the identity function (constantly returns the input observation).
# B.2 Environment models | 1707.06203#52 | Imagination-Augmented Agents for Deep Reinforcement Learning | We introduce Imagination-Augmented Agents (I2As), a novel architecture for
deep reinforcement learning combining model-free and model-based aspects. In
contrast to most existing model-based reinforcement learning and planning
methods, which prescribe how a model should be used to arrive at a policy, I2As
learn to interpret predictions from a learned environment model to construct
implicit plans in arbitrary ways, by using the predictions as additional
context in deep policy networks. I2As show improved data efficiency,
performance, and robustness to model misspecification compared to several
baselines. | http://arxiv.org/pdf/1707.06203 | Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170719 | 20180214 | [
{
"id": "1707.03374"
},
{
"id": "1703.01250"
},
{
"id": "1511.09249"
},
{
"id": "1611.03673"
},
{
"id": "1610.03518"
},
{
"id": "1705.07177"
},
{
"id": "1603.08983"
},
{
"id": "1703.09260"
},
{
"id": "1611.05397"
},
{
"id": "1707.03497"
},
{
"id": "1511.07111"
},
{
"id": "1604.00289"
},
{
"id": "1612.08810"
}
] |
1707.06209 | 52 | Yi Yang, Scott Wen-tau Yih, and Chris Meek. 2015. WikiQA: A challenge dataset for open-domain question answering. ACL, Association for Computational Linguistics. https://www.microsoft.com/en-us/research/publication/wikiqa-a-challenge-dataset-for-open-domain-question-answering/.
Automatic generation of challenging distractors using context-sensitive inference. In Proceedings of the Ninth Workshop on Innovative Use of NLP for Building Educational Applications, BEA@ACL 2014, June 26, 2014, Baltimore, Maryland, USA. pages 143–148. http://aclweb.org/anthology/W/W14/W14-1817.pdf.
# A List of Study Books
The following is a list of the books we used as data source:
• OpenStax, Anatomy & Physiology. OpenStax. 25 April 2013 [9]
• OpenStax, Biology. OpenStax. May 20, 2013 [10]
• OpenStax, Chemistry. OpenStax. 11 March 2015 [11]
• OpenStax, College Physics. OpenStax. 21 June 2012 [12]
• OpenStax, Concepts of Biology. OpenStax. 25 April 2013 [13] | 1707.06209#52 | Crowdsourcing Multiple Choice Science Questions | We present a novel method for obtaining high-quality, domain-targeted
multiple choice questions from crowd workers. Generating these questions can be
difficult without trading away originality, relevance or diversity in the
answer options. Our method addresses these problems by leveraging a large
corpus of domain-specific text and a small set of existing questions. It
produces model suggestions for document selection and answer distractor choice
which aid the human question generation process. With this method we have
assembled SciQ, a dataset of 13.7K multiple choice science exam questions
(Dataset available at http://allenai.org/data.html). We demonstrate that the
method produces in-domain questions by providing an analysis of this new
dataset and by showing that humans cannot distinguish the crowdsourced
questions from original questions. When using SciQ as additional training data
to existing questions, we observe accuracy improvements on real science exams. | http://arxiv.org/pdf/1707.06209 | Johannes Welbl, Nelson F. Liu, Matt Gardner | cs.HC, cs.AI, cs.CL, stat.ML | accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017 | null | cs.HC | 20170719 | 20170719 | [
{
"id": "1606.06031"
},
{
"id": "1604.04315"
}
] |
1707.06203 | 53 | # B.2 Environment models
For the I2A, we pre-train separate auto-regressive models of order 1 for the raw pixel observations of the MiniPacman and Sokoban environments (see figures 7 and 8). In both cases, the input to the model consisted of the last observation ot and a broadcast, one-hot representation of the last action at. Following previous studies, the outputs of the models were trained to predict the next frame ot+1 by stochastic gradient descent on the Bernoulli cross-entropy between network outputs and data ot+1.
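The per-pixel Bernoulli cross-entropy used as the training objective can be written as follows (a sketch; the numerical clipping is our own addition, not part of the paper):

```python
import numpy as np

def bernoulli_xent(pred, target, eps=1e-7):
    """Mean pixel-wise Bernoulli cross-entropy between predicted probabilities
    and the (binary-valued) next frame."""
    pred = np.clip(pred, eps, 1.0 - eps)          # avoid log(0)
    return -np.mean(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))

frame = np.zeros((15, 19, 3))                     # MiniPacman-sized dummy target
loss = bernoulli_xent(np.full(frame.shape, 0.5), frame)  # log(2) for a maximally uncertain model
```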
The Sokoban model is a simplified case of the MiniPacman model; the Sokoban model is nearly entirely local (save for the reward model), while the MiniPacman model needs to deal with nonlocal interaction (movement of ghosts is affected by position of Pacman, which can be arbitrarily far from the ghosts).
# MiniPacman model | 1707.06203#53 | Imagination-Augmented Agents for Deep Reinforcement Learning | We introduce Imagination-Augmented Agents (I2As), a novel architecture for
deep reinforcement learning combining model-free and model-based aspects. In
contrast to most existing model-based reinforcement learning and planning
methods, which prescribe how a model should be used to arrive at a policy, I2As
learn to interpret predictions from a learned environment model to construct
implicit plans in arbitrary ways, by using the predictions as additional
context in deep policy networks. I2As show improved data efficiency,
performance, and robustness to model misspecification compared to several
baselines. | http://arxiv.org/pdf/1707.06203 | Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170719 | 20180214 | [
{
"id": "1707.03374"
},
{
"id": "1703.01250"
},
{
"id": "1511.09249"
},
{
"id": "1611.03673"
},
{
"id": "1610.03518"
},
{
"id": "1705.07177"
},
{
"id": "1603.08983"
},
{
"id": "1703.09260"
},
{
"id": "1611.05397"
},
{
"id": "1707.03497"
},
{
"id": "1511.07111"
},
{
"id": "1604.00289"
},
{
"id": "1612.08810"
}
] |
1707.06209 | 53 | • OpenStax, College Physics. OpenStax. 21 June 2012 [12]
• OpenStax, Concepts of Biology. OpenStax. 25 April 2013 [13]
2.0 by Michael Klymkowsky, University of Colorado & Melanie Cooper, Michigan State University [14]
• Earth Systems, An Earth Science Course on www.curriki.org [15]
• General Chemistry, Principles, Patterns, and Applications by Bruce Averill, Strategic Energy Security Solutions and Patricia Eldredge, R.H. Hand, LLC; Saylor Foundation [16]
• General Biology; Paul Doerder, Cleveland State University & Ralph Gibson, Cleveland State University [17]
[9] Download for free at http://cnx.org/content/col11496/latest/
[10] Download for free at http://cnx.org/content/col11448/latest/
[11] Download for free at http://cnx.org/content/col11760/latest/
[12] Download for free at http://cnx.org/content/col11406/latest
[13] Download for free at http://cnx.org/content/col11487/latest
[14] https://open.umn.edu/opentextbooks/BookDetail.aspx?bookId=350 | 1707.06209#53 | Crowdsourcing Multiple Choice Science Questions | We present a novel method for obtaining high-quality, domain-targeted
multiple choice questions from crowd workers. Generating these questions can be
difficult without trading away originality, relevance or diversity in the
answer options. Our method addresses these problems by leveraging a large
corpus of domain-specific text and a small set of existing questions. It
produces model suggestions for document selection and answer distractor choice
which aid the human question generation process. With this method we have
assembled SciQ, a dataset of 13.7K multiple choice science exam questions
(Dataset available at http://allenai.org/data.html). We demonstrate that the
method produces in-domain questions by providing an analysis of this new
dataset and by showing that humans cannot distinguish the crowdsourced
questions from original questions. When using SciQ as additional training data
to existing questions, we observe accuracy improvements on real science exams. | http://arxiv.org/pdf/1707.06209 | Johannes Welbl, Nelson F. Liu, Matt Gardner | cs.HC, cs.AI, cs.CL, stat.ML | accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017 | null | cs.HC | 20170719 | 20170719 | [
{
"id": "1606.06031"
},
{
"id": "1604.04315"
}
] |
1707.06203 | 54 | # MiniPacman model
The input and output frames were of size 15 x 19 x 3 (width x height x RGB). The model is depicted in figure 7. It consisted of a size-preserving, multi-scale CNN architecture with additional fully connected layers for reward prediction. In order to capture long-range dependencies across pixels, we also make use of a layer we call pool-and-inject, which applies global max-pooling over each feature map and broadcasts the resulting values as feature maps of the same size and concatenates the result to the input. Pool-and-inject layers are therefore size-preserving layers which communicate the max-value of each layer globally to the next convolutional layer.
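A pool-and-inject layer as described can be sketched in a few lines of numpy (channels-last layout is an assumption on our part):

```python
import numpy as np

def pool_and_inject(x):
    """x: (H, W, C). Globally max-pool each feature map, broadcast the C maxima
    back to H x W, and concatenate to the input, giving (H, W, 2C)."""
    pooled = x.max(axis=(0, 1))                  # (C,) per-channel maxima
    tiled = np.broadcast_to(pooled, x.shape)     # (H, W, C), the "injected" maps
    return np.concatenate([x, tiled], axis=-1)   # size-preserving in H and W

x = np.arange(15 * 19 * 3, dtype=float).reshape(15, 19, 3)
y = pool_and_inject(x)
print(y.shape)  # (15, 19, 6)
```

Every spatial position in the second half of the output channels now carries the global maximum of the corresponding input channel, which is what lets the next convolution see non-local information.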
# Sokoban model
The Sokoban model was chosen to be a residual CNN with an additional CNN / fully-connected MLP pathway for predicting rewards. The input of size 80x80x3 was first processed with convolutions with a large 8x8 kernel and a stride of 8. This reduced representation was further processed with two size-preserving CNN layers before outputting a predicted frame through an 8x8 convolutional layer.
| 1707.06203#54 | Imagination-Augmented Agents for Deep Reinforcement Learning | We introduce Imagination-Augmented Agents (I2As), a novel architecture for
deep reinforcement learning combining model-free and model-based aspects. In
contrast to most existing model-based reinforcement learning and planning
methods, which prescribe how a model should be used to arrive at a policy, I2As
learn to interpret predictions from a learned environment model to construct
implicit plans in arbitrary ways, by using the predictions as additional
context in deep policy networks. I2As show improved data efficiency,
performance, and robustness to model misspecification compared to several
baselines. | http://arxiv.org/pdf/1707.06203 | Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170719 | 20180214 | [
{
"id": "1707.03374"
},
{
"id": "1703.01250"
},
{
"id": "1511.09249"
},
{
"id": "1611.03673"
},
{
"id": "1610.03518"
},
{
"id": "1705.07177"
},
{
"id": "1603.08983"
},
{
"id": "1703.09260"
},
{
"id": "1611.05397"
},
{
"id": "1707.03497"
},
{
"id": "1511.07111"
},
{
"id": "1604.00289"
},
{
"id": "1612.08810"
}
] |
1707.06209 | 54 | [14] https://open.umn.edu/opentextbooks/BookDetail.aspx?bookId=350
[15] http://www.curriki.org/xwiki/bin/view/Group_CLRN-OpenSourceEarthScienceCourse/
[16] https://www.saylor.org/site/textbooks/General%20Chemistry%20Principles,%20Patterns,%20and%20Applications.pdf
[17] https://upload.wikimedia.org/wikipedia/commons/4/40/GeneralBiology.pdf
• Introductory Chemistry by David W. Ball, Cleveland State University. Saylor Foundation [18]
• The Basics of General, Organic, and Biological Chemistry by David Ball, Cleveland State University & John Hill, University of Wisconsin & Rhonda Scott, Southern Adventist University. Saylor Foundation [19]
4 Elementary-Level Science Test, by Joyce Thornton Barry and Kathleen Cahill [20]
• Campbell Biology: Concepts & Connections by Jane B. Reece, Martha R. Taylor, Eric J. Simon, Jean L. Dickey [21]
• CK-12 Peoples Physics Book Basic [22]
• CK-12 Biology Advanced Concepts [23]
• CK-12 Biology Concepts [24]
• CK-12 Biology [25]
• CK-12 Chemistry - Basic [26]
• CK-12 Chemistry Concepts – Intermediate [27]
• CK-12 Earth Science Concepts For Middle School [28] | 1707.06209#54 | Crowdsourcing Multiple Choice Science Questions | We present a novel method for obtaining high-quality, domain-targeted
multiple choice questions from crowd workers. Generating these questions can be
difficult without trading away originality, relevance or diversity in the
answer options. Our method addresses these problems by leveraging a large
corpus of domain-specific text and a small set of existing questions. It
produces model suggestions for document selection and answer distractor choice
which aid the human question generation process. With this method we have
assembled SciQ, a dataset of 13.7K multiple choice science exam questions
(Dataset available at http://allenai.org/data.html). We demonstrate that the
method produces in-domain questions by providing an analysis of this new
dataset and by showing that humans cannot distinguish the crowdsourced
questions from original questions. When using SciQ as additional training data
to existing questions, we observe accuracy improvements on real science exams. | http://arxiv.org/pdf/1707.06209 | Johannes Welbl, Nelson F. Liu, Matt Gardner | cs.HC, cs.AI, cs.CL, stat.ML | accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017 | null | cs.HC | 20170719 | 20170719 | [
{
"id": "1606.06031"
},
{
"id": "1604.04315"
}
] |
1707.06203 | 55 | 2
Figure 7: The MiniPacman environment model. The overview is given in the right panel with blow-ups of the basic convolutional building block (middle panel) and the pool-and-inject layer (left panel). The basic building block has three hyperparameters n1, n2, n3 determining the number of channels in the convolutions; their numeric values are given in the right panel.
Figure 8: The Sokoban environment model.
# C MiniPacman additional details | 1707.06203#55 | Imagination-Augmented Agents for Deep Reinforcement Learning | We introduce Imagination-Augmented Agents (I2As), a novel architecture for
deep reinforcement learning combining model-free and model-based aspects. In
contrast to most existing model-based reinforcement learning and planning
methods, which prescribe how a model should be used to arrive at a policy, I2As
learn to interpret predictions from a learned environment model to construct
implicit plans in arbitrary ways, by using the predictions as additional
context in deep policy networks. I2As show improved data efficiency,
performance, and robustness to model misspecification compared to several
baselines. | http://arxiv.org/pdf/1707.06203 | Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170719 | 20180214 | [
{
"id": "1707.03374"
},
{
"id": "1703.01250"
},
{
"id": "1511.09249"
},
{
"id": "1611.03673"
},
{
"id": "1610.03518"
},
{
"id": "1705.07177"
},
{
"id": "1603.08983"
},
{
"id": "1703.09260"
},
{
"id": "1611.05397"
},
{
"id": "1707.03497"
},
{
"id": "1511.07111"
},
{
"id": "1604.00289"
},
{
"id": "1612.08810"
}
] |
1707.06209 | 55 | • CK-12 Biology [25]
• CK-12 Chemistry - Basic [26]
• CK-12 Chemistry Concepts – Intermediate [27]
• CK-12 Earth Science Concepts For Middle School [28]
• CK-12 Earth Science Concepts For High School [29]
[18] https://www.saylor.org/site/textbooks/Introductory%20Chemistry.pdf
[19] http://web.archive.org/web/20131024125808/http://www.saylor.org/site/textbooks/The%20Basics%20of%20General,%20Organic%20and%20Biological%20Chemistry.pdf
[20] We do not include documents from this resource in the dataset.
[21] We do not include documents from this resource in the dataset. | 1707.06209#55 | Crowdsourcing Multiple Choice Science Questions | We present a novel method for obtaining high-quality, domain-targeted
multiple choice questions from crowd workers. Generating these questions can be
difficult without trading away originality, relevance or diversity in the
answer options. Our method addresses these problems by leveraging a large
corpus of domain-specific text and a small set of existing questions. It
produces model suggestions for document selection and answer distractor choice
which aid the human question generation process. With this method we have
assembled SciQ, a dataset of 13.7K multiple choice science exam questions
(Dataset available at http://allenai.org/data.html). We demonstrate that the
method produces in-domain questions by providing an analysis of this new
dataset and by showing that humans cannot distinguish the crowdsourced
questions from original questions. When using SciQ as additional training data
to existing questions, we observe accuracy improvements on real science exams. | http://arxiv.org/pdf/1707.06209 | Johannes Welbl, Nelson F. Liu, Matt Gardner | cs.HC, cs.AI, cs.CL, stat.ML | accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017 | null | cs.HC | 20170719 | 20170719 | [
{
"id": "1606.06031"
},
{
"id": "1604.04315"
}
] |
1707.06203 | 56 | Figure 8: The sokoban environment model.
# C MiniPacman additional details
MiniPacman is played in a 15 × 19 grid-world. Characters, the ghosts and Pacman, move through a maze. Wall positions are fixed. At the start of each level, 2 power pills, a number of ghosts, and Pacman are placed at random in the world. Food is found on every square of the maze. The number of ghosts on level k is 1 + ⌊(k − 1)/2⌋.
# Game dynamics
Ghosts always move by one square at each time step. Pacman usually moves by one square, except after eating a power pill, which makes it move by two squares at a time. When moving by two squares, if Pacman's new position ends up inside a wall, it is moved back by one square so that it returns to a corridor.
We say that Pacman and a ghost meet when they either end up at the same location, or when their path crosses (even if they do not end up at the same location). When Pacman moves to a square with food or a power pill, it eats it. Eating a power pill gives Pacman super powers, such as moving at
deep reinforcement learning combining model-free and model-based aspects. In
contrast to most existing model-based reinforcement learning and planning
methods, which prescribe how a model should be used to arrive at a policy, I2As
learn to interpret predictions from a learned environment model to construct
implicit plans in arbitrary ways, by using the predictions as additional
context in deep policy networks. I2As show improved data efficiency,
performance, and robustness to model misspecification compared to several
baselines. | http://arxiv.org/pdf/1707.06203 | Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170719 | 20180214 | [
{
"id": "1707.03374"
},
{
"id": "1703.01250"
},
{
"id": "1511.09249"
},
{
"id": "1611.03673"
},
{
"id": "1610.03518"
},
{
"id": "1705.07177"
},
{
"id": "1603.08983"
},
{
"id": "1703.09260"
},
{
"id": "1611.05397"
},
{
"id": "1707.03497"
},
{
"id": "1511.07111"
},
{
"id": "1604.00289"
},
{
"id": "1612.08810"
}
] |
1707.06209 | 56 | 20We do not include documents from this resource in the dataset.
21We do not include documents from this resource in the dataset.
22 http://www.ck12.org/book/Peoples-Physics-Book-Basic/
23 http://www.ck12.org/book/CK-12-Biology-Advanced-Concepts/
24 http://www.ck12.org/book/CK-12-Biology-Concepts/
25 http://www.ck12.org/book/CK-12-Biology/
26 http://www.ck12.org/book/CK-12-Chemistry-Basic/
27 http://www.ck12.org/book/CK-12-Chemistry-Concepts-Intermediate/
28 http://www.ck12.org/book/CK-12-Earth-Science-Concepts-For-Middle-School/
29 http://www.ck12.org/book/CK-12-Earth-Science-Concepts-For-High-School/
• CK-12 Earth Science For Middle School 30
• CK-12 Life Science Concepts For Middle School 31
• CK-12 Life Science For Middle School 32
• CK-12 Physical Science Concepts For Middle School 33
• CK-12 Physical Science For Middle School 34
• CK-12 Physics Concepts - Intermediate 35
• CK-12 People's Physics Concepts 36
multiple choice questions from crowd workers. Generating these questions can be
difficult without trading away originality, relevance or diversity in the
answer options. Our method addresses these problems by leveraging a large
corpus of domain-specific text and a small set of existing questions. It
produces model suggestions for document selection and answer distractor choice
which aid the human question generation process. With this method we have
assembled SciQ, a dataset of 13.7K multiple choice science exam questions
(Dataset available at http://allenai.org/data.html). We demonstrate that the
method produces in-domain questions by providing an analysis of this new
dataset and by showing that humans cannot distinguish the crowdsourced
questions from original questions. When using SciQ as additional training data
to existing questions, we observe accuracy improvements on real science exams. | http://arxiv.org/pdf/1707.06209 | Johannes Welbl, Nelson F. Liu, Matt Gardner | cs.HC, cs.AI, cs.CL, stat.ML | accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017 | null | cs.HC | 20170719 | 20170719 | [
{
"id": "1606.06031"
},
{
"id": "1604.04315"
}
] |
1707.06203 | 57 |
double speed and being able to eat ghosts. The effects of eating a power pill last for 19 time steps. When Pacman meets a ghost, either Pacman is eaten by the ghost or, if Pacman has recently eaten a power pill, the ghost is eaten by Pacman.
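The meeting rule above (same square, or paths that cross during a step) can be written as a small predicate. This is an illustrative sketch for the one-square-per-step case only; the function and argument names are ours, not from the paper, and the two-square moves after a power pill would need a slightly more general check:

```python
def characters_meet(pacman_old, pacman_new, ghost_old, ghost_new):
    """True if Pacman and a ghost end up on the same square, or if they
    swap squares in one time step (i.e. their paths cross)."""
    same_square = pacman_new == ghost_new
    crossed = pacman_new == ghost_old and ghost_new == pacman_old
    return same_square or crossed
```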
If Pacman has eaten a power pill, ghosts try to flee from Pacman; otherwise they try to chase Pacman. A more precise algorithm for the movement of a ghost is given below in pseudocode:
# Algorithm 1 move ghost | 1707.06203#57 | Imagination-Augmented Agents for Deep Reinforcement Learning | We introduce Imagination-Augmented Agents (I2As), a novel architecture for
deep reinforcement learning combining model-free and model-based aspects. In
contrast to most existing model-based reinforcement learning and planning
methods, which prescribe how a model should be used to arrive at a policy, I2As
learn to interpret predictions from a learned environment model to construct
implicit plans in arbitrary ways, by using the predictions as additional
context in deep policy networks. I2As show improved data efficiency,
performance, and robustness to model misspecification compared to several
baselines. | http://arxiv.org/pdf/1707.06203 | Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170719 | 20180214 | [
{
"id": "1707.03374"
},
{
"id": "1703.01250"
},
{
"id": "1511.09249"
},
{
"id": "1611.03673"
},
{
"id": "1610.03518"
},
{
"id": "1705.07177"
},
{
"id": "1603.08983"
},
{
"id": "1703.09260"
},
{
"id": "1611.05397"
},
{
"id": "1707.03497"
},
{
"id": "1511.07111"
},
{
"id": "1604.00289"
},
{
"id": "1612.08810"
}
] |
1707.06209 | 57 | • CK-12 Physical Science For Middle School 34
• CK-12 Physics Concepts - Intermediate 35
• CK-12 People's Physics Concepts 36
through correspondence with the authors of Onishi et al. (2016)) and use the hyperparameters reported in the original paper (Kadlec et al., 2016) for the rest. For the GA Reader, we use three gated-attention layers with the multiplicative gating mechanism. We do not use the character-level embedding features or the question-evidence common word features, but we do follow their work by using pretrained 100-dimension GloVe vectors to initialize a fixed word embedding layer. Between each gated attention layer, we apply dropout with a rate of 0.3. The other hyperparameters are the same as their original work (Dhingra et al., 2016). Direct Answer Reading Comprehension. We implemented the Bidirectional Attention Flow model exactly as described in Seo et al. (2016) and adopted the hyperparameters used in the paper.
CK-12 books were obtained under the Creative Commons Attribution-Non-Commercial 3.0 Unported (CC BY-NC 3.0) License 37.
# B Training and Implementation Details | 1707.06209#57 | Crowdsourcing Multiple Choice Science Questions | We present a novel method for obtaining high-quality, domain-targeted
multiple choice questions from crowd workers. Generating these questions can be
difficult without trading away originality, relevance or diversity in the
answer options. Our method addresses these problems by leveraging a large
corpus of domain-specific text and a small set of existing questions. It
produces model suggestions for document selection and answer distractor choice
which aid the human question generation process. With this method we have
assembled SciQ, a dataset of 13.7K multiple choice science exam questions
(Dataset available at http://allenai.org/data.html). We demonstrate that the
method produces in-domain questions by providing an analysis of this new
dataset and by showing that humans cannot distinguish the crowdsourced
questions from original questions. When using SciQ as additional training data
to existing questions, we observe accuracy improvements on real science exams. | http://arxiv.org/pdf/1707.06209 | Johannes Welbl, Nelson F. Liu, Matt Gardner | cs.HC, cs.AI, cs.CL, stat.ML | accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017 | null | cs.HC | 20170719 | 20170719 | [
{
"id": "1606.06031"
},
{
"id": "1604.04315"
}
] |
function MOVEGHOST(Ghost)                         ▷ Ghost object: contains position and some helper methods
    PossibleDirections ← [DOWN, LEFT, RIGHT, UP]
    CurrentDirection ← Ghost.current_direction
    AllowedDirections ← []
    for dir in PossibleDirections do
        if Ghost.can_move(dir) then AllowedDirections += [dir]
    if len(AllowedDirections) == 2 then           ▷ We are in a straight corridor, or at a bend
        if Ghost.current_direction in AllowedDirections then
            return Ghost.current_direction
        if opposite(Ghost.current_direction) == AllowedDirections[0] then
            return AllowedDirections[1]
        return AllowedDirections[0]
    else                                          ▷ We are at an intersection
        if opposite(Ghost.current_direction) in AllowedDirections then
            AllowedDirections.remove(opposite(Ghost.current_direction))   ▷ Ghosts do not turn around
        X ← normalise(Pacman.position − Ghost.position)
        DotProducts ← []
        for dir in AllowedDirections do
            DotProducts += [dot_product(X, dir)]
        if Pacman.ate_super_pill then
            return AllowedDirections[argmin(DotProducts)]                 ▷ Away from Pacman
        else
            return AllowedDirections[argmax(DotProducts)]                 ▷ Towards Pacman
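The pseudocode translates directly into Python. The sketch below is illustrative: the direction encoding, the `can_move` callback, and the helper names are our assumptions, not from the paper. The `normalise` step can be dropped, since rescaling the vector towards Pacman by a positive constant does not change the argmax/argmin:

```python
def opposite(d):
    """Opposite of a unit grid direction."""
    return (-d[0], -d[1])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def move_ghost(ghost_pos, ghost_dir, pacman_pos, can_move, pacman_has_pill):
    """Pick the ghost's next direction following Algorithm 1.
    `can_move(pos, d)` says whether the square pos + d is free of walls."""
    DIRS = [(0, 1), (-1, 0), (1, 0), (0, -1)]  # DOWN, LEFT, RIGHT, UP
    allowed = [d for d in DIRS if can_move(ghost_pos, d)]
    if len(allowed) == 2:  # straight corridor or a bend
        if ghost_dir in allowed:
            return ghost_dir
        return allowed[1] if opposite(ghost_dir) == allowed[0] else allowed[0]
    # at an intersection: ghosts do not turn around
    if opposite(ghost_dir) in allowed:
        allowed.remove(opposite(ghost_dir))
    to_pacman = (pacman_pos[0] - ghost_pos[0], pacman_pos[1] - ghost_pos[1])
    scores = [dot(to_pacman, d) for d in allowed]
    if pacman_has_pill:
        return allowed[scores.index(min(scores))]  # flee: away from Pacman
    return allowed[scores.index(max(scores))]      # chase: towards Pacman
```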
# Task collection | 1707.06203#58 | Imagination-Augmented Agents for Deep Reinforcement Learning | We introduce Imagination-Augmented Agents (I2As), a novel architecture for
deep reinforcement learning combining model-free and model-based aspects. In
contrast to most existing model-based reinforcement learning and planning
methods, which prescribe how a model should be used to arrive at a policy, I2As
learn to interpret predictions from a learned environment model to construct
implicit plans in arbitrary ways, by using the predictions as additional
context in deep policy networks. I2As show improved data efficiency,
performance, and robustness to model misspecification compared to several
baselines. | http://arxiv.org/pdf/1707.06203 | Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170719 | 20180214 | [
{
"id": "1707.03374"
},
{
"id": "1703.01250"
},
{
"id": "1511.09249"
},
{
"id": "1611.03673"
},
{
"id": "1610.03518"
},
{
"id": "1705.07177"
},
{
"id": "1603.08983"
},
{
"id": "1703.09260"
},
{
"id": "1611.05397"
},
{
"id": "1707.03497"
},
{
"id": "1511.07111"
},
{
"id": "1604.00289"
},
{
"id": "1612.08810"
}
] |
1707.06209 | 58 | CK-12 books were obtained under the Creative Commons Attribution-Non-Commercial 3.0 Unported (CC BY-NC 3.0) License 37.
# B Training and Implementation Details
Multiple Choice Reading Comprehension. During training of the AS Reader and GA Reader, we monitored model performance after each epoch and stopped training when the error on the validation set had increased (early stopping, with a patience of one). We set a hard limit of ten epochs, but most models reached their peak validation accuracy after the first or second epoch. Test set evaluation, when applicable, used model parameters at the epoch of their peak validation accuracy. We implemented the models in Keras, and ran them with the Theano backend on a Tesla K80 GPU.
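The early-stopping schedule described above (patience of one, hard limit of ten epochs, keeping the parameters from the best validation epoch) can be sketched as follows. `train_epoch` and `val_error` are hypothetical callables standing in for the actual Keras training loop, and "did not improve" is treated as "increased" here:

```python
def train_with_early_stopping(train_epoch, val_error, max_epochs=10, patience=1):
    """Train up to `max_epochs`; stop once validation error has failed to
    improve `patience` epochs in a row. Returns the best epoch index,
    i.e. the epoch whose parameters would be kept for test evaluation."""
    best_err, best_epoch, bad_epochs = float("inf"), -1, 0
    for epoch in range(max_epochs):
        train_epoch(epoch)
        err = val_error(epoch)
        if err < best_err:
            best_err, best_epoch, bad_epochs = err, epoch, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break  # early stop
    return best_epoch
```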
The hyperparameters for each of the models were adopted from previous work. For the AS Reader, we use an embedding dimension of 256 and GRU hidden layer dimension of 384 (obtained | 1707.06209#58 | Crowdsourcing Multiple Choice Science Questions | We present a novel method for obtaining high-quality, domain-targeted
multiple choice questions from crowd workers. Generating these questions can be
difficult without trading away originality, relevance or diversity in the
answer options. Our method addresses these problems by leveraging a large
corpus of domain-specific text and a small set of existing questions. It
produces model suggestions for document selection and answer distractor choice
which aid the human question generation process. With this method we have
assembled SciQ, a dataset of 13.7K multiple choice science exam questions
(Dataset available at http://allenai.org/data.html). We demonstrate that the
method produces in-domain questions by providing an analysis of this new
dataset and by showing that humans cannot distinguish the crowdsourced
questions from original questions. When using SciQ as additional training data
to existing questions, we observe accuracy improvements on real science exams. | http://arxiv.org/pdf/1707.06209 | Johannes Welbl, Nelson F. Liu, Matt Gardner | cs.HC, cs.AI, cs.CL, stat.ML | accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017 | null | cs.HC | 20170719 | 20170719 | [
{
"id": "1606.06031"
},
{
"id": "1604.04315"
}
] |
1707.06203 | 59 | # Task collection
We used 5 different tasks available in MiniPacman. They all share the same environment dynamics (layout of maze, movement of ghosts, . . . ), but vary in their reward structure and level termination. The rewards associated with various events for each tasks are given in the table below.
Task               Regular  Avoid  Hunt  Ambush  Rush
At each step         0       0.1    0      0      0
Eating food          1      -0.1    0     -0.1   -0.1
Eating power pill    2      -5      1      0     10
Eating ghost         5     -10     10     10      0
Killed by ghost      0     -20    -20    -20      0
When a level is cleared, a new level starts. Tasks also differ in the way a level was cleared.
• Regular: level is cleared when all the food is eaten;
• Avoid: level is cleared after 128 steps;
• Hunt: level is cleared when all ghosts are eaten or after 80 steps;
• Ambush: level is cleared when all ghosts are eaten or after 80 steps;
• Rush: level is cleared when all power pills are eaten.
deep reinforcement learning combining model-free and model-based aspects. In
contrast to most existing model-based reinforcement learning and planning
methods, which prescribe how a model should be used to arrive at a policy, I2As
learn to interpret predictions from a learned environment model to construct
implicit plans in arbitrary ways, by using the predictions as additional
context in deep policy networks. I2As show improved data efficiency,
performance, and robustness to model misspecification compared to several
baselines. | http://arxiv.org/pdf/1707.06203 | Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170719 | 20180214 | [
{
"id": "1707.03374"
},
{
"id": "1703.01250"
},
{
"id": "1511.09249"
},
{
"id": "1611.03673"
},
{
"id": "1610.03518"
},
{
"id": "1705.07177"
},
{
"id": "1603.08983"
},
{
"id": "1703.09260"
},
{
"id": "1611.05397"
},
{
"id": "1707.03497"
},
{
"id": "1511.07111"
},
{
"id": "1604.00289"
},
{
"id": "1612.08810"
}
] |
1707.06209 | 59 | The hyperparameters for each of the models were adopted from previous work. For the AS Reader, we use an embedding dimension of 256 and GRU hidden layer dimension of 384 (obtained
30 http://www.ck12.org/book/CK-12-Earth-Science-For-Middle-School/
31 http://www.ck12.org/book/CK-12-Life-Science-Concepts-For-Middle-School/
32 http://www.ck12.org/book/CK-12-Life-Science-For-Middle-School/
33 http://www.ck12.org/book/CK-12-Physical-Science-Concepts-For-Middle-School/
34 http://www.ck12.org/book/CK-12-Physical-Science-For-Middle-School/
35 http://www.ck12.org/book/CK-12-Physics-Concepts-Intermediate/
36 http://www.ck12.org/book/Peoples-Physics-Concepts/
37 http://creativecommons.org/licenses/by-nc/3.0/
multiple choice questions from crowd workers. Generating these questions can be
difficult without trading away originality, relevance or diversity in the
answer options. Our method addresses these problems by leveraging a large
corpus of domain-specific text and a small set of existing questions. It
produces model suggestions for document selection and answer distractor choice
which aid the human question generation process. With this method we have
assembled SciQ, a dataset of 13.7K multiple choice science exam questions
(Dataset available at http://allenai.org/data.html). We demonstrate that the
method produces in-domain questions by providing an analysis of this new
dataset and by showing that humans cannot distinguish the crowdsourced
questions from original questions. When using SciQ as additional training data
to existing questions, we observe accuracy improvements on real science exams. | http://arxiv.org/pdf/1707.06209 | Johannes Welbl, Nelson F. Liu, Matt Gardner | cs.HC, cs.AI, cs.CL, stat.ML | accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017 | null | cs.HC | 20170719 | 20170719 | [
{
"id": "1606.06031"
},
{
"id": "1604.04315"
}
] |
1707.06203 | 60 | Figure 9: The pink bar appears when Pacman eats a power pill, and it decreases in size over the duration of the effect of the pill.
There are no lives, and episode ends when Pacman is eaten by a ghost.
The time left before the effect of the power pill wears off is shown using a pink shrinking bar at the bottom of the screen as in Fig. 9.
Training curves
[Four panels: MiniPacman performance on "regular", "avoid", "hunt", and "ambush", each plotting score against environment steps (1e8) for the standard, copy-model, and I2A agents.]
Figure 10: Learning curves for different agents and various tasks
# D Sokoban additional details
# D.1 Sokoban environment | 1707.06203#60 | Imagination-Augmented Agents for Deep Reinforcement Learning | We introduce Imagination-Augmented Agents (I2As), a novel architecture for
deep reinforcement learning combining model-free and model-based aspects. In
contrast to most existing model-based reinforcement learning and planning
methods, which prescribe how a model should be used to arrive at a policy, I2As
learn to interpret predictions from a learned environment model to construct
implicit plans in arbitrary ways, by using the predictions as additional
context in deep policy networks. I2As show improved data efficiency,
performance, and robustness to model misspecification compared to several
baselines. | http://arxiv.org/pdf/1707.06203 | Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170719 | 20180214 | [
{
"id": "1707.03374"
},
{
"id": "1703.01250"
},
{
"id": "1511.09249"
},
{
"id": "1611.03673"
},
{
"id": "1610.03518"
},
{
"id": "1705.07177"
},
{
"id": "1603.08983"
},
{
"id": "1703.09260"
},
{
"id": "1611.05397"
},
{
"id": "1707.03497"
},
{
"id": "1511.07111"
},
{
"id": "1604.00289"
},
{
"id": "1612.08810"
}
] |
1707.06203 | 61 | Figure 10: Learning curves for different agents and various tasks
# D Sokoban additional details
# D.1 Sokoban environment
In the game of Sokoban, random action sequences solve levels only with vanishing probability, leading to extreme exploration issues when solving the problem with reinforcement learning. To alleviate this issue, we use a shaping reward scheme for our version of Sokoban:
• Every time step, a penalty of -0.1 is applied to the agent.
• Whenever the agent pushes a box onto a target, it receives a reward of +1.
• Whenever the agent pushes a box off a target, it receives a penalty of -1.
• Finishing the level gives the agent a reward of +10 and terminates the level.
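A minimal sketch of this shaping scheme as a per-step reward function, assuming the environment can count how many boxes sit on targets before and after the action (the function name and signature are illustrative; the 120-step episode cutoff described below would be handled by the surrounding episode loop):

```python
def sokoban_shaped_reward(boxes_on_target_before, boxes_on_target_after, num_targets):
    """Shaped reward for one Sokoban step: -0.1 step penalty, +1 / -1 for
    pushing a box onto / off a target, +10 for finishing the level.
    Returns (reward, done); at most one box changes status per action."""
    reward = -0.1  # per-step penalty: encourages finishing levels faster
    reward += 1.0 * max(0, boxes_on_target_after - boxes_on_target_before)
    reward -= 1.0 * max(0, boxes_on_target_before - boxes_on_target_after)
    done = boxes_on_target_after == num_targets
    if done:
        reward += 10.0  # level solved
    return reward, done
```

The symmetric +1/-1 pair removes the reward loop an agent could otherwise exploit by repeatedly pushing a box off and back onto a target.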
deep reinforcement learning combining model-free and model-based aspects. In
contrast to most existing model-based reinforcement learning and planning
methods, which prescribe how a model should be used to arrive at a policy, I2As
learn to interpret predictions from a learned environment model to construct
implicit plans in arbitrary ways, by using the predictions as additional
context in deep policy networks. I2As show improved data efficiency,
performance, and robustness to model misspecification compared to several
baselines. | http://arxiv.org/pdf/1707.06203 | Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170719 | 20180214 | [
{
"id": "1707.03374"
},
{
"id": "1703.01250"
},
{
"id": "1511.09249"
},
{
"id": "1611.03673"
},
{
"id": "1610.03518"
},
{
"id": "1705.07177"
},
{
"id": "1603.08983"
},
{
"id": "1703.09260"
},
{
"id": "1611.05397"
},
{
"id": "1707.03497"
},
{
"id": "1511.07111"
},
{
"id": "1604.00289"
},
{
"id": "1612.08810"
}
] |
1707.06203 | 62 | Whenever the agent pushes a box off target, it receives a penalty of -1.
Finishing the level gives the agent a reward of +10 and the level terminates.
The first reward is to encourage agents to finish levels faster, the second to encourage agents to push boxes onto targets, the third to avoid an artificial reward loop that would be induced by repeatedly pushing a box off and on target, the fourth to strongly reward solving a level. Levels are interrupted after 120 steps (i.e. the agent may bootstrap from a value estimate of the last frame, but the level resets to a new one). Identical levels are nearly never encountered during training or testing (out of 40 million levels generated, less than 0.7% were repeated). Note that with this reward scheme, it is always optimal to solve the level (thus our shaping scheme is valid). An alternative strategy would have been to have the agent play through a curriculum of increasingly difficult tasks; we expect both strategies to work similarly.
# D.2 Additional experiments | 1707.06203#62 | Imagination-Augmented Agents for Deep Reinforcement Learning | We introduce Imagination-Augmented Agents (I2As), a novel architecture for
deep reinforcement learning combining model-free and model-based aspects. In
contrast to most existing model-based reinforcement learning and planning
methods, which prescribe how a model should be used to arrive at a policy, I2As
learn to interpret predictions from a learned environment model to construct
implicit plans in arbitrary ways, by using the predictions as additional
context in deep policy networks. I2As show improved data efficiency,
performance, and robustness to model misspecification compared to several
baselines. | http://arxiv.org/pdf/1707.06203 | Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170719 | 20180214 | [
{
"id": "1707.03374"
},
{
"id": "1703.01250"
},
{
"id": "1511.09249"
},
{
"id": "1611.03673"
},
{
"id": "1610.03518"
},
{
"id": "1705.07177"
},
{
"id": "1603.08983"
},
{
"id": "1703.09260"
},
{
"id": "1611.05397"
},
{
"id": "1707.03497"
},
{
"id": "1511.07111"
},
{
"id": "1604.00289"
},
{
"id": "1612.08810"
}
] |
1707.06203 | 63 | # D.2 Additional experiments
Our first additional experiment compared I2A with and without reward prediction, trained over a longer horizon. I2A with reward prediction clearly converged shortly after 1e9 steps and we therefore interrupted training; however, I2A without reward prediction kept improving, and after 3e9 steps we recover a performance level of close to 80% of levels solved, see Fig. 11.
Figure 11: I2A with and without reward prediction, longer training horizon. | 1707.06203#63 | Imagination-Augmented Agents for Deep Reinforcement Learning | We introduce Imagination-Augmented Agents (I2As), a novel architecture for
deep reinforcement learning combining model-free and model-based aspects. In
contrast to most existing model-based reinforcement learning and planning
methods, which prescribe how a model should be used to arrive at a policy, I2As
learn to interpret predictions from a learned environment model to construct
implicit plans in arbitrary ways, by using the predictions as additional
context in deep policy networks. I2As show improved data efficiency,
performance, and robustness to model misspecification compared to several
baselines. | http://arxiv.org/pdf/1707.06203 | Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170719 | 20180214 | [
{
"id": "1707.03374"
},
{
"id": "1703.01250"
},
{
"id": "1511.09249"
},
{
"id": "1611.03673"
},
{
"id": "1610.03518"
},
{
"id": "1705.07177"
},
{
"id": "1603.08983"
},
{
"id": "1703.09260"
},
{
"id": "1611.05397"
},
{
"id": "1707.03497"
},
{
"id": "1511.07111"
},
{
"id": "1604.00289"
},
{
"id": "1612.08810"
}
] |
1707.06203 | 64 | Figure 11: I2A with and without reward prediction, longer training horizon.
Next, we investigated the I2A with Monte-Carlo search (using a near perfect environment model of Sokoban). We let the agent try to solve the levels up to 16 times within its internal model. The base I2A architecture was solving around 87% of levels; mental retries boosted its performance to around 95% of levels solved. Although the agent was allowed up to 16 mental retries, in practice all the performance increase was obtained within the first 10 mental retries. The exact percentage gained by each mental retry is shown in Fig. 12. Note that in Fig. 12, only 83% of the levels are solved on the first mental attempt, even though the I2A architecture could solve around 87% of levels. The gap is explained by the use of an environment model: although it looks nearly perfect to the naked eye, the model is not actually equivalent to the environment.
Figure 12: Gain in percentage by each additional mental retry using a near perfect environment model.
# D.3 Planning with the perfect model and Monte-Carlo Tree Search in Sokoban | 1707.06203#64 | Imagination-Augmented Agents for Deep Reinforcement Learning | We introduce Imagination-Augmented Agents (I2As), a novel architecture for
deep reinforcement learning combining model-free and model-based aspects. In
contrast to most existing model-based reinforcement learning and planning
methods, which prescribe how a model should be used to arrive at a policy, I2As
learn to interpret predictions from a learned environment model to construct
implicit plans in arbitrary ways, by using the predictions as additional
context in deep policy networks. I2As show improved data efficiency,
performance, and robustness to model misspecification compared to several
baselines. | http://arxiv.org/pdf/1707.06203 | Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170719 | 20180214 | [
{
"id": "1707.03374"
},
{
"id": "1703.01250"
},
{
"id": "1511.09249"
},
{
"id": "1611.03673"
},
{
"id": "1610.03518"
},
{
"id": "1705.07177"
},
{
"id": "1603.08983"
},
{
"id": "1703.09260"
},
{
"id": "1611.05397"
},
{
"id": "1707.03497"
},
{
"id": "1511.07111"
},
{
"id": "1604.00289"
},
{
"id": "1612.08810"
}
] |
1707.06203 | 65 | Figure 12: Gain in percentage by each additional mental retry using a near perfect environment model.
# D.3 Planning with the perfect model and Monte-Carlo Tree Search in Sokoban
We first trained a value network that estimates the value function of a trained model-free policy; to do this, we trained a model-free agent for 1e9 environment steps. This agent solved close to 60% of episodes. Using this agent, we generated 1e8 (frame, return) pairs, and trained the value network to predict the value (expected return) from the frame; training and test error were comparable, and we do not expect that increasing the number of training points would have significantly improved the quality of the value network.
deep reinforcement learning combining model-free and model-based aspects. In
contrast to most existing model-based reinforcement learning and planning
methods, which prescribe how a model should be used to arrive at a policy, I2As
learn to interpret predictions from a learned environment model to construct
implicit plans in arbitrary ways, by using the predictions as additional
context in deep policy networks. I2As show improved data efficiency,
performance, and robustness to model misspecification compared to several
baselines. | http://arxiv.org/pdf/1707.06203 | Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170719 | 20180214 | [
The value network architecture is a residual network which stacks one convolution layer and 3 convolution blocks, with a final fully-connected layer of 128 hidden units. The first convolution is a 1×1 convolution with 128 feature maps. Each of the three residual convolution blocks is composed of three convolutional layers: the first is a 1×1 convolution with 32 feature maps, the second a 3×3 convolution with 32 feature maps, and the last a 1×1 layer with 128 feature maps. To help the value networks, we trained them not on the pixel representation, but on a 10×10×4 symbolic representation.
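As a sanity check on the description above, the parameter counts it implies can be computed directly (a sketch; the bias terms are an assumption, the text does not state them):

```python
def conv_params(k, c_in, c_out):
    # parameters of a k x k convolution with bias
    return k * k * c_in * c_out + c_out

# first 1x1 convolution: 4 input planes (10x10x4 symbolic input) -> 128 maps
first = conv_params(1, 4, 128)

# one residual block: 1x1 (128 -> 32), 3x3 (32 -> 32), 1x1 (32 -> 128)
block = (conv_params(1, 128, 32)
         + conv_params(3, 32, 32)
         + conv_params(1, 32, 128))
# first == 640, block == 17600
```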
The trained value network is then employed during search to evaluate leaf nodes, similar to [12], replacing the role of traditional random rollouts in MCTS. The tree policy uses [57, 58] with a fine-tuned exploration constant of 1. Depth-wise transposition tables for the tree nodes are used to deal with the symmetries in the Sokoban environment. External actions are selected by taking the max Q value at the root node. The tree is reused between steps by selecting the appropriate subtree as the root node for the next step.
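The exploration constant points at a UCB-style tree policy; a generic UCT selection rule can be sketched as follows (an illustration, not necessarily the exact variant of [57, 58]):

```python
import math

def uct_score(total_value, visits, parent_visits, c=1.0):
    """Upper Confidence Bound for Trees: mean value plus exploration bonus."""
    if visits == 0:
        return float("inf")  # unvisited children are tried first
    return total_value / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(children):
    """children: list of (total_value, visits) pairs; returns the index to descend into."""
    parent_visits = sum(v for _, v in children) or 1
    scores = [uct_score(q, v, parent_visits) for q, v in children]
    return max(range(len(children)), key=scores.__getitem__)
```

At a leaf, the learned value network supplies the evaluation instead of a random rollout.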
Reported results are obtained by averaging the results over 250 episodes.
# D.4 Level Generation for Sokoban
The generation of a Sokoban level involves three steps: room topology generation, position configuration, and room reverse-playing.

Topology generation: Given an initial width×height room entirely constituted of wall blocks, topology generation consists in creating the "empty" spaces (i.e. corridors) where boxes, targets and the player can be placed. For this, a simple random walk algorithm with a configurable number of steps is applied: a random initial position and direction are chosen; afterwards, for every step, the position is updated and, with a probability p = 0.35, a new random direction is selected. Every "visited" position is emptied, together with a number of surrounding wall blocks, selected by randomly choosing one of several patterns indicating the adjacent room blocks to be removed (in the paper's diagrams, the darker square represents the reference position, that is, the position being visited). Note that the room "exterior" walls are never emptied, so from a width×height room only a (width-2)×(height-2) space can actually be converted into corridors. The random walk approach guarantees that all the positions in the room are, in principle, reachable by the player. A relatively small probability of changing the walk direction favours the generation of longer corridors, while the application of a random pattern favours slightly more convoluted spaces.
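The random-walk carving described above can be sketched as follows (the removal patterns appear only as diagrams in the paper, so a simple cross-shaped pattern is assumed here):

```python
import random

DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]
CROSS = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]  # assumed removal pattern

def carve_topology(width, height, steps, p_turn=0.35, seed=0):
    rng = random.Random(seed)
    wall = [[True] * width for _ in range(height)]  # True = wall block
    x, y = rng.randrange(1, width - 1), rng.randrange(1, height - 1)
    dx, dy = rng.choice(DIRS)
    for _ in range(steps):
        for ox, oy in CROSS:
            nx, ny = x + ox, y + oy
            # the exterior walls are never emptied
            if 1 <= nx < width - 1 and 1 <= ny < height - 1:
                wall[ny][nx] = False
        if rng.random() < p_turn:  # change walk direction with p = 0.35
            dx, dy = rng.choice(DIRS)
        x = min(max(x + dx, 1), width - 2)
        y = min(max(y + dy, 1), height - 2)
    return wall

room = carve_topology(10, 10, steps=30)
```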
Position configuration: Once a room topology is generated, the target locations for the desired N boxes and the player's initial position are randomly selected. There is the obvious prerequisite of having enough empty spaces in the room to place the targets and the player, but no other constraints are imposed in this step.
Reverse playing: Once the topology and target/player positions are generated, the room is reverse-played. In this case, on each step, the player has eight possible actions to choose from: simply moving, or moving and pulling a box, in each possible direction (assuming, for the latter, that there is a box adjacent to the player position).
Initially the room is configured with the boxes placed over their corresponding targets. From that position, a depth-first search (with a configurable maximum depth) is carried out over the space of possible moves, "expanding" each reached player/boxes position by iteratively applying all the possible actions (which are randomly permuted on each step). An entire tree is not explored, as different combinations of actions leading to repeated boxes/player configurations are skipped.
Statistics are collected for each boxes/player configuration, which is, in turn, scored with a simple heuristic:
RoomScore = BoxSwaps × Σ_i BoxDisplacement_i
where BoxSwaps represents the number of occasions on which the player stopped pulling a given box and started pulling a different one, while BoxDisplacement_i represents the Manhattan distance between the initial and final position of a given box. Whenever a box or the player is placed on top of one of the targets, the RoomScore value is set to 0. While this scoring heuristic doesn't guarantee the complexity of the generated rooms, it aims to (a) favour room configurations where, overall, the boxes are further away from their original positions, and (b) increase the probability that a room requires a more convoluted combination of box moves to get to a solution (by aiming for solutions with higher BoxSwaps values). This scoring mechanism has empirically proved to generate levels with a balanced mix of difficulties.
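Written out, the heuristic is straightforward (a sketch; box positions are assumed to be (row, col) pairs):

```python
def room_score(box_swaps, initial_boxes, final_boxes, entity_on_target):
    """RoomScore = BoxSwaps * sum_i BoxDisplacement_i, zeroed when any box
    or the player ends up on a target square."""
    if entity_on_target:
        return 0
    displacement = sum(abs(r0 - r1) + abs(c0 - c1)  # Manhattan distance
                       for (r0, c0), (r1, c1) in zip(initial_boxes, final_boxes))
    return box_swaps * displacement
```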
The reverse playing ends when there are no more available positions to explore or when a predefined maximum number of possible room configurations is reached. The room with the highest RoomScore is then returned.
# Default parameters:
• A maximum of 10 room topologies and, for each of those, 10 boxes/player positionings are retried in case a given combination doesn't produce rooms with a score > 0.
• The room configuration tree is by default limited to a maximum depth of 300 applied actions.
• The total number of visited positions is by default limited to 1,000,000.
• Default random-walk steps: 1.5 × (room width + room height).
arXiv:1707.05589v2 [cs.CL] 20 Nov 2017
Under review as a conference paper at ICLR 2018
# ON THE STATE OF THE ART OF EVALUATION IN NEURAL LANGUAGE MODELS
# Gábor Melis†, Chris Dyer†, Phil Blunsom†‡ {melisgl,cdyer,pblunsom}@google.com †DeepMind ‡University of Oxford
# ABSTRACT
Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing codebases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset.
# INTRODUCTION
The scientific process by which the deep learning research community operates is guided by empirical studies that evaluate the relative quality of models. Complicating matters, the measured performance of a model depends not only on its architecture (and data); it can also depend strongly on hyperparameter values that affect learning, regularisation, and capacity. This hyperparameter dependence is an often inadequately controlled source of variation in experiments, which creates a risk that empirically unsound claims will be reported.
In this paper, we use a black-box hyperparameter optimisation technique to control for hyperparameter effects while comparing the relative performance of language modelling architectures based on LSTMs, Recurrent Highway Networks (Zilly et al., 2016) and NAS (Zoph & Le, 2016). We specify flexible, parameterised model families with the ability to adjust embedding and recurrent cell sizes for a given parameter budget, and with fine-grained control over regularisation and learning hyperparameters.
Once hyperparameters have been properly controlled for, we find that LSTMs outperform the more recent models, contra the published claims. Our result is therefore a demonstration that replication failures can happen due to poorly controlled hyperparameter variation, and this paper joins other recent papers in warning of the under-acknowledged existence of replication failure in deep learning (Henderson et al., 2017; Reimers & Gurevych, 2017). However, we do show that careful controls are possible, albeit at considerable computational cost.
Several remarks can be made in light of these results. First, as (conditional) language models serve as the central building block of many tasks, including machine translation, there is little reason to expect that the problem of unreliable evaluation is unique to the tasks discussed here. However, in machine translation, carefully controlling for hyperparameter effects would be substantially more expensive because standard datasets are much larger. Second, the research community should strive for more consensus about appropriate experimental methodology that balances the costs of careful experimentation with the risks associated with false claims. Finally, more attention should be paid to hyperparameter sensitivity. Models that introduce many new hyperparameters, or which perform well only in narrow ranges of hyperparameter settings, should be identified as such as part of standard publication practice.
(a) two-layer LSTM/NAS with skip connections
(b) RHN with two processing steps per input
Figure 1: Recurrent networks with optional down-projection, per-step and per-sequence dropout (dashed and solid lines).
# 2 MODELS
Our focus is on three recurrent architectures:
• The Long Short-Term Memory (Hochreiter & Schmidhuber, 1997) serves as a well-known and frequently used baseline.
• The recently proposed Recurrent Highway Network (Zilly et al., 2016) is chosen because it has demonstrated state-of-the-art performance on a number of datasets.
• Finally, we also include NAS (Zoph & Le, 2016), because of its impressive performance and because its architecture was the result of an automated reinforcement learning based optimisation process.
Our aim is strictly to do better model comparisons for these architectures, and we thus refrain from including techniques that are known to push perplexities even lower, but which are believed to be largely orthogonal to the question of the relative merits of these recurrent cells. In parallel work with a remarkable overlap with ours, Merity et al. (2017) demonstrate the utility of adding a Neural Cache (Grave et al., 2016). Building on their work, Krause et al. (2017) show that Dynamic Evaluation (Graves, 2013) contributes similarly to the final perplexity.
As pictured in Fig. 1a, our models with LSTM or NAS cells have all the standard components: an input embedding lookup table, and recurrent cells stacked as layers with additive skip connections combining the outputs of all layers to ease optimisation. There is an optional down-projection, whose presence is governed by a hyperparameter, from this combined output to a smaller space, which reduces the number of output embedding parameters. Unless otherwise noted, input and output embeddings are shared; see (Inan et al., 2016) and (Press & Wolf, 2016).
Dropout is applied to feedforward connections, denoted by dashed arrows in the figure. From the bottom up: to embedded inputs (input dropout), to connections between layers (intra-layer dropout), and to the combined and the down-projected outputs (output dropout). All these dropouts have random masks drawn independently per time step, in contrast to the dropout on recurrent states, where the same mask is used for all time steps in the sequence.
RHN-based models are typically conceived of as a single horizontal "highway" to emphasise how the recurrent state is processed through time. In Fig. 1b, we choose to draw their schema in a way that makes the differences from LSTMs immediately apparent. In a nutshell, the RHN state is passed from the topmost layer to the lowest layer of the next time step. In contrast, each LSTM layer has its own recurrent connection and state.
The same dropout variants are applied to all three model types, with the exception of intra-layer dropout which does not apply to RHNs since only the recurrent state is passed between the layers.
For the recurrent states, all architectures use either variational dropout (Gal & Ghahramani, 2016, state dropout)1 or recurrent dropout (Semeniuta et al., 2016), unless explicitly noted otherwise.
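The per-step versus per-sequence (variational) mask distinction can be illustrated with plain mask sampling (shapes and parameterisation simplified for illustration):

```python
import random

def sample_masks(n_steps, n_units, keep_prob, per_sequence, seed=0):
    rng = random.Random(seed)
    if per_sequence:  # variational dropout: one mask reused at every time step
        mask = [int(rng.random() < keep_prob) for _ in range(n_units)]
        return [list(mask) for _ in range(n_steps)]
    # standard feedforward dropout: a fresh mask at every time step
    return [[int(rng.random() < keep_prob) for _ in range(n_units)]
            for _ in range(n_steps)]

variational = sample_masks(35, 8, keep_prob=0.7, per_sequence=True)
per_step = sample_masks(35, 8, keep_prob=0.7, per_sequence=False)
```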
# 3 EXPERIMENTAL SETUP
3.1 DATASETS
We compare models on three datasets. The smallest of them is the Penn Treebank corpus by Marcus et al. (1993), with preprocessing from Mikolov et al. (2010). We also include another word-level corpus: Wikitext-2 by Merity et al. (2016). It is about twice the size of Penn Treebank, with a larger vocabulary and much lighter preprocessing. The third corpus is Enwik8 from the Hutter Prize dataset (Hutter, 2012). Following common practice, we use the first 90 million characters for training, and the remaining 10 million are evenly split between validation and test.
# 4 TRAINING DETAILS
When training word-level models we follow common practice and use a batch size of 64, truncated backpropagation with 35 time steps, and we feed the final states from the previous batch as the initial state of the subsequent one. At the beginning of training and at test time, the model starts with a zero state. To bias the model towards being able to easily start from such a state at test time, during training, with probability 0.01 a constant zero state is provided as the initial state.
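The state-carrying rule described above can be sketched as follows (the helper name and the injected rng are illustrative, not from the paper):

```python
import random

def initial_state(prev_final_state, zero_state, p_zero=0.01, rng=random):
    """Carry the final recurrent state of the previous batch into the next one,
    but with probability p_zero substitute the zero state, so the model also
    learns to start from scratch (as it must at evaluation time)."""
    if rng.random() < p_zero:
        return zero_state
    return prev_final_state

class _Fixed:
    # deterministic stand-in for the random module, for illustration
    def __init__(self, value):
        self.value = value
    def random(self):
        return self.value

reset = initial_state([1.0, 2.0], [0.0, 0.0], rng=_Fixed(0.0))
carried = initial_state([1.0, 2.0], [0.0, 0.0], rng=_Fixed(0.5))
```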
Optimisation is performed by Adam (Kingma & Ba, 2014) with β1 = 0 but otherwise default parameters (β2 = 0.999, ε = 10^-9). Setting β1 = 0 turns off the exponential moving average for the estimates of the means of the gradients and brings Adam very close to RMSProp without momentum, but due to Adam's bias correction, larger learning rates can be used.
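A single Adam step in scalar form shows why β1 = 0 is close to RMSProp with bias correction (a sketch; the hyperparameter values follow the text):

```python
import math

def adam_step(g, m, v, t, lr=1e-3, b1=0.0, b2=0.999, eps=1e-9):
    m = b1 * m + (1 - b1) * g      # with b1 = 0 this is just the raw gradient
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)      # Adam's bias correction
    v_hat = v / (1 - b2 ** t)
    return lr * m_hat / (math.sqrt(v_hat) + eps), m, v

# on the very first step the bias-corrected update has magnitude ~lr,
# independent of the gradient scale
step, m, v = adam_step(g=1.0, m=0.0, v=0.0, t=1)
```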
Batch size is set to 64. The learning rate is multiplied by 0.1 whenever validation performance does not improve during 30 consecutive checkpoints. These checkpoints are performed after every 100 and 200 optimisation steps for Penn Treebank and Wikitext-2, respectively.
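The decay rule can be sketched as a small scheduler (class and method names are illustrative):

```python
class PlateauDecay:
    """Multiply the learning rate by `factor` when validation performance has
    not improved at any of the last `patience` consecutive checkpoints."""
    def __init__(self, lr, patience=30, factor=0.1):
        self.lr, self.patience, self.factor = lr, patience, factor
        self.best = float("inf")
        self.bad = 0

    def checkpoint(self, val_loss):
        if val_loss < self.best:
            self.best = val_loss
            self.bad = 0
        else:
            self.bad += 1
            if self.bad >= self.patience:
                self.lr *= self.factor
                self.bad = 0
        return self.lr

sched = PlateauDecay(lr=1.0, patience=3)
for loss in [5.0, 4.0, 4.5, 4.5, 4.5]:  # three non-improving checkpoints
    lr = sched.checkpoint(loss)
```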
For character-level models (i.e. Enwik8), the differences are: truncated backpropagation is performed with 50 time steps, Adam's parameters are β2 = 0.99 and ε = 10^-5, and batch size is 128. Checkpoints happen only every 400 optimisation steps, and embeddings are not shared.
# 5 EVALUATION
For evaluation, the checkpoint with the best validation perplexity found by the tuner is loaded, and the model is applied to the test set with a batch size of 1. For the word-based datasets, using the training batch size makes results worse by 0.3 PPL, while Enwik8 is practically unaffected due to its evaluation and training sets being much larger. Preliminary experiments indicate that MC averaging would bring a small improvement of about 0.4 in perplexity and 0.005 in bits per character, similar to the results of Gal & Ghahramani (2016), while being 1000 times more expensive, which is prohibitive on larger datasets. Therefore, throughout we use the mean-field approximation for dropout at test time.
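The MC-averaging alternative mentioned above amounts to averaging probabilities over stochastic forward passes (a toy sketch; `predict` stands for one dropout-active forward pass):

```python
def mc_average(predict, n_samples):
    """Average class probabilities over n stochastic forward passes."""
    total = None
    for _ in range(n_samples):
        p = predict()
        total = p if total is None else [a + b for a, b in zip(total, p)]
    return [a / n_samples for a in total]

# two passes with different dropout masks, averaged
outs = iter([[1.0, 0.0], [0.0, 1.0]])
avg = mc_average(lambda: next(outs), 2)
```

The mean-field approximation replaces the `n_samples` passes with a single deterministic pass that scales activations by the keep probability.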
5.1 HYPERPARAMETER TUNING
{
"id": "1611.09913"
},
{
"id": "1709.06560"
},
{
"id": "1709.07432"
},
{
"id": "1611.01578"
}
] |
Hyperparameters are optimised by Google Vizier (Golovin et al., 2017), a black-box hyperparameter tuner based on batched GP bandits using the expected-improvement acquisition function (Desautels et al., 2014). Tuners of this nature are generally more efficient than grid search when the number of hyperparameters is small. To keep the problem tractable, we restrict the set of hyperparameters to learning rate, input embedding ratio, input dropout, state dropout, output dropout, and weight decay. For deep LSTMs, there is an extra hyperparameter to tune: intra-layer dropout. Even with this small set, thousands of evaluations are required to reach convergence.
[1] Of the two parameterisations, we used the one in which there is further sharing of masks between gates rather than independent noise for the gates.
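Vizier itself is not public, but the black-box loop it runs can be illustrated with a minimal random-search stand-in over the six tuned hyperparameters. The search ranges and the stand-in objective below are invented for illustration; a real run would replace `validation_perplexity` with an actual training job.

```python
import math
import random

random.seed(0)

def sample_config():
    """Draw one setting of the six tuned hyperparameters.
    Ranges are illustrative, not the ones used with Vizier."""
    return {
        "learning_rate": 10 ** random.uniform(-4, -1),      # log-uniform
        "input_embedding_ratio": random.uniform(0.25, 1.0),
        "input_dropout": random.uniform(0.0, 0.9),
        "state_dropout": random.uniform(0.0, 0.9),
        "output_dropout": random.uniform(0.0, 0.9),
        "weight_decay": 10 ** random.uniform(-7, -3),
    }

def validation_perplexity(cfg):
    """Hypothetical stand-in for training a model and scoring it on the
    validation set: a smooth bowl plus run-to-run noise (seeds, GPU)."""
    loss = (math.log10(cfg["learning_rate"]) + 2.5) ** 2
    loss += (cfg["state_dropout"] - 0.5) ** 2
    loss += random.gauss(0, 0.05)
    return 60.0 + 10.0 * loss

# Black-box loop: evaluate candidate configurations, keep the best one.
results = [(validation_perplexity(c), c)
           for c in (sample_config() for _ in range(200))]
best_ppl, best_cfg = min(results, key=lambda r: r[0])
print(round(best_ppl, 2))
```

A GP-bandit tuner differs from this sketch by proposing configurations informed by past evaluations rather than uniformly at random, which is why it needs fewer trials.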
# Under review as a conference paper at ICLR 2018
| Model                              | Size | Depth | Valid | Test |
|------------------------------------|------|-------|-------|------|
| Medium LSTM, Zaremba et al. (2014) | 10M  | 2     | 86.2  | 82.7 |
| Large LSTM, Zaremba et al. (2014)  | 24M  | 2     | 82.2  | 78.4 |
| VD LSTM, Press & Wolf (2016)       | 51M  | 2     | 75.8  | 73.2 |
| VD LSTM, Inan et al. (2016)        | 9M   | 2     | 77.1  | 73.9 |
| VD LSTM, Inan et al. (2016)        | 28M  | 2     | 72.5  | 69.0 |
| VD RHN, Zilly et al. (2016)        | 24M  | 10    | 67.9  | 65.4 |
| NAS, Zoph & Le (2016)              | 25M  | -     | -     | 64.0 |
| NAS, Zoph & Le (2016)              | 54M  | -     | -     | 62.4 |
| AWD-LSTM, Merity et al. (2017) †   | 24M  | 3     | 60.0  | 57.3 |
| LSTM                               | 10M  | 1     | 61.8  | 59.6 |
| LSTM                               | 10M  | 2     | 63.0  | 60.8 |
| LSTM                               | 10M  | 4     | 62.4  | 60.1 |
| RHN                                | 10M  | 5     | 66.0  | 63.5 |
| NAS                                | 10M  | 1     | 65.6  | 62.7 |
| LSTM                               | 24M  | 1     | 61.4  | 59.5 |
| LSTM                               | 24M  | 2     | 62.1  | 59.6 |
| LSTM                               | 24M  | 4     | 60.9  | 58.3 |
| RHN                                | 24M  | 5     | 64.8  | 62.2 |
| NAS                                | 24M  | 1     | 62.1  | 59.7 |
Table 1: Validation and test set perplexities on Penn Treebank for models with different numbers of parameters and depths. All results except those from Zaremba are with shared input and output embeddings. VD stands for Variational Dropout from Gal & Ghahramani (2016). †: parallel work.
Parameter budget. Motivated by recent results from Collins et al. (2016), we compare models on the basis of the total number of trainable parameters as opposed to the number of hidden units. The tuner is given control over the presence and size of the down-projection, and thus over the tradeoff between the number of embedding vs. recurrent cell parameters. Consequently, the cells' hidden size and the embedding size are determined by the actual parameter budget, depth and the input embedding ratio hyperparameter.
For Enwik8 there are relatively few parameters in the embeddings since the vocabulary size is only 205. Here we choose not to share embeddings and to omit the down-projection unconditionally.
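The budget arithmetic can be sketched as follows. The parameter-count formula below assumes a standard single-cell, 4-gate LSTM with tied input/output embeddings and a down-projection from the hidden size to the embedding size; the vocabulary and embedding sizes are illustrative, not the paper's exact configuration.

```python
def lstm_params(hidden, embed, vocab, layers=1, tied=True):
    """Parameter count for a stack of LSTM layers plus embeddings.
    Standard 4-gate LSTM: 4 * (h * (in + h) + h) weights per layer."""
    total = 0
    in_size = embed
    for _ in range(layers):
        total += 4 * (hidden * (in_size + hidden) + hidden)
        in_size = hidden
    total += vocab * embed            # input embedding (shared with output)
    total += hidden * embed           # down-projection to embedding size
    if not tied:
        total += vocab * embed        # separate output embedding
    return total

def hidden_for_budget(budget, embed, vocab, layers=1):
    """Largest hidden size that fits the budget (simple linear scan;
    the count is monotone in the hidden size)."""
    h = 1
    while lstm_params(h + 1, embed, vocab, layers) <= budget:
        h += 1
    return h

# 10M-parameter budget, PTB-sized vocabulary, shared embeddings.
h = hidden_for_budget(10_000_000, embed=400, vocab=10_000)
print(h, lstm_params(h, 400, 10_000))
```

This is the inversion the tuner effectively performs: given a budget, a depth, and an input embedding ratio, the hidden and embedding sizes follow.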
# 6 RESULTS
6.1 PENN TREEBANK
We tested LSTMs of various depths and an RHN of depth 5 with parameter budgets of 10 and 24 million, matching the sizes of the Medium and Large LSTMs of Zaremba et al. (2014). The results are summarised in Table 1.
Notably, in our experiments even the RHN with only 10M parameters has better perplexity than the 24M one in the original publication. Our 24M version improves on that further. However, a shallow LSTM-based model with only 10M parameters enjoys a very comfortable margin over that, with deeper models following near the estimated noise range. At 24M, all depths obtain very similar results, reaching 58.3 at depth 4. Unsurprisingly, NAS, whose architecture was chosen based on its performance on this dataset, does almost equally well, even better than in Zoph & Le (2016).
6.2 WIKITEXT-2
Wikitext-2 is not much larger than Penn Treebank, so it is not surprising that even models tuned for Penn Treebank perform reasonably on this dataset, and this is in fact how results in previous works were produced. For a fairer comparison, we also tune hyperparameters on the same dataset. In Table 2, we report numbers for both approaches. All our results are well below the previous state of the art for models without dynamic evaluation or caching. That said, our best result, 65.9, compares
| Model                                   | Size | Depth | Valid | Test  |
|-----------------------------------------|------|-------|-------|-------|
| VD LSTM, Merity et al. (2016)           | 20M  | 2     | 101.7 | 96.3  |
| VD+Zoneout LSTM, Merity et al. (2016)   | 20M  | 2     | 108.7 | 100.9 |
| VD LSTM, Inan et al. (2016)             | 22M  | 2     | 91.5  | 87.7  |
| AWD-LSTM, Merity et al. (2017) †        | 33M  | 3     | 68.6  | 65.8  |
| LSTM (tuned for PTB)                    | 10M  | 1     | 88.4  | 83.2  |
| LSTM                                    | 10M  | 1     | 72.7  | 69.1  |
| LSTM                                    | 10M  | 2     | 73.8  | 70.7  |
| LSTM                                    | 10M  | 4     | 78.3  | 74.3  |
| RHN                                     | 10M  | 5     | 83.5  | 79.5  |
| NAS                                     | 10M  | 1     | 79.6  | 75.9  |
| LSTM (tuned for PTB)                    | 24M  | 1     | 79.8  | 76.3  |
| LSTM                                    | 24M  | 1     | 69.3  | 65.9  |
| LSTM                                    | 24M  | 2     | 69.1  | 65.9  |
| LSTM                                    | 24M  | 4     | 70.5  | 67.6  |
| RHN                                     | 24M  | 5     | 78.1  | 75.6  |
| NAS                                     | 24M  | 1     | 73.0  | 69.8  |
Table 2: Validation and test set perplexities on Wikitext-2. All results are with shared input and output embeddings. †: parallel work.
favourably even to the Neural Cache (Grave et al., 2016), whose innovations are fairly orthogonal to the base model.
Shallow LSTMs do especially well here. Deeper models have gradually degrading perplexity, with RHNs lagging all of them by a significant margin. NAS is not quite up there with the LSTM, suggesting its architecture might have overfitted to Penn Treebank, but data for deeper variants would be necessary to draw this conclusion.
6.3 ENWIK8
In contrast to the previous datasets, our numbers on this task (reported in BPC, following convention) are slightly off the state of the art. This is most likely due to optimisation being limited to 14 epochs, which is about a tenth of what the model of Zilly et al. (2016) was trained for. Nevertheless, we match their smaller RHN with our models, which are very close to each other. NAS lags the other models by a surprising margin at this task.
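For reference, bits per character is the base-2 cross-entropy per character, so a BPC of b corresponds to a per-character perplexity of 2^b. A short worked check (the 1.31 BPC value is taken from Table 3):

```python
import math

# Bits per character is the base-2 cross-entropy; per-character
# perplexity is 2**BPC, and the same quantity in nats is BPC * ln(2).
bpc = 1.31
per_char_ppl = 2 ** bpc
nats_per_char = bpc * math.log(2)
print(round(per_char_ppl, 3), round(nats_per_char, 3))
```

This is why character-level results are not directly comparable to the word-level perplexities of the previous sections without accounting for average word length.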
# 7 ANALYSIS
On two of the three datasets, we improved previous results substantially by careful model specification and hyperparameter optimisation, but the improvement for RHNs is much smaller compared to that for LSTMs. While it cannot be ruled out that our particular setup somehow favours LSTMs, we believe it is more likely that this effect arises because the original RHN experimental condition had been tuned more extensively (this is nearly unavoidable during model development).
Naturally, NAS benefitted only to a limited degree from our tuning, since the numbers of Zoph & Le (2016) were already produced by employing similar regularisation methods and a grid search. The small edge can be attributed to the suboptimality of grid search (see Section 7.3).
In summary, the three recurrent cell architectures are closely matched on all three datasets, with minuscule differences on Enwik8, where regularisation matters the least. These results support the claims of Collins et al. (2016) that the capacities of various cells are very similar and their apparent differences result from trainability and regularisation. While comparing three similar architectures cannot prove this point, the inclusion of NAS certainly gives it more credence: we thus compare two of the best human-designed cells with a machine-optimised cell that was the top performer among thousands of candidates.
| Model                                  | Size | Depth | Valid | Test |
|----------------------------------------|------|-------|-------|------|
| Stacked LSTM, Graves (2013)            | 21M  | 7     | -     | 1.67 |
| Grid LSTM, Kalchbrenner et al. (2015)  | 17M  | 6     | -     | 1.47 |
| MI-LSTM, Wu et al. (2016)              | 17M  | 1     | -     | 1.44 |
| LN HM-LSTM, Chung et al. (2016)        | 35M  | 3     | -     | 1.32 |
| ByteNet, Kalchbrenner et al. (2016)    | -    | 25    | -     | 1.31 |
| VD RHN, Zilly et al. (2016)            | 23M  | 5     | -     | 1.31 |
| VD RHN, Zilly et al. (2016)            | 21M  | 10    | -     | 1.30 |
| VD RHN, Zilly et al. (2016)            | 46M  | 10    | -     | 1.27 |
| LSTM                                   | 27M  | 4     | 1.29  | 1.31 |
| RHN                                    | 27M  | 5     | 1.30  | 1.31 |
| NAS                                    | 27M  | 4     | 1.38  | 1.40 |
| LSTM                                   | 46M  | 4     | 1.28  | 1.30 |
| RHN                                    | 46M  | 5     | 1.29  | 1.30 |
| NAS                                    | 46M  | 4     | 1.32  | 1.33 |
Table 3: Validation and test set BPCs on Enwik8 from the Hutter Prize dataset.
7.1 THE EFFECT OF INDIVIDUAL FEATURES
Down-projection was found to be very beneficial by the tuner for some depth/budget combinations. On Penn Treebank, it improved results by about 2–5 perplexity points at depths 1 and 2 at 10M, and depth 1 at 24M, possibly by equipping the recurrent cells with more capacity. The very same models benefited from down-projection on Wikitext-2, but even more so, with gaps of about 10–18 points, which is readily explained by the larger vocabulary size.
We further measured the contribution of other features of the models in a series of experiments. See Table 4. To limit the amount of resources used, in these experiments only individual features were evaluated (not their combinations) on Penn Treebank, at the best depth for each architecture (LSTM or RHN) and parameter budget (10M or 24M) as determined above.
First, we untied input and output embeddings, which made perplexities worse by about 6 points across the board, consistent with the results of Inan et al. (2016).
Second, without variational dropout the RHN models suffer quite a bit, since there remains no dropout at all in between the layers. The deep LSTM also sees a similar loss of perplexity, as having intra-layer dropout does not in itself provide enough regularisation.
Third, we were also interested in how recurrent dropout (Semeniuta et al., 2016) would perform in lieu of variational dropout. Dropout masks were shared between time steps in both methods, and our results indicate no consistent advantage to either of them.
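The mask sharing that both methods have in common can be sketched on a toy recurrent state. This is an illustrative example: the sizes and the state update are invented, and only the shared-mask mechanics (not the specific connections each method masks) are shown.

```python
import random

random.seed(0)
hidden = 8
keep = 0.75        # keep probability; inverted-dropout scaling below
seq_len = 5

def fresh_mask():
    """A Bernoulli dropout mask as standard per-step dropout would draw it."""
    return [1.0 / keep if random.random() < keep else 0.0
            for _ in range(hidden)]

# Both variational and recurrent dropout sample ONE mask per sequence and
# reuse it at every time step; they differ in which connections the mask
# multiplies. Here the shared mask is applied to a toy state update.
shared = fresh_mask()
state = [1.0] * hidden
trace = []
for _ in range(seq_len):
    state = [m * s * 0.9 for m, s in zip(shared, state)]  # toy update
    trace.append(list(state))

# Units zeroed by the shared mask stay zero for the whole sequence,
# unlike per-step masks, which drop a fresh subset at each step.
dropped = [i for i, m in enumerate(shared) if m == 0.0]
print(dropped)
```

Sharing the mask across time steps keeps the recurrent dynamics consistent within a sequence, which is the point of both variants.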
7.2 MODEL SELECTION
With a large number of hyperparameter combinations evaluated, the question arises of how much the tuner overfits. There are multiple sources of noise in play:
(a) non-deterministic ordering of floating-point operations in optimised linear algebra routines,
(b) different initialisation seeds,
(c) the validation and test sets being finite samples from an infinite population.
To assess the severity of these issues, we conducted the following experiment: models with the best hyperparameter settings for Penn Treebank and Wikitext-2 were retrained from scratch with various initialisation seeds, and the validation and test scores were recorded. If, during tuning, a model just got a lucky run due to a combination of (a) and (b), then retraining with the same hyperparameters but with different seeds would fail to reproduce the same good results.
There are a few notable things about the results. First, in our environment (TensorFlow with a single GPU) even with the same seed as the one used by the tuner, the effect of (a) is almost as large as that of (a) and (b) combined. Second, the variance induced by (a) and (b) together is roughly equivalent to an absolute difference of 0.4 in perplexity on Penn Treebank and 0.5 on Wikitext-2.
| Model                  | Depth (10M) | Valid (10M) | Test (10M) | Depth (24M) | Valid (24M) | Test (24M) |
|------------------------|-------------|-------------|------------|-------------|-------------|------------|
| LSTM                   | 1           | 61.8        | 59.6       | 4           | 60.9        | 58.3       |
| - Shared Embeddings    | 1           | 67.6        | 65.2       | 4           | 65.6        | 63.2       |
| - Variational Dropout  | 1           | 62.9        | 61.2       | 4           | 66.3        | 64.5       |
| + Recurrent Dropout    | 1           | 62.8        | 60.6       | 4           | 65.2        | 62.9       |
| + Untied gates         | 1           | 61.4        | 58.9       | 4           | 64.0        | 61.3       |
| + Tied gates           | 1           | 61.7        | 59.6       | 4           | 60.4        | 58.0       |
| RHN                    | 5           | 66.0        | 63.5       | 5           | 64.8        | 62.2       |
| - Shared Embeddings    | 5           | 72.3        | 69.5       | 5           | 67.4        | 64.6       |
| - Variational Dropout  | 5           | 74.4        | 71.7       | 5           | 74.7        | 71.7       |
| + Recurrent Dropout    | 5           | 65.5        | 63.0       | 5           | 63.4        | 61.0       |
Table 4: Validation and test set perplexities on Penn Treebank for variants of our best LSTM and RHN models of two sizes.
Third, the validation perplexities of the best checkpoints are about one standard deviation lower than the sample mean of the reruns, so the tuner could fit the noise only to a limited degree.
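This selection effect can be illustrated with a toy rerun protocol. `retrain` here is a hypothetical stand-in that just returns a fixed true score plus seed-dependent noise of roughly the ~0.4 perplexity spread reported above; a real experiment would train the model from scratch.

```python
import random
import statistics

def retrain(seed):
    """Hypothetical stand-in for retraining the best configuration from
    scratch: a fixed true score plus seed-dependent noise (~0.4 ppl)."""
    rng = random.Random(seed)
    return 60.0 + rng.gauss(0, 0.4)

reruns = [retrain(seed) for seed in range(20)]
mean = statistics.mean(reruns)
sd = statistics.stdev(reruns)

# Selection bias in action: the minimum of many noisy evaluations sits
# below the rerun mean, which is why a tuner's best checkpoint looks
# somewhat better than a fresh retraining of the same setting.
tuner_best = min(reruns)
print(round(mean, 2), round(sd, 2), round(mean - tuner_best, 2))
```

Comparing the best checkpoint's score against the rerun mean and standard deviation is exactly the check described above.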
Because we treat our corpora as a single sequence, test set contents are not i.i.d., and we cannot apply techniques such as the bootstrap to assess (c). Instead, we looked at the gap between validation and test scores as a proxy and observed that it is very stable, contributing variance of 0.12–0.3 perplexity to the final results on Penn Treebank and Wikitext-2, respectively.
We have not explicitly dealt with the unknown uncertainty remaining in the Gaussian Process that may affect model comparisons, apart from running it until apparent convergence. All in all, our findings suggest that a gap in perplexity of 1.0 is a statistically robust difference between models trained in this way on these datasets. The distribution of results was approximately normal, with roughly the same variance for all models, so we still report numbers in tabular form instead of plotting the distribution of results, for example in a violin plot (Hintze & Nelson, 1998).
7.3 SENSITIVITY
To further verify that the best hyperparameter setting found by the tuner is not a fluke, we plotted the validation loss against the hyperparameter settings. Fig. 2 shows one such typical plot, for a 4-layer LSTM. We manually restricted the ranges around the best hyperparameter values to around 15–25% of the entire tuneable range, and observed that the vast majority of settings in that neighbourhood produced perplexities within 3.0 of the best value. Widening the ranges further leads to quickly deteriorating results.
Satisfied that the hyperparameter surface is well behaved, we considered whether the same results could have possibly been achieved with a simple grid search. Omitting input embedding ratio, because the tuner found having a down-projection suboptimal almost unconditionally for this model, there remain six hyperparameters to tune. If there were 5 possible values on the grid for each hyperparameter (with one value in every 20% interval), then we would need to evaluate 5^6 = 15625 grid points (on average, nearly 8000 trials before hitting the best one) to get within 3.0 of the best perplexity achieved by the tuner in about 1500 trials.
7.4 TYING LSTM GATES
Normally, LSTMs have two independent gates controlling the retention of cell state and the admission of updates (Eq. 1). A minor variant which reduces the number of parameters at the loss of some flexibility is to tie the input and forget gates as in Eq. 2. A possible middle ground that keeps the number of parameters the same but ensures that values of the cell state c remain in [-1, 1] is to cap
[Figure 2: one panel per hyperparameter (input_dropout, intra_layer_dropout, learning_rate, output_dropout, state_dropout, weight decay), each plotting the validation objective against that hyperparameter's value.]
Figure 2: Average per-word negative log-likelihoods of hyperparameter combinations in the neighbourhood of the best solution for a 4-layer LSTM with 24M weights on the Penn Treebank dataset.
the input gate as in Eq. 3.
c_t = f_t \odot c_{t-1} + i_t \odot j_t                    (1)
c_t = f_t \odot c_{t-1} + (1 - f_t) \odot j_t              (2)
c_t = f_t \odot c_{t-1} + \min(1 - f_t, i_t) \odot j_t     (3)
The equations are based on the formulation of Sak et al. (2014). All LSTM models in this paper use the third variant, except those titled "Untied gates" and "Tied gates" in Table 4, corresponding to Eq. 1 and Eq. 2, respectively.
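The three cell-state updates can be written out directly. Scalar toy values are used here (elementwise over vectors in practice, with i, f in (0, 1) from sigmoids and j in (-1, 1) from a tanh); the numbers are illustrative.

```python
def untied(c_prev, f, i, j):
    """Eq. 1: independent input and forget gates; c can leave [-1, 1]."""
    return f * c_prev + i * j

def tied(c_prev, f, j):
    """Eq. 2: input gate tied to the forget gate; c stays in [-1, 1]
    when c_prev and j do, and one gate's parameters are saved."""
    return f * c_prev + (1 - f) * j

def capped(c_prev, f, i, j):
    """Eq. 3: same parameter count as Eq. 1, but the input gate is
    capped so the cell state remains bounded."""
    return f * c_prev + min(1 - f, i) * j

c_prev, f, i, j = 0.9, 0.8, 0.7, 1.0
print(untied(c_prev, f, i, j), tied(c_prev, f, j), capped(c_prev, f, i, j))
```

With these values the untied update already exceeds 1, while the tied and capped variants stay bounded, which is the property the text attributes to deep LSTMs benefiting from bounded cell states.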
The results show that LSTMs are insensitive to these changes: perplexities vary only slightly, even though more hidden units are allocated to the tied version to fill its parameter budget. Finally, the numbers suggest that deep LSTMs benefit from bounded cell states.
# 8 CONCLUSION
steady influx of apparently state-of-the-art results on language modelling
benchmarks. However, these have been evaluated using differing code bases and
limited computational resources, which represent uncontrolled sources of
experimental variation. We reevaluate several popular architectures and
regularisation methods with large-scale automatic black-box hyperparameter
tuning and arrive at the somewhat surprising conclusion that standard LSTM
architectures, when properly regularised, outperform more recent models. We
establish a new state of the art on the Penn Treebank and Wikitext-2 corpora,
as well as strong baselines on the Hutter Prize dataset. | http://arxiv.org/pdf/1707.05589 | Gábor Melis, Chris Dyer, Phil Blunsom | cs.CL | null | null | cs.CL | 20170718 | 20171120 | [
{
"id": "1611.09913"
},
{
"id": "1709.06560"
},
{
"id": "1709.07432"
},
{
"id": "1611.01578"
}
] |
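The three cell-state updates above can be checked with a small pure-Python sketch. This is an illustrative reconstruction of Eq. 1-3, not code from the paper, and the gate values are made-up numbers:

```python
def lstm_cell_variants(f, i, j, c_prev):
    """Cell-state updates for the three LSTM variants discussed above:
    Eq. 1 (untied gates), Eq. 2 (tied input/forget gates), Eq. 3 (bounded)."""
    c_untied = f * c_prev + i * j              # Eq. 1: separate input gate i
    c_tied = f * c_prev + (1 - f) * j          # Eq. 2: input gate tied to 1 - f
    c_capped = f * c_prev + min(1 - f, i) * j  # Eq. 3: keeps the cell state bounded
    return c_untied, c_tied, c_capped

# Toy scalar gate activations (illustrative values only; real gates are
# sigmoid outputs computed per hidden unit).
c1, c2, c3 = lstm_cell_variants(f=0.6, i=0.9, j=1.0, c_prev=2.0)
```

With the tied (Eq. 2) or capped (Eq. 3) form, the cell state stays bounded when the candidate values are bounded, which is consistent with the observation above that deep LSTMs benefit from bounded cell states.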
1707.05589 | 25 | # 8 CONCLUSION
During the transitional period when deep neural language models began to supplant their shallower predecessors, effect sizes tended to be large, and robust conclusions about the value of the modelling innovations could be made, even in the presence of poorly controlled "hyperparameter noise." However, now that the neural revolution is in full swing, researchers must often compare competing deep architectures. In this regime, effect sizes tend to be much smaller, and more methodological care is required to produce reliable results. Furthermore, with so much work carried out in parallel by a growing research community, the costs of faulty conclusions are increased.
Although we can draw attention to this problem, this paper does not offer a practical methodological solution beyond establishing reliable baselines that can be the benchmarks for subsequent work. Still, we demonstrate how, with a huge amount of computation, noise levels of various origins can be carefully estimated and models meaningfully compared. This apparent tradeoff between the amount of computation and the reliability of results seems to lie at the heart of the matter. Solutions to the methodological challenges must therefore make model evaluation cheaper by, for instance, reducing the number of hyperparameters and the sensitivity of models to them, employing better hyperparameter optimisation strategies, or by defining "leagues" with predefined computational budgets for a single model representing different points on the tradeoff curve. | 1707.05589#25 | On the State of the Art of Evaluation in Neural Language Models | Ongoing innovations in recurrent neural network architectures have provided a
steady influx of apparently state-of-the-art results on language modelling
benchmarks. However, these have been evaluated using differing code bases and
limited computational resources, which represent uncontrolled sources of
experimental variation. We reevaluate several popular architectures and
regularisation methods with large-scale automatic black-box hyperparameter
tuning and arrive at the somewhat surprising conclusion that standard LSTM
architectures, when properly regularised, outperform more recent models. We
establish a new state of the art on the Penn Treebank and Wikitext-2 corpora,
as well as strong baselines on the Hutter Prize dataset. | http://arxiv.org/pdf/1707.05589 | Gábor Melis, Chris Dyer, Phil Blunsom | cs.CL | null | null | cs.CL | 20170718 | 20171120 | [
{
"id": "1611.09913"
},
{
"id": "1709.06560"
},
{
"id": "1709.07432"
},
{
"id": "1611.01578"
}
] |
1707.05589 | 26 | # REFERENCES
Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural net- works. CoRR, abs/1609.01704, 2016. URL http://arxiv.org/abs/1609.01704.
Jasmine Collins, Jascha Sohl-Dickstein, and David Sussillo. Capacity and trainability in recurrent neural networks. arXiv preprint arXiv:1611.09913, 2016.
# Under review as a conference paper at ICLR 2018
Thomas Desautels, Andreas Krause, and Joel W. Burdick. Parallelizing exploration-exploitation tradeoffs in Gaussian process bandit optimization. Journal of Machine Learning Research, 15:4053-4103, 2014. URL http://jmlr.org/papers/v15/desautels14a.html.
Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. In Advances in Neural Information Processing Systems, pp. 1019â1027, 2016. | 1707.05589#26 | On the State of the Art of Evaluation in Neural Language Models | Ongoing innovations in recurrent neural network architectures have provided a
steady influx of apparently state-of-the-art results on language modelling
benchmarks. However, these have been evaluated using differing code bases and
limited computational resources, which represent uncontrolled sources of
experimental variation. We reevaluate several popular architectures and
regularisation methods with large-scale automatic black-box hyperparameter
tuning and arrive at the somewhat surprising conclusion that standard LSTM
architectures, when properly regularised, outperform more recent models. We
establish a new state of the art on the Penn Treebank and Wikitext-2 corpora,
as well as strong baselines on the Hutter Prize dataset. | http://arxiv.org/pdf/1707.05589 | Gábor Melis, Chris Dyer, Phil Blunsom | cs.CL | null | null | cs.CL | 20170718 | 20171120 | [
{
"id": "1611.09913"
},
{
"id": "1709.06560"
},
{
"id": "1709.07432"
},
{
"id": "1611.01578"
}
] |
Daniel Golovin, Benjamin Solnik, Subhodeep Moitra, Greg Kochanski, John Karro, and D Sculley. Google Vizier: A service for black-box optimization. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1487-1495. ACM, 2017.
Edouard Grave, Armand Joulin, and Nicolas Usunier. Improving neural language models with a continuous cache. CoRR, abs/1612.04426, 2016. URL http://arxiv.org/abs/1612.04426.
Alex Graves. Generating sequences with recurrent neural networks. CoRR, abs/1308.0850, 2013. URL http://arxiv.org/abs/1308.0850.
Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. Deep reinforcement learning that matters. arXiv preprint arXiv:1709.06560, 2017.
Jerry L Hintze and Ray D Nelson. Violin plots: a box plot-density trace synergism. The American Statistician, 52(2):181â184, 1998. | 1707.05589#27 | On the State of the Art of Evaluation in Neural Language Models | Ongoing innovations in recurrent neural network architectures have provided a
steady influx of apparently state-of-the-art results on language modelling
benchmarks. However, these have been evaluated using differing code bases and
limited computational resources, which represent uncontrolled sources of
experimental variation. We reevaluate several popular architectures and
regularisation methods with large-scale automatic black-box hyperparameter
tuning and arrive at the somewhat surprising conclusion that standard LSTM
architectures, when properly regularised, outperform more recent models. We
establish a new state of the art on the Penn Treebank and Wikitext-2 corpora,
as well as strong baselines on the Hutter Prize dataset. | http://arxiv.org/pdf/1707.05589 | Gábor Melis, Chris Dyer, Phil Blunsom | cs.CL | null | null | cs.CL | 20170718 | 20171120 | [
{
"id": "1611.09913"
},
{
"id": "1709.06560"
},
{
"id": "1709.07432"
},
{
"id": "1611.01578"
}
] |
1707.05589 | 28 | Jerry L Hintze and Ray D Nelson. Violin plots: a box plot-density trace synergism. The American Statistician, 52(2):181â184, 1998.
Sepp Hochreiter and Jürgen Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, November 1997. ISSN 0899-7667. doi: 10.1162/neco.1997.9.8.1735. URL http://dx.doi.org/10.1162/neco.1997.9.8.1735.
Marcus Hutter. The human knowledge compression contest. 2012.
Hakan Inan, Khashayar Khosravi, and Richard Socher. Tying word vectors and word classifiers: A loss framework for language modeling. CoRR, abs/1611.01462, 2016. URL http://arxiv.org/abs/1611.01462.
Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. Grid long short-term memory. CoRR, abs/1507.01526, 2015. URL http://arxiv.org/abs/1507.01526. | 1707.05589#28 | On the State of the Art of Evaluation in Neural Language Models | Ongoing innovations in recurrent neural network architectures have provided a
steady influx of apparently state-of-the-art results on language modelling
benchmarks. However, these have been evaluated using differing code bases and
limited computational resources, which represent uncontrolled sources of
experimental variation. We reevaluate several popular architectures and
regularisation methods with large-scale automatic black-box hyperparameter
tuning and arrive at the somewhat surprising conclusion that standard LSTM
architectures, when properly regularised, outperform more recent models. We
establish a new state of the art on the Penn Treebank and Wikitext-2 corpora,
as well as strong baselines on the Hutter Prize dataset. | http://arxiv.org/pdf/1707.05589 | Gábor Melis, Chris Dyer, Phil Blunsom | cs.CL | null | null | cs.CL | 20170718 | 20171120 | [
{
"id": "1611.09913"
},
{
"id": "1709.06560"
},
{
"id": "1709.07432"
},
{
"id": "1611.01578"
}
] |
Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aäron van den Oord, Alex Graves, and Koray Kavukcuoglu. Neural machine translation in linear time. CoRR, abs/1610.10099, 2016. URL http://arxiv.org/abs/1610.10099.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Ben Krause, Emmanuel Kahembwe, Iain Murray, and Steve Renals. Dynamic evaluation of neural sequence models. arXiv preprint arXiv:1709.07432, 2017.
Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn treebank. Computational linguistics, 19(2):313-330, 1993.
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. CoRR, abs/1609.07843, 2016. URL http://arxiv.org/abs/1609.07843. | 1707.05589#29 | On the State of the Art of Evaluation in Neural Language Models | Ongoing innovations in recurrent neural network architectures have provided a
steady influx of apparently state-of-the-art results on language modelling
benchmarks. However, these have been evaluated using differing code bases and
limited computational resources, which represent uncontrolled sources of
experimental variation. We reevaluate several popular architectures and
regularisation methods with large-scale automatic black-box hyperparameter
tuning and arrive at the somewhat surprising conclusion that standard LSTM
architectures, when properly regularised, outperform more recent models. We
establish a new state of the art on the Penn Treebank and Wikitext-2 corpora,
as well as strong baselines on the Hutter Prize dataset. | http://arxiv.org/pdf/1707.05589 | Gábor Melis, Chris Dyer, Phil Blunsom | cs.CL | null | null | cs.CL | 20170718 | 20171120 | [
{
"id": "1611.09913"
},
{
"id": "1709.06560"
},
{
"id": "1709.07432"
},
{
"id": "1611.01578"
}
] |
Stephen Merity, Nitish Shirish Keskar, and Richard Socher. Regularizing and optimizing LSTM language models. CoRR, abs/1708.02182, 2017. URL http://arxiv.org/abs/1708.02182.
Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Černocký, and Sanjeev Khudanpur. Recurrent neural network based language model. In Interspeech, volume 2, pp. 3, 2010.
Ofir Press and Lior Wolf. Using the output embedding to improve language models. CoRR, abs/1608.05859, 2016. URL http://arxiv.org/abs/1608.05859.
Nils Reimers and Iryna Gurevych. Reporting score distributions makes a difference: Performance study of lstm-networks for sequence tagging. CoRR, abs/1707.09861, 2017. URL http:// arxiv.org/abs/1707.09861. | 1707.05589#30 | On the State of the Art of Evaluation in Neural Language Models | Ongoing innovations in recurrent neural network architectures have provided a
steady influx of apparently state-of-the-art results on language modelling
benchmarks. However, these have been evaluated using differing code bases and
limited computational resources, which represent uncontrolled sources of
experimental variation. We reevaluate several popular architectures and
regularisation methods with large-scale automatic black-box hyperparameter
tuning and arrive at the somewhat surprising conclusion that standard LSTM
architectures, when properly regularised, outperform more recent models. We
establish a new state of the art on the Penn Treebank and Wikitext-2 corpora,
as well as strong baselines on the Hutter Prize dataset. | http://arxiv.org/pdf/1707.05589 | Gábor Melis, Chris Dyer, Phil Blunsom | cs.CL | null | null | cs.CL | 20170718 | 20171120 | [
{
"id": "1611.09913"
},
{
"id": "1709.06560"
},
{
"id": "1709.07432"
},
{
"id": "1611.01578"
}
] |
Hasim Sak, Andrew W. Senior, and Françoise Beaufays. Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. CoRR, abs/1402.1128, 2014. URL http://arxiv.org/abs/1402.1128.
Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. Recurrent dropout without memory loss. CoRR, abs/1603.05118, 2016. URL http://arxiv.org/abs/1603.05118.
Yuhuai Wu, Saizheng Zhang, Ying Zhang, Yoshua Bengio, and Ruslan Salakhutdinov. On multiplicative integration with recurrent neural networks. CoRR, abs/1606.06630, 2016. URL http://arxiv.org/abs/1606.06630.
Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. CoRR, abs/1409.2329, 2014. URL http://arxiv.org/abs/1409.2329. | 1707.05589#31 | On the State of the Art of Evaluation in Neural Language Models | Ongoing innovations in recurrent neural network architectures have provided a
steady influx of apparently state-of-the-art results on language modelling
benchmarks. However, these have been evaluated using differing code bases and
limited computational resources, which represent uncontrolled sources of
experimental variation. We reevaluate several popular architectures and
regularisation methods with large-scale automatic black-box hyperparameter
tuning and arrive at the somewhat surprising conclusion that standard LSTM
architectures, when properly regularised, outperform more recent models. We
establish a new state of the art on the Penn Treebank and Wikitext-2 corpora,
as well as strong baselines on the Hutter Prize dataset. | http://arxiv.org/pdf/1707.05589 | Gábor Melis, Chris Dyer, Phil Blunsom | cs.CL | null | null | cs.CL | 20170718 | 20171120 | [
{
"id": "1611.09913"
},
{
"id": "1709.06560"
},
{
"id": "1709.07432"
},
{
"id": "1611.01578"
}
] |
1707.05409 | 0 | arXiv:1707.05409v1 [cs.IR] 17 Jul 2017
# Neural Matching Models for Question Retrieval and Next Question Prediction in Conversation
Liu Yang1, Hamed Zamani1, Yongfeng Zhang1, Jiafeng Guo2, W. Bruce Croft1. 1 Center for Intelligent Information Retrieval, University of Massachusetts Amherst, Amherst, MA, USA; 2 CAS Key Lab of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China. {lyang,zamani,yongfeng,croft}@cs.umass.edu, [email protected]
conversation systems such as Google Assistant, Microsoft Cortana, Amazon Echo
and Apple Siri. We introduce and formalize the task of predicting questions in
conversations, where the goal is to predict the new question that the user will
ask, given the past conversational context. This task can be modeled as a
"sequence matching" problem, where two sequences are given and the aim is to
learn a model that maps any pair of sequences to a matching probability. Neural
matching models, which adopt deep neural networks to learn sequence
representations and matching scores, have attracted immense research interests
of information retrieval and natural language processing communities. In this
paper, we first study neural matching models for the question retrieval task
that has been widely explored in the literature, whereas the effectiveness of
neural models for this task is relatively unstudied. We further evaluate the
neural matching models in the next question prediction task in conversations.
We have used the publicly available Quora data and Ubuntu chat logs in our
experiments. Our evaluations investigate the potential of neural matching
models with representation learning for question retrieval and next question
prediction in conversations. Experimental results show that neural matching
models perform well for both tasks. | http://arxiv.org/pdf/1707.05409 | Liu Yang, Hamed Zamani, Yongfeng Zhang, Jiafeng Guo, W. Bruce Croft | cs.IR | Neu-IR 2017: The SIGIR 2017 Workshop on Neural Information Retrieval
(SIGIR Neu-IR 2017), Tokyo, Japan, August 7-11, 2017 | null | cs.IR | 20170717 | 20170717 | [] |
1707.05173 | 1 | # Abstract
AI systems are increasingly applied to complex tasks that involve interaction with humans. During training, such systems are potentially dangerous, as they haven't yet learned to avoid actions that could cause serious harm. How can an AI system explore and learn without making a single mistake that harms humans or otherwise causes serious damage? For model-free reinforcement learning, having a human "in the loop" and ready to intervene is currently the only way to prevent all catastrophes. We formalize human intervention for RL and show how to reduce the human labor required by training a supervised learner to imitate the human's intervention decisions. We evaluate this scheme on Atari games, with a Deep RL agent being overseen by a human for four hours. When the class of catastrophes is simple, we are able to prevent all catastrophes without affecting the agent's learning (whereas an RL baseline fails due to catastrophic forgetting). However, this scheme is less successful when catastrophes are more complex: it reduces but does not eliminate catastrophes and the supervised learner fails on adversarial examples found by the agent. Extrapolating to more challenging environments, we show that our implementation would not scale (due to the infeasible amount of human labor required). We outline extensions of the scheme that are necessary if we are to train model-free agents without a single catastrophe.
Link to videos that illustrate our approach on Atari games.
# Introduction | 1707.05173#1 | Trial without Error: Towards Safe Reinforcement Learning via Human Intervention | AI systems are increasingly applied to complex tasks that involve interaction
with humans. During training, such systems are potentially dangerous, as they
haven't yet learned to avoid actions that could cause serious harm. How can an
AI system explore and learn without making a single mistake that harms humans
or otherwise causes serious damage? For model-free reinforcement learning,
having a human "in the loop" and ready to intervene is currently the only way
to prevent all catastrophes. We formalize human intervention for RL and show
how to reduce the human labor required by training a supervised learner to
imitate the human's intervention decisions. We evaluate this scheme on Atari
games, with a Deep RL agent being overseen by a human for four hours. When the
class of catastrophes is simple, we are able to prevent all catastrophes
without affecting the agent's learning (whereas an RL baseline fails due to
catastrophic forgetting). However, this scheme is less successful when
catastrophes are more complex: it reduces but does not eliminate catastrophes
and the supervised learner fails on adversarial examples found by the agent.
Extrapolating to more challenging environments, we show that our implementation
would not scale (due to the infeasible amount of human labor required). We
outline extensions of the scheme that are necessary if we are to train
model-free agents without a single catastrophe. | http://arxiv.org/pdf/1707.05173 | William Saunders, Girish Sastry, Andreas Stuhlmueller, Owain Evans | cs.AI, cs.LG, cs.NE | null | null | cs.AI | 20170717 | 20170717 | [
{
"id": "1606.06565"
},
{
"id": "1606.04460"
},
{
"id": "1706.03741"
},
{
"id": "1610.03518"
},
{
"id": "1611.01211"
},
{
"id": "1606.01540"
},
{
"id": "1604.07095"
},
{
"id": "1612.01086"
}
] |
1707.05409 | 1 | ABSTRACT The recent boom of AI has seen the emergence of many human-computer conversation systems such as Google Assistant, Microsoft Cortana, Amazon Echo and Apple Siri. We introduce and formalize the task of predicting questions in conversations, where the goal is to predict the new question that the user will ask, given the past conversational context. This task can be modeled as a "sequence matching" problem, where two sequences are given and the aim is to learn a model that maps any pair of sequences to a matching probability. Neural matching models, which adopt deep neural networks to learn sequence representations and matching scores, have attracted immense research interests of information retrieval and natural language processing communities. In this paper, we first study neural matching models for the question retrieval task that has been widely explored in the literature, whereas the effectiveness of neural models for this task is relatively unstudied. We further evaluate the neural matching models in the next question prediction task in conversations. We have used the publicly available Quora data and Ubuntu chat logs in our experiments. Our evaluations investigate the potential of neural matching models with representation learning for question retrieval and next question prediction in conversations. Experimental results show that neural matching models perform well for both tasks.
Table 1: Motivated examples of predicting questions in conversations and search. Ground truth labels are highlighted by different text colors, where blue means correct predictions and red means wrong predictions. | 1707.05409#1 | Neural Matching Models for Question Retrieval and Next Question Prediction in Conversation | The recent boom of AI has seen the emergence of many human-computer
conversation systems such as Google Assistant, Microsoft Cortana, Amazon Echo
and Apple Siri. We introduce and formalize the task of predicting questions in
conversations, where the goal is to predict the new question that the user will
ask, given the past conversational context. This task can be modeled as a
"sequence matching" problem, where two sequences are given and the aim is to
learn a model that maps any pair of sequences to a matching probability. Neural
matching models, which adopt deep neural networks to learn sequence
representations and matching scores, have attracted immense research interests
of information retrieval and natural language processing communities. In this
paper, we first study neural matching models for the question retrieval task
that has been widely explored in the literature, whereas the effectiveness of
neural models for this task is relatively unstudied. We further evaluate the
neural matching models in the next question prediction task in conversations.
We have used the publicly available Quora data and Ubuntu chat logs in our
experiments. Our evaluations investigate the potential of neural matching
models with representation learning for question retrieval and next question
prediction in conversations. Experimental results show that neural matching
models perform well for both tasks. | http://arxiv.org/pdf/1707.05409 | Liu Yang, Hamed Zamani, Yongfeng Zhang, Jiafeng Guo, W. Bruce Croft | cs.IR | Neu-IR 2017: The SIGIR 2017 Workshop on Neural Information Retrieval
(SIGIR Neu-IR 2017), Tokyo, Japan, August 7-11, 2017 | null | cs.IR | 20170717 | 20170717 | [] |
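As an illustration of the "sequence matching" formulation in the abstract above, a minimal representation-based matcher can mean-pool word embeddings and score a pair of sequences by cosine similarity. The vocabulary, embeddings, and queries below are invented toy stand-ins, not the models or data from the paper:

```python
import math
import random

random.seed(0)
WORDS = "how do i change the desktop wallpaper in ubuntu install a package".split()
# Toy 8-dimensional word embeddings (random stand-ins for learned vectors).
EMB = {w: [random.gauss(0, 1) for _ in range(8)] for w in WORDS}

def encode(text):
    """Mean-pool the embeddings of in-vocabulary words into one sequence vector."""
    vecs = [EMB[w] for w in text.lower().split() if w in EMB]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def match_score(seq_a, seq_b):
    """Cosine similarity of the two sequence representations, in [-1, 1]."""
    a, b = encode(seq_a), encode(seq_b)
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

context = "How do I change the desktop wallpaper"
candidates = ["change the wallpaper in ubuntu", "install a package in ubuntu"]
ranked = sorted(candidates, key=lambda q: match_score(context, q), reverse=True)
```

A learned neural matcher replaces the random embeddings and mean-pooling with trained representation networks, but the interface is the same: any pair of sequences maps to a matching score used to rank candidate next questions.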
1707.05173 | 2 | Link to videos that illustrate our approach on Atari games.
# Introduction
# 1.1 Motivation
AI systems are increasingly applied to complex tasks that involve interaction with humans. During training, such systems are potentially dangerous, as they haven't yet learned to avoid actions that would cause serious harm. How can an AI system explore and learn without making a single mistake that harms humans, destroys property, or damages the environment?
A crucial safeguard against this danger is human intervention. Self-driving cars are overseen by human drivers, who take control when they predict the AI system will perform badly. These overseers frequently intervene, especially in self-driving systems at an early stage of development [11]. The same safeguard is used for human learners, who are overseen by a licensed driver.
Many AI systems pose no physical danger to humans. Yet web-based systems can still cause unintended harm. Microsoft's chatbot Tay reproduced thousands of offensive tweets before being taken down [29]. Facebook's algorithms for sharing news stories inadvertently provided a platform for malicious and false stories and disinformation during the US 2016 election [3]. If human operators had monitored these systems in real-time (as with self-driving cars), the bad outcomes could have been avoided. | 1707.05173#2 | Trial without Error: Towards Safe Reinforcement Learning via Human Intervention | AI systems are increasingly applied to complex tasks that involve interaction
with humans. During training, such systems are potentially dangerous, as they
haven't yet learned to avoid actions that could cause serious harm. How can an
AI system explore and learn without making a single mistake that harms humans
or otherwise causes serious damage? For model-free reinforcement learning,
having a human "in the loop" and ready to intervene is currently the only way
to prevent all catastrophes. We formalize human intervention for RL and show
how to reduce the human labor required by training a supervised learner to
imitate the human's intervention decisions. We evaluate this scheme on Atari
games, with a Deep RL agent being overseen by a human for four hours. When the
class of catastrophes is simple, we are able to prevent all catastrophes
without affecting the agent's learning (whereas an RL baseline fails due to
catastrophic forgetting). However, this scheme is less successful when
catastrophes are more complex: it reduces but does not eliminate catastrophes
and the supervised learner fails on adversarial examples found by the agent.
Extrapolating to more challenging environments, we show that our implementation
would not scale (due to the infeasible amount of human labor required). We
outline extensions of the scheme that are necessary if we are to train
model-free agents without a single catastrophe. | http://arxiv.org/pdf/1707.05173 | William Saunders, Girish Sastry, Andreas Stuhlmueller, Owain Evans | cs.AI, cs.LG, cs.NE | null | null | cs.AI | 20170717 | 20170717 | [
{
"id": "1606.06565"
},
{
"id": "1606.04460"
},
{
"id": "1706.03741"
},
{
"id": "1610.03518"
},
{
"id": "1611.01211"
},
{
"id": "1606.01540"
},
{
"id": "1604.07095"
},
{
"id": "1612.01086"
}
] |
1707.05173 | 3 | Human oversight is currently the only means of avoiding all accidents in complex real-world domains.1 How does human intervention for safety fit together with Deep Learning and Reinforcement Learning, which are likely to be key components of future applied AI systems? We present a scheme for human intervention in RL systems and test the scheme on Atari games. We document serious scalability problems for human intervention applied to RL and outline potential remedies.
# 1.2 Contributions
We provide a formal scheme (HIRL) for applying human oversight to RL agents. The scheme makes it easy to train a supervised learner to imitate the human's intervention policy and take over from the human. (Automating human oversight is crucial since it's infeasible for a human to watch over an RL agent for 100 million timesteps.) While the human oversees a particular RL agent, the supervised learner can be re-used as a safety-harness for different agents. | 1707.05173#3 | Trial without Error: Towards Safe Reinforcement Learning via Human Intervention | AI systems are increasingly applied to complex tasks that involve interaction
with humans. During training, such systems are potentially dangerous, as they
haven't yet learned to avoid actions that could cause serious harm. How can an
AI system explore and learn without making a single mistake that harms humans
or otherwise causes serious damage? For model-free reinforcement learning,
having a human "in the loop" and ready to intervene is currently the only way
to prevent all catastrophes. We formalize human intervention for RL and show
how to reduce the human labor required by training a supervised learner to
imitate the human's intervention decisions. We evaluate this scheme on Atari
games, with a Deep RL agent being overseen by a human for four hours. When the
class of catastrophes is simple, we are able to prevent all catastrophes
without affecting the agent's learning (whereas an RL baseline fails due to
catastrophic forgetting). However, this scheme is less successful when
catastrophes are more complex: it reduces but does not eliminate catastrophes
and the supervised learner fails on adversarial examples found by the agent.
Extrapolating to more challenging environments, we show that our implementation
would not scale (due to the infeasible amount of human labor required). We
outline extensions of the scheme that are necessary if we are to train
model-free agents without a single catastrophe. | http://arxiv.org/pdf/1707.05173 | William Saunders, Girish Sastry, Andreas Stuhlmueller, Owain Evans | cs.AI, cs.LG, cs.NE | null | null | cs.AI | 20170717 | 20170717 | [
{
"id": "1606.06565"
},
{
"id": "1606.04460"
},
{
"id": "1706.03741"
},
{
"id": "1610.03518"
},
{
"id": "1611.01211"
},
{
"id": "1606.01540"
},
{
"id": "1604.07095"
},
{
"id": "1612.01086"
}
] |
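The oversight loop described in the chunk above (a supervised "blocker" imitating the human's intervention policy) can be sketched in a few lines. The environment, agent, and blocker below are toy stand-ins for illustration, not the authors' implementation:

```python
class ToyEnv:
    """Hypothetical 1-D corridor: the state is a position, and falling off
    the left edge (position -3) stands in for a catastrophe."""
    def __init__(self, pos=0):
        self.pos = pos

    def step(self, action):
        self.pos += action
        reward = 1.0 if self.pos == 3 else 0.0
        done = abs(self.pos) >= 3
        return self.pos, reward, done

class LeftwardAgent:
    """Stub agent that always proposes moving left; learning is omitted."""
    def act(self, state):
        return -1

    def observe(self, state, action, reward, next_state, done):
        pass  # a real agent would update its policy here

class Blocker:
    """Stands in for the supervised learner trained to imitate the
    human overseer's intervention decisions."""
    def predicts_catastrophe(self, state, action):
        return state + action <= -3

def safe_step(env, agent, blocker, safe_action=+1, penalty=-1.0):
    """One interaction step: if the blocker flags the proposed action as
    catastrophic, substitute a safe action and penalise the agent."""
    state = env.pos
    action = agent.act(state)
    if blocker.predicts_catastrophe(state, action):
        action = safe_action
        next_state, reward, done = env.step(action)
        reward += penalty  # discourage proposing blocked actions
    else:
        next_state, reward, done = env.step(action)
    agent.observe(state, action, reward, next_state, done)
    return next_state, done

env = ToyEnv(pos=-2)
next_state, done = safe_step(env, LeftwardAgent(), Blocker())
```

The agent's proposed move at position -2 would be catastrophic, so the blocker overrides it and the episode continues safely; the penalty gives the model-free learner a training signal without the catastrophe ever occurring.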
1707.05409 | 3 | Example 1: Time: 2010-12-18 Conversation Context: [17:23] <neohunter111> Hello I have a problem with my mouse, is a microsoft wireless mouse 7000, when i press button6 or buttton 7 ubuntu recives a lot of press and realease events!! any ideas of how to solve this or how to search in google?? [17:24] <pksadiq> neohunter111: does system > preferences > mouse has any option? [17:26] <neohunter111> pksadiq yes the mouse works, the problem is that i set the boutton 6 and 7 (muse wheel to left o right) to change the desktop screen. and when i press it the desktop cube turns like crazy a lot of times, but before was working ok. [17:27] <pksadiq> neohunter111: go to compiz settings in system > preferences,a dn select 3D desktop plugin and change settings Predicted Question: Where is 3d desktop plugin? (Correct) Is there a keyboard shortcut to change desktop? (Wrong) Example 2: Time: 2011-12-22 Conversation Context: [15:59] <gplikespie> Hello, I am new to Linux and am not sure how to move ï¬les from windows to linux, | 1707.05409#3 | Neural Matching Models for Question Retrieval and Next Question Prediction in Conversation | The recent boom of AI has seen the emergence of many human-computer
conversation systems such as Google Assistant, Microsoft Cortana, Amazon Echo
and Apple Siri. We introduce and formalize the task of predicting questions in
conversations, where the goal is to predict the new question that the user will
ask, given the past conversational context. This task can be modeled as a
"sequence matching" problem, where two sequences are given and the aim is to
learn a model that maps any pair of sequences to a matching probability. Neural
matching models, which adopt deep neural networks to learn sequence
representations and matching scores, have attracted immense research interests
of information retrieval and natural language processing communities. In this
paper, we first study neural matching models for the question retrieval task
that has been widely explored in the literature, whereas the effectiveness of
neural models for this task is relatively unstudied. We further evaluate the
neural matching models in the next question prediction task in conversations.
We have used the publicly available Quora data and Ubuntu chat logs in our
experiments. Our evaluations investigate the potential of neural matching
models with representation learning for question retrieval and next question
prediction in conversations. Experimental results show that neural matching
models perform well for both tasks. | http://arxiv.org/pdf/1707.05409 | Liu Yang, Hamed Zamani, Yongfeng Zhang, Jiafeng Guo, W. Bruce Croft | cs.IR | Neu-IR 2017: The SIGIR 2017 Workshop on Neural Information Retrieval
(SIGIR Neu-IR 2017), Tokyo, Japan, August 7-11, 2017 | null | cs.IR | 20170717 | 20170717 | [] |
1707.05173 | 4 | The goal of HIRL is enabling an RL agent to learn a real-world task without a single catastrophe. We investigated the scalability of HIRL in Atari games, which are challenging toy environments for current AI [19]. HIRL was applied to Deep RL agents playing three games: Pong, Space Invaders, and Road Runner (see Figure 2). For the first 4.5 hours of training, a human watched every frame and intervened to block the agent from taking catastrophic actions. In Pong and Space Invaders, where the class of catastrophes was chosen to be simple to learn, the supervised learner succeeded in blocking all catastrophes. In Road Runner, where the class of catastrophes was more diverse and complex, HIRL reduced the number of catastrophes by a factor of 50 but did not reduce them to zero.
with humans. During training, such systems are potentially dangerous, as they
haven't yet learned to avoid actions that could cause serious harm. How can an
AI system explore and learn without making a single mistake that harms humans
or otherwise causes serious damage? For model-free reinforcement learning,
having a human "in the loop" and ready to intervene is currently the only way
to prevent all catastrophes. We formalize human intervention for RL and show
how to reduce the human labor required by training a supervised learner to
imitate the human's intervention decisions. We evaluate this scheme on Atari
games, with a Deep RL agent being overseen by a human for four hours. When the
class of catastrophes is simple, we are able to prevent all catastrophes
without affecting the agent's learning (whereas an RL baseline fails due to
catastrophic forgetting). However, this scheme is less successful when
catastrophes are more complex: it reduces but does not eliminate catastrophes
and the supervised learner fails on adversarial examples found by the agent.
Extrapolating to more challenging environments, we show that our implementation
would not scale (due to the infeasible amount of human labor required). We
outline extensions of the scheme that are necessary if we are to train
model-free agents without a single catastrophe. | http://arxiv.org/pdf/1707.05173 | William Saunders, Girish Sastry, Andreas Stuhlmueller, Owain Evans | cs.AI, cs.LG, cs.NE | null | null | cs.AI | 20170717 | 20170717 | [
{
"id": "1606.06565"
},
{
"id": "1606.04460"
},
{
"id": "1706.03741"
},
{
"id": "1610.03518"
},
{
"id": "1611.01211"
},
{
"id": "1606.01540"
},
{
"id": "1604.07095"
},
{
"id": "1612.01086"
}
] |
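The HIRL record above describes a two-phase scheme: a human first blocks catastrophic actions, then a supervised learner imitates those intervention decisions so the agent can keep training without human labor. The sketch below is a toy version under stated assumptions: a one-dimensional state where "falling below zero" is the hypothetical catastrophe, the human oracle is a hand-written rule, and the blocker is a lookup table rather than the trained classifier used in the paper.

```python
def human_oversight(state, action):
    # Stand-in for the human's intervention decision during the oversight
    # phase: block any action that would push the state below zero
    # (a hypothetical toy catastrophe, not one from the paper).
    return state + action < 0  # True means "intervene"


class Blocker:
    """Imitates recorded human interventions. The paper trains a
    supervised classifier on (state, action, blocked) labels; a lookup
    table is enough for this sketch."""

    def __init__(self):
        self.labels = {}

    def record(self, state, action, blocked):
        self.labels[(state, action)] = blocked

    def blocks(self, state, action):
        return self.labels.get((state, action), False)


def safe_step(state, action, blocker):
    # HIRL-style wrapper around the environment step: a blocked action is
    # replaced by a no-op and penalized, so the catastrophe never executes.
    if blocker.blocks(state, action):
        return state, -1.0
    return state + action, 0.0


# Oversight phase: record the human's decisions on observed pairs,
# then hand control over to the learned blocker.
blocker = Blocker()
for s in range(-2, 3):
    for a in (-1, 0, 1):
        blocker.record(s, a, human_oversight(s, a))
```

After the oversight phase, `safe_step` can run unattended: any (state, action) pair the blocker labels as catastrophic is converted into a penalized no-op, mirroring how HIRL lets the agent explore without ever committing a known catastrophe.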