id | title | content | prechunk_id | postchunk_id | arxiv_id | references |
stringlengths 12–15 | stringlengths 8–162 | stringlengths 1–17.6k | stringlengths 0–15 | stringlengths 0–15 | stringlengths 10–10 | sequencelengths 1–1 |
---|---|---|---|---|---|---|
1604.00289#147 | Building Machines That Learn and Think Like People | Where next? Trends in Cognitive Sciences, 9 (3), 111â 117. Elman, J. L., Bates, E. A., Johnson, M. H., Karmiloï¬ -Smith, A., Parisi, D., & Plunkett, K. (1996). Rethinking innateness. Cambridge, MA: MIT Press. Eslami, S. M. A., Heess, N., Weber, T., Tassa, Y., Kavukcuoglu, K., & Hinton, G. E. (2016). Attend, infer, repeat: Fast scene understanding with generative models. arXiv preprint arXiv:1603.08575 . | 1604.00289#146 | 1604.00289#148 | 1604.00289 | [
"1511.06114"
] |
1604.00289#148 | Building Machines That Learn and Think Like People | Eslami, S. M. A., Tarlow, D., Kohli, P., & Winn, J. (2014). Just-in-time learning for fast and ï¬ exible inference. In Advances in Neural Information Processing Systems (pp. 154â 162). Fodor, J. A. (1975). The Language of Thought. Harvard University Press. Fodor, J. A., & Pylyshyn, Z. W. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28 , 3â 71. Frank, M. C., Goodman, N. D., & Tenenbaum, J. B. (2009). | 1604.00289#147 | 1604.00289#149 | 1604.00289 | [
"1511.06114"
] |
1604.00289#149 | Building Machines That Learn and Think Like People | Using speakersâ referential intentions to model early cross-situational word learning. Psychological Science, 20 , 578â 585. Freyd, J. (1983). Representing the dynamics of a static form. Memory and Cognition, 11 (4), 342â 346. Freyd, J. (1987). Dynamic Mental Representations. Psychological Review , 94 (4), 427â 438. Fukushima, K. (1980). | 1604.00289#148 | 1604.00289#150 | 1604.00289 | [
"1511.06114"
] |
1604.00289#150 | Building Machines That Learn and Think Like People | Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaï¬ ected by shift in position. Biological Cybernetics, 36 , 193â 202. Gallistel, C., & Matzel, L. D. (2013). The neuroscience of learning: beyond the Hebbian synapse. 47 Annual Review of Psychology, 64 , 169â 200. Gelly, S., & Silver, D. (2008). | 1604.00289#149 | 1604.00289#151 | 1604.00289 | [
"1511.06114"
] |
1604.00289#151 | Building Machines That Learn and Think Like People | Achieving master level play in 9 x 9 computer go.. Gelly, S., & Silver, D. (2011). Monte-carlo tree search and rapid action value estimation in computer go. Artiï¬ cial Intelligence, 175 (11), 1856â 1875. Gelman, A., Carlin, J. B., Stern, H. S., & Rubin, D. B. (2004). Bayesian Data Analysis. Chapman and Hall/CRC. Gelman, A., Lee, D., & Guo, J. (2015). Stan a probabilistic programming language for Bayesian inference and optimization. Journal of Educational and Behavioral Statistics, 40 , 530â 543. Geman, S., Bienenstock, E., & Doursat, R. (1992). | 1604.00289#150 | 1604.00289#152 | 1604.00289 | [
"1511.06114"
] |
1604.00289#152 | Building Machines That Learn and Think Like People | Neural networks and the bias/variance dilemma. Neural Computation, 4 , 1â 58. Gershman, S. J., & Goodman, N. D. (2014). Amortized inference in probabilistic reasoning. In Proceedings of the 36th Annual Conference of the Cognitive Science Society. (2015). Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science, 349 , 273â 278. Gershman, S. J., Markman, A. B., & Otto, A. R. (2014). Retrospective revaluation in sequential decision making: A tale of two systems. Journal of Experimental Psychology: General , 143 , 182â 194. Gershman, S. J., Vul, E., & Tenenbaum, J. B. (2012). Multistability and perceptual inference. Neural Computation, 24 , 1â 24. Gerstenberg, T., Goodman, N. D., Lagnado, D. a., & Tenenbaum, J. B. (2015). | 1604.00289#151 | 1604.00289#153 | 1604.00289 | [
"1511.06114"
] |
1604.00289#153 | Building Machines That Learn and Think Like People | How, whether, why: Causal judgments as counterfactual contrasts. Proceedings of the 37th Annual Conference of the Cognitive Science Society. Ghahramani, Z. (2015). Probabilistic machine learning and artiï¬ cial intelligence. Nature, 521 , 452â 459. Goodman, N. D., Mansinghka, V. K., Roy, D. M., Bonawitz, K., & Tenenbaum, J. B. (2008). | 1604.00289#152 | 1604.00289#154 | 1604.00289 | [
"1511.06114"
] |
1604.00289#154 | Building Machines That Learn and Think Like People | Church: A language for generative models. Uncertainty in Artiï¬ cial Intelligence. Gopnik, A., Glymour, C., Sobel, D. M., Schulz, L. E., Kushnir, T., & Danks, D. (2004). A theory of causal learning in children: Causal maps and Bayes nets. Psychological Review , 111 (1), 3â 32. Gopnik, A., & Meltzoï¬ , A. N. (1999). Words, Thoughts, and Theories. Mind: A Quarterly Review of Philosophy, 108 , 0. Graves, A. (2014). | 1604.00289#153 | 1604.00289#155 | 1604.00289 | [
"1511.06114"
] |
1604.00289#155 | Building Machines That Learn and Think Like People | Generating sequences with recurrent neural networks. arXiv preprint. Retrieved from http://arxiv.org/abs/1308.0850 Graves, A., Mohamed, A.-r., & Hinton, G. (2013). Speech recognition with deep recurrent neu- In Acoustics, speech and signal processing (icassp), 2013 ieee international ral networks. conference on (pp. 6645â 6649). Graves, A., Wayne, G., & Danihelka, I. (2014). Neural Turing Machines. arXiv preprint. Retrieved from http://arxiv.org/abs/1410.5401v1 Graves, A., Wayne, G., Reynolds, M., Harley, T., Danihelka, I., Grabska-Barwi´nska, A., . . . Has- sabis, D. (2016). | 1604.00289#154 | 1604.00289#156 | 1604.00289 | [
"1511.06114"
] |
1604.00289#156 | Building Machines That Learn and Think Like People | Hybrid computing using a neural network with dynamic external memory. Nature. Grefenstette, E., Hermann, K. M., Suleyman, M., & Blunsom, P. (2015). Learning to Transduce with Unbounded Memory. In Advances in Neural Information Processing Systems. Gregor, K., Besse, F., Rezende, D. J., Danihelka, I., & Wierstra, D. (2016). Towards Conceptual Compression. arXiv preprint. Retrieved from http://arxiv.org/abs/1604.08772 | 1604.00289#155 | 1604.00289#157 | 1604.00289 | [
"1511.06114"
] |
1604.00289#157 | Building Machines That Learn and Think Like People | 48 Gregor, K., Danihelka, I., Graves, A., Rezende, D. J., & Wierstra, D. (2015). DRAW: A Recurrent Neural Network For Image Generation. In International Conference on Machine Learning (ICML). Griï¬ ths, T. L., Chater, N., Kemp, C., Perfors, A., & Tenenbaum, J. B. (2010). Probabilistic models of cognition: exploring representations and inductive biases. Trends in Cognitive Sciences, 14 (8), 357â 64. Griï¬ ths, T. L., Vul, E., & Sanborn, A. N. (2012). Bridging levels of analysis for probabilistic models of cognition. Current Directions in Psychological Science, 21 , 263â 268. Grossberg, S. (1976). Adaptive pattern classiï¬ | 1604.00289#156 | 1604.00289#158 | 1604.00289 | [
"1511.06114"
] |
1604.00289#158 | Building Machines That Learn and Think Like People | cation and universal recoding: I. parallel development and coding of neural feature detectors. Biological Cybernetics, 23 , 121â 134. Grosse, R., Salakhutdinov, R., Freeman, W. T., & Tenenbaum, J. B. (2012). Exploiting composi- tionality to explore a large space of model structures. In Uncertainty in Artiï¬ cial Intelligence. Guo, X., Singh, S., Lee, H., Lewis, R. L., & Wang, X. (2014). | 1604.00289#157 | 1604.00289#159 | 1604.00289 | [
"1511.06114"
] |
1604.00289#159 | Building Machines That Learn and Think Like People | Deep learning for real-time Atari game play using oï¬ ine Monte-Carlo tree search planning. In Advances in neural information processing systems (pp. 3338â 3346). Gweon, H., Tenenbaum, J. B., & Schulz, L. E. Infants consider both the sample and the sampling process in inductive generalization. Proceedings of the National Academy of Sciences, 107 , 9066â 9071. doi: 10.1073/pnas.1003095107 Halle, M., & Stevens, K. (1962). | 1604.00289#158 | 1604.00289#160 | 1604.00289 | [
"1511.06114"
] |
1604.00289#160 | Building Machines That Learn and Think Like People | Speech Recognition: A Model and a Program for Research. IRE Transactions on Information Theory, 8 (2), 155â 159. Hamlin, K. J. (2013). Moral Judgment and Action in Preverbal Infants and Toddlers: Evidence for an Innate Moral Core. Current Directions in Psychological Science, 22 , 186â 193. doi: 10.1177/0963721412470687 Hamlin, K. J., Ullman, T., Tenenbaum, J., Goodman, N. D., & Baker, C. (2013). The mentalistic basis of core social cognition: Experiments in preverbal infants and a computational model. Developmental Science, 16 , 209â 226. doi: 10.1111/desc.12017 Hamlin, K. J., Wynn, K., & Bloom, P. (2007). Social evaluation by preverbal infants. Nature, 450 , 557â 560. Hamlin, K. J., Wynn, K., & Bloom, P. (2010). | 1604.00289#159 | 1604.00289#161 | 1604.00289 | [
"1511.06114"
] |
1604.00289#161 | Building Machines That Learn and Think Like People | Three-month-olds show a negativity bias in their social evaluations. Developmental Science, 13 , 923â 929. doi: 10.1111/j.1467-7687.2010.00951 .x Harlow, H. F. (1949). The formation of learning sets. Psychological Review , 56 (1), 51â 65. Harlow, H. F. (1950). Learning and satiation of response in intrinsically motivated complex puzzle performance by monkeys. Journal of Comparative and Physiological Psychology, 43 , 289â 294. Hauser, M. D., Chomsky, N., & Fitch, W. T. (2002). The faculty of language: what is it, who has it, and how did it evolve? Science, 298 , 1569â 1579. | 1604.00289#160 | 1604.00289#162 | 1604.00289 | [
"1511.06114"
] |
1604.00289#162 | Building Machines That Learn and Think Like People | Hayes-Roth, B., & Hayes-Roth, F. (1979). A cognitive model of planning. Cognitive Science, 3 , 275â 310. He, K., Zhang, X., Ren, S., & Sun, J. (2015). Deep Residual Learning for Image Recognition. arXiv preprint. Retrieved from http://arxiv.org/abs/1512.03385 Hebb, D. O. (1949). The organization of behavior. | 1604.00289#161 | 1604.00289#163 | 1604.00289 | [
"1511.06114"
] |
1604.00289#163 | Building Machines That Learn and Think Like People | Wiley. Heess, N., Tarlow, D., & Winn, J. (2013). Learning to pass expectation propagation messages. In Advances in Neural Information Processing Systems (pp. 3219â 3227). Hespos, S. J., & Baillargeon, R. (2008). Young infantsâ actions reveal their developing knowledge of support variables: Converging evidence for violation-of-expectation ï¬ ndings. Cognition, 49 107 , 304â 316. | 1604.00289#162 | 1604.00289#164 | 1604.00289 | [
"1511.06114"
] |
1604.00289#164 | Building Machines That Learn and Think Like People | Hespos, S. J., Ferry, A. L., & Rips, L. J. (2009). Five-month-old infants have diï¬ erent expectations for solids and liquids. Psychological Science, 20 (5), 603â 611. Hinton, G. E. (2002). Training products of experts by minimizing contrastive divergence. Neural Computation, 14 (8), 1771â 800. Hinton, G. E., Dayan, P., Frey, B. J., & Neal, R. M. (1995). The â wake-sleepâ | 1604.00289#163 | 1604.00289#165 | 1604.00289 | [
"1511.06114"
] |
1604.00289#165 | Building Machines That Learn and Think Like People | algorithm for unsupervised neural networks. Science, 268 (5214), 1158â 61. Hinton, G. E., Deng, L., Yu, D., Dahl, G. E., Mohamed, A.-r., Jaitly, N., . . . Kingsbury, B. (2012). Deep neural networks for acoustic modeling in speech recognition. IEEE Signal Processing Magazine, 29 , 82â 97. Hinton, G. E., Osindero, S., & Teh, Y. W. (2006). | 1604.00289#164 | 1604.00289#166 | 1604.00289 | [
"1511.06114"
] |
1604.00289#166 | Building Machines That Learn and Think Like People | A fast learning algorithm for deep belief nets. Neural Computation, 18 , 1527â 1554. Hoï¬ man, D. D., & Richards, W. A. (1984). Parts of recognition. Cognition, 18 , 65â 96. Hofstadter, D. R. (1985). Metamagical themas: Questing for the essence of mind and pattern. New York: Basic Books. Horst, J. S., & Samuelson, L. K. (2008). Fast Mapping but Poor Retention by 24-Month-Old Infants. Infancy, 13 (2), 128â 157. Huang, Y., & Rao, R. P. (2014). | 1604.00289#165 | 1604.00289#167 | 1604.00289 | [
"1511.06114"
] |
1604.00289#167 | Building Machines That Learn and Think Like People | Neurons as Monte Carlo samplers: Bayesian? inference and In Advances in neural information processing systems (pp. learning in spiking networks. 1943â 1951). Hummel, J. E., & Biederman, I. (1992). Dynamic binding in a neural network for shape recognition. Psychological Review , 99 (3), 480â 517. Jackendoï¬ , R. (2003). Foundations of Language. Oxford University Press. Jara-Ettinger, J., Gweon, H., Tenenbaum, J. B., & Schulz, L. E. (2015). Childrens understanding of the costs and rewards underlying rational action. Cognition, 140 , 14â 23. Jern, A., & Kemp, C. (2013). | 1604.00289#166 | 1604.00289#168 | 1604.00289 | [
"1511.06114"
] |
1604.00289#168 | Building Machines That Learn and Think Like People | A probabilistic account of exemplar and category generation. Cognitive Psychology, 66 (1), 85â 125. Jern, A., & Kemp, C. (2015). A decision network account of reasoning about other peoples choices. Cognition, 142 , 12â 38. Johnson, S. C., Slaughter, V., & Carey, S. (1998). Whose gaze will infants follow? The elicitation of gaze-following in 12-month-olds. Developmental Science, 1 , 233â 238. doi: 10.1111/1467 -7687.00036 Juang, B. H., & Rabiner, L. R. (1990). Hidden Markov models for speech recognition. Technometric, 33 (3), 251â 272. | 1604.00289#167 | 1604.00289#169 | 1604.00289 | [
"1511.06114"
] |
1604.00289#169 | Building Machines That Learn and Think Like People | Karpathy, A., & Fei-Fei, L. (2015). Deep Visual-Semantic Alignments for Generating Image Desscriptions. In Computer Vision and Pattern Recognition (CVPR). Kemp, C. (2007). The acquisition of inductive constraints. Unpublished doctoral dissertation, MIT. Keramati, M., Dezfouli, A., & Piray, P. (2011). Speed/accuracy trade-oï¬ between the habitual and the goal-directed processes. | 1604.00289#168 | 1604.00289#170 | 1604.00289 | [
"1511.06114"
] |
1604.00289#170 | Building Machines That Learn and Think Like People | PLoS Computational Biology, 7 , e1002055. Khaligh-Razavi, S.-M., & Kriegeskorte, N. (2014). Deep Supervised, but Not Unsupervised, Models May Explain IT Cortical Representation. PLoS Computational Biology, 10 (11), e1003915. Kilner, J. M., Friston, K. J., & Frith, C. D. (2007). Predictive coding: An account of the mirror neuron system. Cognitive Processing, 8 (3), 159â 166. Kingma, D. P., Rezende, D. J., Mohamed, S., & Welling, M. (2014). | 1604.00289#169 | 1604.00289#171 | 1604.00289 | [
"1511.06114"
] |
1604.00289#171 | Building Machines That Learn and Think Like People | Semi-supervised Learning 50 with Deep Generative Models. In Neural Information Processing Systems (NIPS). Koch, G., Zemel, R. S., & Salakhutdinov, R. (2015). Siamese neural networks for one-shot image recognition. In ICML Deep Learning Workshop. Kodratoï¬ , Y., & Michalski, R. S. (2014). Machine learning: An artiï¬ cial intelligence approach (Vol. 3). Morgan Kaufmann. Koza, J. R. (1992). Genetic programming: on the programming of computers by means of natural selection (Vol. 1). MIT press. | 1604.00289#170 | 1604.00289#172 | 1604.00289 | [
"1511.06114"
] |
1604.00289#172 | Building Machines That Learn and Think Like People | Kriegeskorte, N. (2015). Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing. Annual Review of Vision Science, 1, 417–446. Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25 (pp. 1097–1105). Kulkarni, T. D., Kohli, P., Tenenbaum, J. B., & Mansinghka, V. (2015). Picture: | 1604.00289#171 | 1604.00289#173 | 1604.00289 | [
"1511.06114"
] |
1604.00289#173 | Building Machines That Learn and Think Like People | A probabilistic programming language for scene perception. In Computer Vision and Pattern Recognition (CVPR). Kulkarni, T. D., Narasimhan, K. R., Saeedi, A., & Tenenbaum, J. B. (2016). Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation. arXiv preprint. Kulkarni, T. D., Whitney, W., Kohli, P., & Tenenbaum, J. B. (2015). | 1604.00289#172 | 1604.00289#174 | 1604.00289 | [
"1511.06114"
] |
1604.00289#174 | Building Machines That Learn and Think Like People | Deep Convolutional Inverse Graphics Network. In Computer Vision and Pattern Recognition (CVPR). Lake, B. M. (2014). Towards more human-like concept learning in machines: Compositionality, causality, and learning-to-learn. Unpublished doctoral dissertation, MIT. Lake, B. M., Lee, C.-y., Glass, J. R., & Tenenbaum, J. B. (2014). One-shot learning of generative In Proceedings of the 36th Annual Conference of the Cognitive Science speech concepts. Society (pp. 803â 808). Lake, B. M., Salakhutdinov, R., & Tenenbaum, J. B. (2012). Concept learning as motor program induction: A large-scale empirical study. In Proceedings of the 34th Annual Conference of the Cognitive Science Society. Lake, B. M., Salakhutdinov, R., & Tenenbaum, J. B. (2015). Human-level concept learning through probabilistic program induction. Science, 350 (6266), 1332â 1338. Lake, B. M., Zaremba, W., Fergus, R., & Gureckis, T. M. (2015). | 1604.00289#173 | 1604.00289#175 | 1604.00289 | [
"1511.06114"
] |
1604.00289#175 | Building Machines That Learn and Think Like People | Deep Neural Networks Predict Category Typicality Ratings for Images. In Proceedings of the 37th Annual Conference of the Cognitive Science Society. Landau, B., Smith, L. B., & Jones, S. S. (1988). The importance of shape in early lexical learning. Cognitive Development, 3 (3), 299â 321. Langley, P., Bradshaw, G., Simon, H. A., & Zytkow, J. M. (1987). Scientiï¬ c discovery: Computa- tional explorations of the creative processes. MIT press. LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. | 1604.00289#174 | 1604.00289#176 | 1604.00289 | [
"1511.06114"
] |
1604.00289#176 | Building Machines That Learn and Think Like People | Nature, 521 , 436â 444. LeCun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., & Jackel, L. D. (1989). Backpropagation applied to handwritten zip code recognition. Neural Computation, 1 , 541â 551. LeCun, Y., Bottou, L., Bengio, Y., & Haï¬ ner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE , 86 (11), 2278â 2323. | 1604.00289#175 | 1604.00289#177 | 1604.00289 | [
"1511.06114"
] |
1604.00289#177 | Building Machines That Learn and Think Like People | Lerer, A., Gross, S., & Fergus, R. (2016). Learning Physical Intuition of Block Towers by Example. arXiv preprint. Retrieved from http://arxiv.org/abs/1603.01312 51 (2009). Modeling the eï¬ ects of memory on human online sentence processing with particle ï¬ lters. In Advances in Neural Information Processing Systems (pp. 937â 944). Liao, Q., Leibo, J. Z., & Poggio, T. (2015). How important is weight symmetry in backpropagation? arXiv preprint arXiv:1510.05067 . Liberman, A. M., Cooper, F. S., Shankweiler, D. P., & Studdert-Kennedy, M. (1967). | 1604.00289#176 | 1604.00289#178 | 1604.00289 | [
"1511.06114"
] |
1604.00289#178 | Building Machines That Learn and Think Like People | Perception of the speech code. Psychological Review , 74 (6), 431â 461. Lillicrap, T. P., Cownden, D., Tweed, D. B., & Akerman, C. J. (2014). Random feedback weights support learning in deep neural networks. arXiv preprint arXiv:1411.0247 . Lloyd, J., Duvenaud, D., Grosse, R., Tenenbaum, J., & Ghahramani, Z. (2014). Automatic con- struction and natural-language description of nonparametric regression models. In Proceedings of the National Conference on Artiï¬ cial Intelligence (Vol. 2, pp. 1242â 1250). Lombrozo, T. (2009). | 1604.00289#177 | 1604.00289#179 | 1604.00289 | [
"1511.06114"
] |
1604.00289#179 | Building Machines That Learn and Think Like People | Explanation and categorization: How â why?â informs â what?â . Cognition, 110 (2), 248â 53. Lopez-Paz, D., Bottou, L., Scholk¨opf, B., & Vapnik, V. (2016). Unifying distillation and privileged information. In International Conference on Learning Representations (ICLR). Lopez-Paz, D., Muandet, K., Scholk¨opf, B., & Tolstikhin, I. (2015). Towards a Learning Theory of Cause-Eï¬ ect Inference. In Proceedings of the 32nd International Conference on Machine Learning (ICML). | 1604.00289#178 | 1604.00289#180 | 1604.00289 | [
"1511.06114"
] |
1604.00289#180 | Building Machines That Learn and Think Like People | Luong, M.-T., Le, Q. V., Sutskever, I., Vinyals, O., & Kaiser, L. (2015). Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114 . Lupyan, G., & Bergen, B. (2016). How Language Programs the Mind. Topics in Cognitive Science, 8 (2), 408â 424. Retrieved from http://doi.wiley.com/10.1111/tops.12155 (2015). | 1604.00289#179 | 1604.00289#181 | 1604.00289 | [
"1511.06114"
] |
1604.00289#181 | Building Machines That Learn and Think Like People | Words and the world: Predictive coding and the language- perception-cognition interface. Current Directions in Psychological Science, 24 (4), 279â 284. (2013). Sidekick agents for sequential planning problems. Unpublished doctoral Lupyan, Macindoe, O. dissertation, Massachusetts Institute of Technology. Magid, R. W., Sheskin, M., & Schulz, L. E. (2015). Imagination and the generation of new ideas. Cognitive Development, 34 , 99â 110. Mansinghka, V., Selsam, D., & Perov, Y. (2014). Venture: | 1604.00289#180 | 1604.00289#182 | 1604.00289 | [
"1511.06114"
] |
1604.00289#182 | Building Machines That Learn and Think Like People | A higher-order probabilistic program- ming platform with programmable inference. arXiv preprint arXiv:1404.0099 . Marcus, G. (1998). Rethinking Eliminative Connectionism. Cognitive Psychology, 282 (37), 243â 282. Marcus, G. (2001). The algebraic mind: Integrating connectionism and cognitive science. MIT press. Markman, A. B., & Makin, V. S. (1998). Referential communication and category acquisition. Journal of Experimental Psychology: General , 127 (4), 331â 54. Markman, A. B., & Ross, B. H. (2003). Category use and category learning. Psychological Bulletin, 129 (4), 592â 613. Markman, E. M. (1989). Categorization and Naming in Children. Cambridge, MA: MIT Press. Marr, D. C. (1982). | 1604.00289#181 | 1604.00289#183 | 1604.00289 | [
"1511.06114"
] |
1604.00289#183 | Building Machines That Learn and Think Like People | Vision. San Francisco, CA: W.H. Freeman and Company. Marr, D. C., & Nishihara, H. K. (1978). Representation and recognition of the spatial organization of three-dimensional shapes. Proceedings of the Royal Society of London. Series B, 200(1140), 269–94. McClelland, J. L. (1988). Parallel distributed processing: Implications for cognition and development (Tech. Rep.). DTIC Document. McClelland, J. L., Botvinick, M. M., Noelle, D. C., Plaut, D. C., Rogers, T. T., Seidenberg, M. S., & Smith, L. B. (2010). Letting structure emerge: connectionist and dynamical systems approaches to cognition. Trends in Cognitive Sciences, 14(8), 348–56. McClelland, J. L., McNaughton, B. L., & O'Reilly, R. C. (1995). | 1604.00289#182 | 1604.00289#184 | 1604.00289 | [
"1511.06114"
] |
1604.00289#184 | Building Machines That Learn and Think Like People | Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychological Review , 102 (3), 419â 57. McClelland, J. L., Rumelhart, D. E., & the PDP Research Group. (1986). Parallel Distributed Processing: Explorations in the microstructure of cognition. Volume II. Cambridge, MA: MIT Press. Mikolov, T., Joulin, A., & Baroni, M. (2016). A Roadmap towards Machine Intelligence. arXiv preprint. Retrieved from http://arxiv.org/abs/1511.08130 Mikolov, T., Sutskever, I., & Chen, K. (2013). Distributed Representations of Words and Phrases and their Compositionality. Advances in Neural Information Processing Systems. | 1604.00289#183 | 1604.00289#185 | 1604.00289 | [
"1511.06114"
] |
1604.00289#185 | Building Machines That Learn and Think Like People | Miller, E. G., Matsakis, N. E., & Viola, P. A. (2000). Learning from one example through shared densities on transformations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Miller, G. A., & Johnson-Laird, P. N. (1976). Language and perception. Cambridge, MA: Belknap Press. Minsky, M. L. (1974). A framework for representing knowledge. MIT-AI Laboratory Memo 306 . Minsky, M. L., & Papert, S. A. (1969). Perceptrons: An introduction to computational geometry. | 1604.00289#184 | 1604.00289#186 | 1604.00289 | [
"1511.06114"
] |
1604.00289#186 | Building Machines That Learn and Think Like People | MIT Press. Mitchell, T. M., Keller, R. R., & Kedar-cabelli, S. T. (1986). Explanation-Based Generalization: A Unifying View. Machine Learning, 1 , 47â 80. Mnih, A., & Gregor, K. (2014). Neural variational inference and learning in belief networks. In Proceedings of the 31st International Conference on Machine Learning (pp. 1791â 1799). Mnih, V., Heess, N., Graves, A., & Kavukcuoglu, K. (2014). Recurrent Models of Visual Attention. In Advances in Neural Information Processing Systems 27 (pp. 1â | 1604.00289#185 | 1604.00289#187 | 1604.00289 | [
"1511.06114"
] |
1604.00289#187 | Building Machines That Learn and Think Like People | 9). Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., . . . Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529–533. Mohamed, S., & Rezende, D. J. (2015). Variational information maximisation for intrinsically motivated reinforcement learning. In Advances in Neural Information Processing Systems (pp. 2125–2133). Moreno-Bote, R., Knill, D. C., & Pouget, A. (2011). Bayesian sampling in visual perception. Proceedings of the National Academy of Sciences, 108, 12491–12496. Murphy, G. L. (1988). Comprehending complex concepts. Cognitive Science, 12(4), 529–562. Murphy, G. L., & Medin, D. L. (1985). The role of theories in conceptual coherence. Psychological Review, 92(3), 289–316. Murphy, G. L., & Ross, B. H. (1994). Predictions from Uncertain Categorizations. Cognitive Psychology, 27, 148–193. Neisser, U. (1966). Cognitive Psychology. | 1604.00289#186 | 1604.00289#188 | 1604.00289 | [
"1511.06114"
] |
1604.00289#188 | Building Machines That Learn and Think Like People | New York: Appleton-Century-Crofts. Newell, A., & Simon, H. A. (1961). Gps, a program that simulates human thought. Defense Technical Information Center. Newell, A., & Simon, H. A. (1972). Human problem solving. Prentice-Hall. 53 Niv, Y. (2009). Reinforcement learning in the brain. Journal of Mathematical Psychology, 53 , 139â 154. Oâ Donnell, T. J. (2015). | 1604.00289#187 | 1604.00289#189 | 1604.00289 | [
"1511.06114"
] |
1604.00289#189 | Building Machines That Learn and Think Like People | Productivity and Reuse in Language: A Theory of Linguistic Computation and Storage. Cambridge, MA: MIT Press. Osherson, D. N., & Smith, E. E. (1981). On the adequacy of prototype theory as a theory of concepts. Cognition, 9 (1), 35â 58. Parisotto, E., Ba, J. L., & Salakhutdinov, R. (2016). Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning. In International Conference on Learning Representations (ICLR). Retrieved from http://arxiv.org/abs/1511.06342 Pecevski, D., Buesing, L., & Maass, W. (2011). Probabilistic inference in general graphical models through sampling in stochastic networks of spiking neurons. PLoS Computational Biology, 7 , e1002294. Peterson, J. C., Abbott, J. T., & Griï¬ ths, T. L. (2016). | 1604.00289#188 | 1604.00289#190 | 1604.00289 | [
"1511.06114"
] |
1604.00289#190 | Building Machines That Learn and Think Like People | Adapting Deep Network Features to Capture Psychological Representations. In Proceedings of the 38th Annual Conference of the Cognitive Science Society. Piantadosi, S. T. (2011). Learning and the language of thought. Unpublished doctoral dissertation, Massachusetts Institute of Technology. Pinker, S. (2007). The Stuï¬ of Thought. Penguin. Pinker, S., & Prince, A. (1988). On language and connectionism: Analysis of a parallel distributed processing model of language acquisition. Cognition, 28 , 73â 193. Power, J. M., Thompson, L. T., Moyer, J. R., & Disterhoft, J. F. (1997). Enhanced synaptic transmission in ca1 hippocampus after eyeblink conditioning. Journal of Neurophysiology, 78 , 1184â | 1604.00289#189 | 1604.00289#191 | 1604.00289 | [
"1511.06114"
] |
1604.00289#191 | Building Machines That Learn and Think Like People | 1187. Premack, D., & Premack, A. J. (1997). Infants Attribute Value to the Goal-Directed Actions of Self-propelled Objects (Vol. 9). doi: 10.1162/jocn.1997.9.6.848 Reed, S., & de Freitas, N. (2016). Neural Programmer-Interpreters. In International Conference on Learning Representations (ICLR). Retrieved from http://arxiv.org/abs/1511.06279 # Rehder, B. (2003). A causal-model theory of conceptual representation and categorization. | 1604.00289#190 | 1604.00289#192 | 1604.00289 | [
"1511.06114"
] |
1604.00289#192 | Building Machines That Learn and Think Like People | Journal of Experimental Psychology: Learning, Memory, and Cognition, 29 (6), 1141â 59. Rehder, B., & Hastie, R. (2001). Causal Knowledge and Categories: The Eï¬ ects of Causal Beliefs on Categorization, Induction, and Similarity. Journal of Experimental Psychology: General , 130 (3), 323â 360. Rehling, J. A. (2001). Letter Spirit (Part Two): Modeling Creativity in a Visual Domain. | 1604.00289#191 | 1604.00289#193 | 1604.00289 | [
"1511.06114"
] |
1604.00289#193 | Building Machines That Learn and Think Like People | Unpub- lished doctoral dissertation, Indiana University. Rezende, D. J., Mohamed, S., Danihelka, I., Gregor, K., & Wierstra, D. (2016). One-Shot Gen- In International Conference on Machine Learning eralization in Deep Generative Models. (ICML). Retrieved from http://arxiv.org/abs/1603.05106v1 Rezende, D. J., Mohamed, S., & Wierstra, D. (2014). | 1604.00289#192 | 1604.00289#194 | 1604.00289 | [
"1511.06114"
] |
1604.00289#194 | Building Machines That Learn and Think Like People | Stochastic backpropagation and approxi- mate inference in deep generative models. In International Conference on Machine Learning (ICML). Rips, L. J. (1975). Inductive judgments about natural categories. Journal of Verbal Learning and Verbal Behavior , 14 (6), 665â 681. Rips, L. J., & Hespos, S. J. (2015). Divisions of the physical world: Concepts of objects and substances. Psychological Bulletin, 141 , 786â 811. Rogers, T. T., & McClelland, J. L. (2004). Semantic Cognition. Cambridge, MA: MIT Press. | 1604.00289#193 | 1604.00289#195 | 1604.00289 | [
"1511.06114"
] |
1604.00289#195 | Building Machines That Learn and Think Like People | 54 Rosenblatt, F. (1958). The perceptron: a probabilistic model for information storage and organi- zation in the brain. Psychological Review , 65 , 386â 408. Rougier, N. P., Noelle, D. C., Braver, T. S., Cohen, J. D., & Oâ Reilly, R. C. (2005). Prefrontal cortex and ï¬ exible cognitive control: | 1604.00289#194 | 1604.00289#196 | 1604.00289 | [
"1511.06114"
] |
1604.00289#196 | Building Machines That Learn and Think Like People | Rules without symbols. Proceedings of the National Academy of Sciences (PNAS), 102 (20), 7338â 7343. Rumelhart, D. E., Hinton, G., & Williams, R. (1986). Learning representations by back-propagating errors. Nature, 323 (9), 533â 536. Rumelhart, D. E., & McClelland, J. L. (1986). On Learning the Past Tenses of English Verbs. | 1604.00289#195 | 1604.00289#197 | 1604.00289 | [
"1511.06114"
] |
1604.00289#197 | Building Machines That Learn and Think Like People | In Parallel distributed processing: Explorations in the microstructure of cognition (pp. 216â 271). Cambridge, MA: MIT Press. Rumelhart, D. E., McClelland, J. L., & the PDP Research Group. (1986). Parallel Distributed Processing: Explorations in the microstructure of cognition. Volume I. Cambridge, MA: MIT Press. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., . . . Fei-Fei, L. (2015). ImageNet large scale visual recognition challenge (Tech. | 1604.00289#196 | 1604.00289#198 | 1604.00289 | [
"1511.06114"
] |
1604.00289#198 | Building Machines That Learn and Think Like People | Rep.). Russell, S., & Norvig, P. (2003). Artiï¬ cial Intelligence: A Modern Approach. Upper Saddle River, NJ: Prentice Hall. Rusu, A. A., Rabinowitz, N. C., Desjardins, G., Soyer, H., Kirkpatrick, J., Kavukcuoglu, K., . . . Hadsell, R. (2016). Progressive Neural Networks. arXiv preprint. Retrieved from http:// arxiv.org/abs/1606.04671 Salakhutdinov, R., Tenenbaum, J., & Torralba, A. (2012). One-shot learning with a hierarchical nonparametric Bayesian model. JMLR Workshop on Unsupervised and Transfer Learning, 27 , 195â | 1604.00289#197 | 1604.00289#199 | 1604.00289 | [
"1511.06114"
] |
1604.00289#199 | Building Machines That Learn and Think Like People | 207. Salakhutdinov, R., Tenenbaum, J. B., & Torralba, A. (2013). Learning with Hierarchical-Deep Models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35 (8), 1958â 71. Salakhutdinov, R., Torralba, A., & Tenenbaum, J. (2011). Learning to Share Visual Appearance for Multiclass Object Detection. In Computer Vision and Pattern Recognition (CVPR). | 1604.00289#198 | 1604.00289#200 | 1604.00289 | [
"1511.06114"
] |
1604.00289#200 | Building Machines That Learn and Think Like People | Sanborn, A. N., Mansinghka, V. K., & Griffiths, T. L. (2013). Reconciling intuitive physics and Newtonian mechanics for colliding objects. Psychological Review, 120(2), 411. Scellier, B., & Bengio, Y. (2016). Towards a biologically plausible backprop. arXiv preprint arXiv:1602.05179. Schank, R. C. (1972). Conceptual dependency: A theory of natural language understanding. Cognitive Psychology, 3, 552–631. Schaul, T., Quan, J., Antonoglou, I., & Silver, D. (2016). Prioritized experience replay. In International Conference on Learning Representations (ICLR). Retrieved from http://arxiv.org/abs/1511.05952 Schlottmann, A., Cole, K., Watts, R., & White, M. (2013). | 1604.00289#199 | 1604.00289#201 | 1604.00289 | [
"1511.06114"
] |
1604.00289#201 | Building Machines That Learn and Think Like People | Domain-speciï¬ c perceptual causality in children depends on the spatio-temporal conï¬ guration, not motion onset. Frontiers in Psychology, 4 . doi: 10.3389/fpsyg.2013.00365 Schlottmann, A., Ray, E. D., Mitchell, A., & Demetriou, N. (2006). Perceived physical and social causality in animated motions: Spontaneous reports and ratings. Acta Psychologica, 123 , 112â 143. doi: 10.1016/j.actpsy.2006.05.006 Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61 , 85â 117. Scholl, B. J., & Gao, T. (2013). Perceiving Animacy and Intentionality: Visual Processing or | 1604.00289#200 | 1604.00289#202 | 1604.00289 | [
"1511.06114"
] |
1604.00289#202 | Building Machines That Learn and Think Like People | 55 Higher-Level Judgment? Social perception: Detection and interpretation of animacy, agency, and intention. Schultz, W., Dayan, P., & Montague, P. R. (1997). A neural substrate of prediction and reward. Science, 275 , 1593â 1599. Schulz, L. (2012). The origins of inquiry: Inductive inference and exploration in early childhood. Trends in Cognitive Sciences, 16 (7), 382â | 1604.00289#201 | 1604.00289#203 | 1604.00289 | [
"1511.06114"
] |
1604.00289#203 | Building Machines That Learn and Think Like People | 9. Schulz, L. E., Gopnik, A., & Glymour, C. (2007). Preschool children learn about causal structure from conditional interventions. Developmental Science, 10 , 322â 332. doi: 10.1111/j.1467 -7687.2007.00587.x Sermanet, P., Eigen, D., Zhang, X., Mathieu, M., Fergus, R., & LeCun, Y. (2014). OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks. In Inter- national Conference on Learning Representations (ICLR). | 1604.00289#202 | 1604.00289#204 | 1604.00289 | [
"1511.06114"
] |
1604.00289#204 | Building Machines That Learn and Think Like People | Shafto, P., Goodman, N. D., & Griï¬ ths, T. L. (2014). A rational account of pedagogical reasoning: Teaching by, and learning from, examples. Cognitive Psychology, 71 , 55â 89. Shultz, T. R. (2003). Computational developmental psychology. MIT Press. Siegler, R. S., & Chen, Z. (1998). Developmental diï¬ erences in rule learning: A microgenetic analysis. Cognitive Psychology, 36 (3), 273â 310. Silver, D. (2016). Personal communication. Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Driessche, G. V. D., . . . Hassabis, D. (2016). | 1604.00289#203 | 1604.00289#205 | 1604.00289 | [
"1511.06114"
] |
1604.00289#205 | Building Machines That Learn and Think Like People | Mastering the game of Go with deep neural networks and tree search. Nature, 529(7585), 484–489. Smith, L. B., Jones, S. S., Landau, B., Gershkoff-Stowe, L., & Samuelson, L. (2002). Object name learning provides on-the-job training for attention. Psychological Science, 13(1), 13–19. Solomon, K., Medin, D., & Lynch, E. (1999). Concepts do more than categorize. Trends in Cognitive Sciences, 3(3), 99–105. | 1604.00289#204 | 1604.00289#206 | 1604.00289 | [
"1511.06114"
] |
1604.00289#206 | Building Machines That Learn and Think Like People | Spelke, E. S. (1990). Principles of Object Perception. Cognitive Science, 14 (1), 29â 56. Spelke, E. S. (2003). Core knowledge. Attention and performance, 20 . Spelke, E. S., Gutheil, G., & Van de Walle, G. (1995). The development of object perception. In Visual cognition: An invitation to cognitive science, vol. 2 (2nd ed.). an invitation to cognitive science (pp. 297â | 1604.00289#205 | 1604.00289#207 | 1604.00289 | [
"1511.06114"
] |
1604.00289#207 | Building Machines That Learn and Think Like People | 330). Spelke, E. S., & Kinzler, K. D. (2007). Core knowledge. Developmental Science, 10 (1), 89â 96. Srivastava, N., & Salakhutdinov, R. (2013). Discriminative Transfer Learning with Tree-based Priors. In Advances in Neural Information Processing Systems 26. Stadie, B. C., Levine, S., & Abbeel, P. (2016). Incentivizing Exploration In Reinforcement Learning With Deep Predictive Models. arXiv preprint. Retrieved from http://arxiv.org/abs/ 1507.00814 Stahl, A. E., & Feigenson, L. (2015). Observing the unexpected enhances infantsâ learning and exploration. Science, 348 (6230), 91â 94. Sternberg, R. J., & Davidson, J. E. (1995). | 1604.00289#206 | 1604.00289#208 | 1604.00289 | [
"1511.06114"
] |
1604.00289#208 | Building Machines That Learn and Think Like People | The nature of insight. The MIT Press. Stuhlm¨uller, A., Taylor, J., & Goodman, N. D. (2013). Learning stochastic inverses. In Advances in Neural Information Processing Systems (pp. 3048â 3056). Sukhbaatar, S., Szlam, A., Weston, J., & Fergus, R. (2015). End-To-End Memory Networks. In Advances in Neural Information Processing Systems 29. Retrieved from http://arxiv.org/ abs/1503.08895 Sutton, R. S. (1990). | 1604.00289#207 | 1604.00289#209 | 1604.00289 | [
"1511.06114"
] |
1604.00289#209 | Building Machines That Learn and Think Like People | Integrated architectures for learning, planning, and reacting based on ap- 56 proximating dynamic programming. In Proceedings of the Seventh International Conference on Machine Learning (pp. 216â 224). Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., . . . Rabinovich, A. (2014). Going Deeper with Convolutions. arXiv preprint. Retrieved from http://arxiv.org/abs/ 1409.4842 Tauber, S., & Steyvers, M. (2011). Using inverse planning and theory of mind for social goal inference. In Proceedings of the 33rd annual conference of the cognitive science society (pp. 2480â 2485). T´egl´as, E., Vul, E., Girotto, V., Gonzalez, M., Tenenbaum, J. B., & Bonatti, L. L. (2011). Pure reasoning in 12-month-old infants as probabilistic inference. Science, 332 (6033), 1054â 9. Tenenbaum, J. B., Kemp, C., Griï¬ ths, T. L., & Goodman, N. D. (2011). | 1604.00289#208 | 1604.00289#210 | 1604.00289 | [
"1511.06114"
] |
1604.00289#210 | Building Machines That Learn and Think Like People | How to Grow a Mind: Statistics, Structure, and Abstraction. Science, 331 (6022), 1279â 85. Tian, Y., & Zhu, Y. (2016). Better Computer Go Player with Neural Network and Long-term In International Conference on Learning Representations (ICLR). Retrieved Prediction. from http://arxiv.org/abs/1511.06410 Tomasello, M. (2010). Origins of human communication. MIT press. Torralba, A., Murphy, K. P., & Freeman, W. T. (2007). | 1604.00289#209 | 1604.00289#211 | 1604.00289 | [
"1511.06114"
] |
1604.00289#211 | Building Machines That Learn and Think Like People | Sharing visual features for multiclass and multiview object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29 (5), 854â 869. Tremoulet, P. D., & Feldman, J. (2000). Perception of animacy from the motion of a single object. Perception, 29 , 943â 951. Tsividis, P., Gershman, S. J., Tenenbaum, J. B., & Schulz, L. (2013). Information Selection in Noisy Environments with Large Action Spaces. In Proceedings of the 36th Annual Conference of the Cognitive Science Society (pp. 1622â | 1604.00289#210 | 1604.00289#212 | 1604.00289 | [
"1511.06114"
] |
1604.00289#212 | Building Machines That Learn and Think Like People | 1627). Tsividis, P., Tenenbaum, J. B., & Schulz, L. E. (2015). Constraints on hypothesis selection in causal learning. Proceedings of the 37th Annual Cognitive Science Society. Turing, A. M. (1950). Computing machinery and intelligence. Mind, LIX(236), 433–460. Retrieved from http://mind.oxfordjournals.org/content/LIX/236/433 doi: 10.1093/mind/LIX.236.433 Tversky, B., & Hemenway, K. (1984). Objects, Parts, and Categories. Journal of Experimental Psychology: General, 113(2), 169–191. Ullman, S., Harari, D., & Dorfman, N. (2012). | 1604.00289#211 | 1604.00289#213 | 1604.00289 | [
"1511.06114"
] |
1604.00289#213 | Building Machines That Learn and Think Like People | From simple innate biases to complex visual concepts. Proceedings of the National Academy of Sciences, 109 (44), 18215â 18220. Ullman, T. D., Goodman, N. D., & Tenenbaum, J. B. (2012). Theory learning as stochastic search in the language of thought. Cognitive Development, 27 (4), 455â 480. van den Hengel, A., Russell, C., Dick, A., Bastian, J., Pooley, D., Fleming, L., & Agapito, L. In Computer Vision and (2015). Part-based modelling of compound scenes from images. Pattern Recognition (CVPR) (pp. 878â | 1604.00289#212 | 1604.00289#214 | 1604.00289 | [
"1511.06114"
] |
1604.00289#214 | Building Machines That Learn and Think Like People | 886). van Hasselt, H., Guez, A., & Silver, D. (2016). Deep Reinforcement Learning with Double Q- learning. In Thirtieth Conference on Artiï¬ cial Intelligence (AAAI). Vinyals, O., Blundell, C., Lillicrap, T., Kavukcuoglu, K., & Wierstra, D. (2016). Matching Networks for One Shot Learning. arXiv preprint. Retrieved from http://arxiv.org/abs/1606.04080 Vinyals, O., Toshev, A., Bengio, S., & Erhan, D. (2014). | 1604.00289#213 | 1604.00289#215 | 1604.00289 | [
"1511.06114"
] |
1604.00289#215 | Building Machines That Learn and Think Like People | Show and Tell: A Neural Image Caption Generator. In International Conference on Machine Learning (ICML). Vul, E., Goodman, N., Griï¬ ths, T. L., & Tenenbaum, J. B. (2014). One and Done? Optimal 57 Decisions From Very Few Samples. Cognitive Science. Wang, Z., Schaul, T., Hessel, M., van Hasselt, H., Lanctot, M., & de Freitas, N. (2016). Duel- ing network architectures for deep reinforcement learning. arXiv preprint. Retrieved from http://arxiv.org/abs/1511.06581 Ward, T. B. (1994). Structured imagination: The role of category structure in exemplar generation. Cognitive Psychology, 27 , 1â | 1604.00289#214 | 1604.00289#216 | 1604.00289 | [
"1511.06114"
] |
1604.00289#216 | Building Machines That Learn and Think Like People | 40. Watkins, C. J., & Dayan, P. (1992). Q-learning. Machine Learning, 8 , 279â 292. Wellman, H. M., & Gelman, S. A. (1992). Cognitive development: Foundational theories of core domains. Annual Review of Psychology, 43 , 337â 75. Wellman, H. M., & Gelman, S. A. (1998). Knowledge acquisition in foundational domains. In The handbook of child psychology (pp. 523â 573). Retrieved from http://doi.apa.org/psycinfo/ 2005-01927-010 Weng, C., Yu, D., Watanabe, S., & Juang, B.-H. F. (2014). | 1604.00289#215 | 1604.00289#217 | 1604.00289 | [
"1511.06114"
] |
1604.00289#217 | Building Machines That Learn and Think Like People | Recurrent deep neural networks for robust speech recognition. ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings(2), 5532â 5536. Weston, J., Chopra, S., & Bordes, A. (2015). Memory Networks. In International Conference on Learning Representations (ICLR). Williams, J. J., & Lombrozo, T. (2010). The role of explanation in discovery and generalization: Evidence from category learning. Cognitive Science, 34 (5), 776â 806. Winograd, T. (1972). | 1604.00289#216 | 1604.00289#218 | 1604.00289 | [
"1511.06114"
] |
1604.00289#218 | Building Machines That Learn and Think Like People | Understanding natural language. Cognitive Psychology, 3 , 1â 191. Winston, P. H. (1975). Learning structural descriptions from examples. In P. H. Winston (Ed.), The psychology of computer vision. New York: McGraw-Hill. Xu, F., & Tenenbaum, J. B. (2007). Word learning as Bayesian inference. Psychological Review , 114 (2), 245â 272. Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhutdinov, R., . . . Bengio, Y. (2015). | 1604.00289#217 | 1604.00289#219 | 1604.00289 | [
"1511.06114"
] |
1604.00289#219 | Building Machines That Learn and Think Like People | Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. In International Conference on Machine Learning (ICML). Retrieved from http://arxiv.org/abs/1502.03044 Yamins, D. L. K., Hong, H., Cadieu, C. F., Solomon, E. A., Seibert, D., & DiCarlo, J. J. (2014). Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences, 111(23), 8619–24. Yildirim, I., Kulkarni, T. D., Freiwald, W. A., & Tenenbaum, J. B. (2015). | 1604.00289#218 | 1604.00289#220 | 1604.00289 | [
"1511.06114"
] |
1604.00289#220 | Building Machines That Learn and Think Like People | Eï¬ cient analysis-by-synthesis in vision: A computational framework, behavioral tests, and comparison with neural representations. In Proceedings of the 37th Annual Conference of the Cognitive Science Society. Yosinski, J., Clune, J., Bengio, Y., & Lipson, H. (2014). How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems (NIPS). Zeiler, M. D., & Fergus, R. (2014). | 1604.00289#219 | 1604.00289#221 | 1604.00289 | [
"1511.06114"
] |
1604.00289#221 | Building Machines That Learn and Think Like People | Visualizing and Understanding Convolutional Networks. In European Conference on Computer Vision (ECCV). 58 | 1604.00289#220 | 1604.00289 | [
"1511.06114"
] |
|
1603.09025#0 | Recurrent Batch Normalization | arXiv:1603.09025v5 [cs.LG] 28 Feb 2017 Published as a conference paper at ICLR 2017 # RECURRENT BATCH NORMALIZATION Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre & Aaron Courville MILA - Université de Montréal [email protected] # ABSTRACT We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization. # 1 INTRODUCTION Recurrent neural network architectures such as LSTM (Hochreiter & Schmidhuber, 1997) and GRU (Cho et al., 2014) have recently exhibited state-of-the-art performance on a wide range of complex sequential problems including speech recognition (Amodei et al., 2015), machine translation (Bahdanau et al., 2015) and image and video captioning (Xu et al., 2015; Yao et al., 2015). Top-performing models, however, are based on very high-capacity networks that are computationally intensive and costly to train. Effective optimization of recurrent neural networks is thus an active area of study (Pascanu et al., 2012; Martens & Sutskever, 2011; Ollivier, 2013). It is well known that for deep feed-forward neural networks, covariate shift (Shimodaira, 2000; Ioffe & Szegedy, 2015) degrades the efficiency of training. Covariate shift is a change in the distribution of the inputs to a model. | 1603.09025#1 | 1603.09025 | [
"1609.01704"
] |
|
1603.09025#1 | Recurrent Batch Normalization | This occurs continuously during training of feed-forward neural networks, where changing the parameters of a layer affects the distribution of the inputs to all layers above it. As a result, the upper layers are continually adapting to the shifting input distribution and unable to learn effectively. This internal covariate shift (Ioffe & Szegedy, 2015) may play an especially important role in recurrent neural networks, which resemble very deep feed-forward networks. Batch normalization (Ioffe & Szegedy, 2015) is a recently proposed technique for controlling the distributions of feed-forward neural network activations, thereby reducing internal covariate shift. It involves standardizing the activations going into each layer, enforcing their means and variances to be invariant to changes in the parameters of the underlying layers. This effectively decouples each layer's parameters from those of other layers, leading to a better-conditioned optimization problem. Indeed, deep neural networks trained with batch normalization converge significantly faster and generalize better. Although batch normalization has demonstrated significant training speed-ups and generalization benefits in feed-forward networks, it has proven difficult to apply in recurrent architectures (Laurent et al., 2016; Amodei et al., 2015). It has found limited use in stacked RNNs, where the normalization is applied "vertically", i.e. to the input of each RNN, but not "horizontally", i.e. between timesteps. RNNs are deeper in the time direction, and as such batch normalization would be most beneficial when applied horizontally. However, Laurent et al. (2016) hypothesized that applying batch normalization in this way hurts training because of exploding gradients due to repeated rescaling. | 1603.09025#0 | 1603.09025#2 | 1603.09025 | [
"1609.01704"
] |
1603.09025#2 | Recurrent Batch Normalization | Our findings run counter to this hypothesis. We show that it is both possible and highly beneficial to apply batch normalization in the hidden-to-hidden transition of recurrent models. In particular, we describe a reparameterization of LSTM (Section 3) that involves batch normalization and demonstrate that it is easier to optimize and generalizes better. In addition, we empirically analyze the gradient backpropagation and show that proper initialization of the batch normalization parameters is crucial to avoiding vanishing gradients (Section 4). We evaluate our proposal on several sequential problems and show (Section 5) that our LSTM reparameterization consistently outperforms the LSTM baseline across tasks, in terms of both time to convergence and performance. Liao & Poggio (2016) simultaneously investigated batch normalization in recurrent neural networks, albeit only for very short sequences (10 steps). Ba et al. (2016) independently developed a variant of batch normalization that is also applicable to recurrent neural networks and delivers similar improvements as our method. | 1603.09025#1 | 1603.09025#3 | 1603.09025 | [
"1609.01704"
] |
1603.09025#3 | Recurrent Batch Normalization | # 2 PREREQUISITES 2.1 LSTM Long Short-Term Memory (LSTM) networks are an instance of a more general class of recurrent neural networks (RNNs), which we review briefly in this paper. Given an input sequence $X = (x_1, x_2, \dots, x_T)$, an RNN defines a sequence of hidden states $h_t$ according to $h_t = \phi(W_h h_{t-1} + W_x x_t + b)$ (1), where $W_h \in \mathbb{R}^{d_h \times d_h}$, $W_x \in \mathbb{R}^{d_x \times d_h}$, $b \in \mathbb{R}^{d_h}$ and the initial state $h_0 \in \mathbb{R}^{d_h}$ are model parameters. A popular choice for the activation function $\phi(\cdot)$ is tanh. RNNs are popular in sequence modeling thanks to their natural ability to process variable-length sequences. However, training RNNs using first-order stochastic gradient descent (SGD) is notoriously difficult due to the well-known problem of exploding/vanishing gradients (Bengio et al., 1994; Hochreiter, 1991; Pascanu et al., 2012). Gradient vanishing occurs when states h_t are not influenced by small changes in much earlier states h_τ, τ < t, preventing learning of long-term dependencies in the input data. Although learning long-term dependencies is fundamentally difficult (Bengio et al., 1994), its effects can be mitigated through architectural variations such as LSTM (Hochreiter & Schmidhuber, 1997), GRU (Cho et al., 2014) and iRNN/uRNN (Le et al., 2015; Arjovsky et al., 2015). In what follows, we focus on the LSTM architecture (Hochreiter & Schmidhuber, 1997) with recurrent transition given by | 1603.09025#2 | 1603.09025#4 | 1603.09025 | [
"1609.01704"
] |
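To make the recurrence in Eq. (1) concrete, here is a minimal NumPy sketch of a vanilla tanh RNN unrolled over a short sequence. The shapes, the row-vector convention (h @ W rather than W h), and the random inputs are illustrative assumptions, not details from the paper.

```python
import numpy as np

def rnn_step(h_prev, x_t, W_h, W_x, b):
    # Eq. (1): h_t = tanh(W_h h_{t-1} + W_x x_t + b), row-vector convention
    return np.tanh(h_prev @ W_h + x_t @ W_x + b)

# Illustrative sizes (not taken from the paper)
d_x, d_h, T = 8, 16, 5
rng = np.random.default_rng(0)
W_h = rng.normal(scale=0.1, size=(d_h, d_h))
W_x = rng.normal(scale=0.1, size=(d_x, d_h))
b = np.zeros(d_h)

h = np.zeros(d_h)                   # initial state h_0
for t in range(T):
    x_t = rng.normal(size=d_x)      # stand-in input for timestep t
    h = rnn_step(h, x_t, W_h, W_x, b)
print(h.shape)                      # (16,)
```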
1603.09025#4 | Recurrent Batch Normalization | $[\tilde{f}_t; \tilde{i}_t; \tilde{o}_t; \tilde{g}_t] = W_h h_{t-1} + W_x x_t + b$ (2), $c_t = \sigma(\tilde{f}_t) \odot c_{t-1} + \sigma(\tilde{i}_t) \odot \tanh(\tilde{g}_t)$ (3), $h_t = \sigma(\tilde{o}_t) \odot \tanh(c_t)$ (4), where $W_h \in \mathbb{R}^{d_h \times 4d_h}$, $W_x \in \mathbb{R}^{d_x \times 4d_h}$, $b \in \mathbb{R}^{4d_h}$ and the initial states $h_0 \in \mathbb{R}^{d_h}$, $c_0 \in \mathbb{R}^{d_h}$ are model parameters. σ is the logistic sigmoid function, and the ⊙ operator denotes the Hadamard product. The LSTM differs from simple RNNs in that it has an additional memory cell c_t whose update is nearly linear, which allows the gradient to flow back through time more easily. In addition, unlike the RNN, which overwrites its content at each timestep, the update of the LSTM cell is regulated by a set of gates. The forget gate f_t determines the extent to which information is carried over from the previous timestep, and the input gate i_t controls the flow of information from the current input x_t. The output gate o_t allows the model to read from the cell. This carefully controlled interaction with the cell is what allows the LSTM to robustly retain information for long periods of time. 2.2 BATCH NORMALIZATION Covariate shift (Shimodaira, 2000) is a phenomenon in machine learning where the features presented to a model change in distribution. In order for learning to succeed in the presence of covariate shift, the model's parameters must be adjusted not just to learn the concept at hand but also to adapt to the changing distribution of the inputs. In deep neural networks, this problem manifests as | 1603.09025#3 | 1603.09025#5 | 1603.09025 | [
"1609.01704"
] |
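A minimal sketch of the LSTM transition in Eqs. (2)–(4), again in NumPy over a minibatch. The ordering of the four gate blocks inside the stacked pre-activation, as well as the shapes, are assumptions of this sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(h_prev, c_prev, x_t, W_h, W_x, b):
    """One LSTM step. W_h: (d_h, 4*d_h), W_x: (d_x, 4*d_h), b: (4*d_h,)."""
    pre = h_prev @ W_h + x_t @ W_x + b            # Eq. (2): stacked [f~, i~, o~, g~]
    f_, i_, o_, g_ = np.split(pre, 4, axis=-1)    # gate block order is assumed
    c_t = sigmoid(f_) * c_prev + sigmoid(i_) * np.tanh(g_)   # Eq. (3)
    h_t = sigmoid(o_) * np.tanh(c_t)              # Eq. (4)
    return h_t, c_t

# Illustrative usage over a small minibatch
batch, d_x, d_h = 4, 8, 16
rng = np.random.default_rng(1)
W_h = rng.normal(scale=0.1, size=(d_h, 4 * d_h))
W_x = rng.normal(scale=0.1, size=(d_x, 4 * d_h))
b = np.zeros(4 * d_h)
h = np.zeros((batch, d_h))
c = np.zeros((batch, d_h))
h, c = lstm_step(h, c, rng.normal(size=(batch, d_x)), W_h, W_x, b)
```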
1603.09025#5 | Recurrent Batch Normalization | internal covariate shift (Ioffe & Szegedy, 2015), where changing the parameters of a layer affects the distribution of the inputs to all layers above it. Batch Normalization (Ioffe & Szegedy, 2015) is a recently proposed network reparameterization which aims to reduce internal covariate shift. It does so by standardizing the activations using empirical estimates of their means and standard deviations. However, it does not decorrelate the activations due to the computationally costly matrix inversion. The batch normalizing transform is as follows: $\mathrm{BN}(h; \gamma, \beta) = \beta + \gamma \odot \frac{h - \widehat{\mathbb{E}}[h]}{\sqrt{\widehat{\mathrm{Var}}[h] + \epsilon}}$ (5), where $h \in \mathbb{R}^d$ is the vector of (pre)activations to be normalized, $\gamma \in \mathbb{R}^d$, $\beta \in \mathbb{R}^d$ are model parameters that determine the mean and standard deviation of the normalized activation, and $\epsilon \in \mathbb{R}$ is a regularization hyperparameter. The division should be understood to proceed elementwise. At training time, the statistics E[h] and Var[h] are estimated by the sample mean and sample variance of the current minibatch. This allows for backpropagation through the statistics, preserving the convergence properties of stochastic gradient descent. During inference, the statistics are typically estimated based on the entire training set, so as to produce a deterministic prediction. # 3 BATCH-NORMALIZED LSTM This section introduces a reparameterization of LSTM that takes advantage of batch normalization. Contrary to Laurent et al. (2016); Amodei et al. (2015), we leverage batch normalization in both the input-to-hidden and the hidden-to-hidden transformations. We introduce the batch-normalizing transform BN( · ; γ, β) into the LSTM as follows: | 1603.09025#4 | 1603.09025#6 | 1603.09025 | [
"1609.01704"
] |
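The batch normalizing transform of equation (5) and the BN-LSTM step of equations (6)-(8) can be sketched in NumPy as follows. This is a training-time sketch that normalizes over the current minibatch, with beta_h = beta_x = 0 as in the text; it is not the authors' Theano/Blocks code.

```python
import numpy as np

def batch_norm(h, gamma, beta, eps=1e-5):
    """Equation (5): standardize h over the minibatch, then scale and shift."""
    mean = h.mean(axis=0)
    var = h.var(axis=0)
    return beta + gamma * (h - mean) / np.sqrt(var + eps)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bn_lstm_step(h_prev, c_prev, x_t, W_h, W_x, b,
                 gamma_h, gamma_x, gamma_c, beta_c, eps=1e-5):
    """One BN-LSTM step following equations (6)-(8), with beta_h = beta_x = 0."""
    pre = (batch_norm(h_prev @ W_h, gamma_h, 0.0, eps)      # recurrent term
           + batch_norm(x_t @ W_x, gamma_x, 0.0, eps)       # input term
           + b)                                              # equation (6)
    f, i, o, g = np.split(pre, 4, axis=-1)
    c_t = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)      # equation (7)
    h_t = sigmoid(o) * np.tanh(
        batch_norm(c_t, gamma_c, beta_c, eps))               # equation (8)
    return h_t, c_t
```

Normalizing the recurrent and input terms separately is what gives the model independent control over their relative contributions via gamma_h and gamma_x.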
1603.09025#6 | Recurrent Batch Normalization | Normalizing these terms individually gives the model better control over the relative contribution of the terms using the \gamma_h and \gamma_x parameters. We set \beta_h = \beta_x = 0 to avoid unnecessary redundancy, instead relying on the pre-existing parameter vector b to account for both biases. In order to leave the LSTM dynamics intact and preserve the gradient flow through c_t, we do not apply batch normalization in the cell update. The batch normalization transform relies on batch statistics to standardize the LSTM activations. It would seem natural to share the statistics that are used for normalization across time, just as recurrent neural networks share their parameters over time. | 1603.09025#5 | 1603.09025#7 | 1603.09025 | [
"1609.01704"
] |
1603.09025#7 | Recurrent Batch Normalization | However, we find that simply averaging statistics over time severely degrades performance. Although LSTM activations do converge to a stationary distribution, we observe that their statistics during the initial transient differ significantly (see Figure 5 in Appendix A). Consequently, we recommend using separate statistics for each timestep to preserve information of the initial transient phase in the activations (note that we separate only the statistics over time and not the \gamma and \beta parameters). Generalizing the model to sequences longer than those seen during training is straightforward thanks to the rapid convergence of the activations to their steady-state distributions (cf. Figure 5). For our experiments we estimate the population statistics separately for each timestep 1, . . . , T_max, where | 1603.09025#6 | 1603.09025#8 | 1603.09025 | [
"1609.01704"
] |
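A minimal sketch of the per-timestep statistics bookkeeping described above. The exponential moving average is an illustrative assumption; the paper instead averages minibatch estimates over the training set to obtain population statistics at test time.

```python
import numpy as np

class PerTimestepStats:
    """Keep separate normalization statistics for every timestep."""

    def __init__(self, t_max, dim, momentum=0.1):
        self.mean = np.zeros((t_max, dim))
        self.var = np.ones((t_max, dim))
        self.momentum = momentum

    def update(self, t, batch):
        # batch: (batch_size, dim) pre-activations observed at timestep t
        m, v = batch.mean(axis=0), batch.var(axis=0)
        self.mean[t] = (1 - self.momentum) * self.mean[t] + self.momentum * m
        self.var[t] = (1 - self.momentum) * self.var[t] + self.momentum * v
```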
1603.09025#8 | Recurrent Batch Normalization | T_max is the length of the longest training sequence. When at test time we need to generalize beyond T_max, we use the population statistic of time T_max for all time steps beyond it. During training we estimate the statistics across the minibatch, independently for each timestep. At test time we use estimates obtained by averaging the minibatch estimates over the training set. # 4 INITIALIZING γ FOR GRADIENT FLOW Although batch normalization allows for easy control of the pre-activation variance through the γ parameters, common practice is to normalize to unit variance. We suspect that the previous difficulties with recurrent batch normalization reported in Laurent et al. (2016); Amodei et al. (2015) are largely due to improper initialization of the batch normalization parameters, and γ in particular. In this section we demonstrate the impact of γ on gradient fl | 1603.09025#7 | 1603.09025#9 | 1603.09025 | [
"1609.01704"
] |
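At inference time, the per-timestep population statistics can be applied with the clamping rule described above (reuse the statistics of T_max for all later timesteps). A sketch, assuming pop_mean and pop_var are arrays indexed by a 0-based timestep:

```python
import numpy as np

def bn_inference(h, t, pop_mean, pop_var, gamma, beta, eps=1e-5):
    """Normalize h at (0-indexed) timestep t with per-timestep population
    statistics; beyond the last training timestep, reuse the last entry."""
    k = min(t, len(pop_mean) - 1)
    return beta + gamma * (h - pop_mean[k]) / np.sqrt(pop_var[k] + eps)
```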
1603.09025#9 | Recurrent Batch Normalization | ow. [Figure 1 plots omitted. Panel titles: "RNN gradient propagation" and "derivative through tanh"; the left panel shows gradient norm over time for γ ranging from 0.10 to 1.00, the right panel shows the derivative of tanh versus input standard deviation.] (a) We visualize the gradient flow through a batch-normalized tanh RNN as a function of γ. High variance causes vanishing gradient. (b) We show the empirical expected derivative and interquartile range of the tanh nonlinearity as a function of input variance. High variance causes saturation, which decreases the expected derivative. | 1603.09025#8 | 1603.09025#10 | 1603.09025 | [
"1609.01704"
] |
1603.09025#10 | Recurrent Batch Normalization | Figure 1: Influence of pre-activation variance on gradient propagation. In Figure 1(a), we show how the pre-activation variance impacts gradient propagation in a simple RNN on the sequential MNIST task described in Section 5.1. Since backpropagation operates in reverse, the plot is best read from right to left. The quantity plotted is the norm of the gradient of the loss with respect to the hidden state at different time steps. For large values of γ, the norm quickly goes to zero as gradient is propagated back in time. For small values of γ the norm is nearly constant. To demonstrate what we think is the cause of this vanishing, we drew samples x from a set of centered Gaussian distributions with standard deviation ranging from 0 to 1, and computed the derivative \tanh'(x) = 1 - \tanh^2(x) \in [0, 1] for each. | 1603.09025#9 | 1603.09025#11 | 1603.09025 | [
"1609.01704"
] |
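The tanh-derivative experiment behind Figure 1(b) is easy to reproduce. A short sketch; the sample size and the grid of standard deviations below are illustrative choices, not the paper's exact settings:

```python
import numpy as np

# Expected derivative of tanh shrinks as the standard deviation of its
# zero-mean Gaussian input grows, pushing inputs into the saturation regime.
rng = np.random.default_rng(0)
for std in [0.1, 0.25, 0.5, 0.75, 1.0]:
    x = rng.normal(0.0, std, size=100_000)
    d = 1.0 - np.tanh(x) ** 2          # tanh'(x), always in [0, 1]
    print(f"std={std:.2f}  E[tanh'(x)] ~ {d.mean():.3f}")
```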
1603.09025#11 | Recurrent Batch Normalization | Figure 1(b) shows the empirical distribution of the derivative as a function of standard deviation. When the input standard deviation is low, the input tends to be close to the origin where the derivative is close to 1. As the standard deviation increases, the expected derivative decreases as the input is more likely to be in the saturation regime. At unit standard deviation, the expected derivative is much smaller than 1. We conjecture that this is what causes the gradient to vanish, and recommend initializing γ to a small value. In our trials we found that values of 0.01 or lower caused instabilities during training. Our choice of 0.1 seems to work well across different tasks. # 5 EXPERIMENTS This section presents an empirical evaluation of the proposed batch-normalized LSTM on four different tasks. Note that for all the experiments, we initialize the batch normalization scale and shift parameters γ and β to 0.1 and 0 respectively. | 1603.09025#10 | 1603.09025#12 | 1603.09025 | [
"1609.01704"
] |
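A sketch of the recommended initialization (gamma = 0.1, beta = 0); the helper name and the single-vector layout are assumptions for illustration:

```python
import numpy as np

def init_bn_params(dim, gamma_init=0.1):
    """Small gamma keeps pre-activations away from tanh saturation at the
    start of training; beta starts at zero."""
    return np.full(dim, gamma_init), np.zeros(dim)
```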
1603.09025#12 | Recurrent Batch Normalization | [Figure 2 plots omitted: validation accuracy versus training iteration for LSTM and BN-LSTM on pixel-by-pixel MNIST (left) and pixel-by-pixel permuted MNIST (right).] Figure 2: Accuracy on the validation set for the pixel-by-pixel MNIST classification tasks. The batch-normalized LSTM converges faster than a baseline LSTM. The batch-normalized LSTM also shows improved generalization on permuted sequential MNIST, which requires preserving long-term memory information. 5.1 SEQUENTIAL MNIST We evaluate our batch-normalized LSTM on a sequential version of the MNIST classification task (Le et al., 2015). The model processes each image one pixel at a time and finally predicts the label. We consider both sequential MNIST tasks, MNIST and permuted MNIST (pMNIST). In MNIST, the pixels are processed in scanline order. In pMNIST the pixels are processed in a fixed random order. Our baseline consists of an LSTM with 100 hidden units, with a softmax classifier to produce a prediction from the final hidden state. We use orthogonal initialization for all weight matrices, except for the hidden-to-hidden weight matrix which we initialize to be the identity matrix, as this yields better generalization performance on this task for both models. The model is trained using RMSProp (Tieleman & Hinton, 2012) with a learning rate of 10^-3 and 0.9 momentum. We apply gradient clipping at 1 to avoid exploding gradients. The in-order MNIST task poses a unique problem for our model: the input for the first hundred or so timesteps is constant across examples since the upper pixels are almost always black. This causes the variance of the hidden states to be exactly zero for a long period of time. Normalizing these zero-variance activations involves dividing zero by a small number at many timesteps, which does not affect the forward-propagated activations but causes the back-propagated gradient to explode. We work around this by adding Gaussian noise to the initial hidden states. | 1603.09025#11 | 1603.09025#13 | 1603.09025 | [
"1609.01704"
] |
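A sketch of the workaround described above: tile the learned initial states across the batch and perturb them with Gaussian noise so the early hidden states are not exactly constant across examples. The helper name and noise scale are illustrative assumptions, not values from the paper.

```python
import numpy as np

def noisy_initial_states(h0, c0, batch_size, noise_std=1e-2, rng=None):
    """Broadcast the learned initial states h0, c0 (shape (d_h,)) over the
    batch and add small Gaussian noise to break the zero-variance problem."""
    if rng is None:
        rng = np.random.default_rng(0)
    h = np.tile(h0, (batch_size, 1)) + rng.normal(0.0, noise_std, (batch_size, h0.shape[-1]))
    c = np.tile(c0, (batch_size, 1)) + rng.normal(0.0, noise_std, (batch_size, c0.shape[-1]))
    return h, c
```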
1603.09025#13 | Recurrent Batch Normalization | Although the normalization amplifies the noise to signal level, we find that it does not hurt performance compared to data-dependent ways of initializing the hidden states. Table 1: Accuracy obtained on the test set for the pixel-by-pixel MNIST classification tasks (MNIST / pMNIST): TANH-RNN (Le et al., 2015) 35.0 / 35.0; iRNN (Le et al., 2015) 97.0 / 82.0; uRNN (Arjovsky et al., 2015) 95.1 / 91.4; sTANH-RNN (Zhang et al., 2016) 98.1 / 94.0; LSTM (ours) 98.9 / 90.2; BN-LSTM (ours) 99.0 / 95.4. In Figure 2 we show the validation accuracy while training for both LSTM and batch-normalized LSTM (BN-LSTM). BN-LSTM converges faster than LSTM on both tasks. Additionally, we observe that BN-LSTM generalizes significantly better on pMNIST. It has been highlighted in Arjovsky et al. (2015) that pMNIST contains many longer-term dependencies across pixels than in the original pixel ordering, where a lot of structure is local. A recurrent network therefore needs to | 1603.09025#12 | 1603.09025#14 | 1603.09025 | [
"1609.01704"
] |
1603.09025#14 | Recurrent Batch Normalization | Table 2: Bits-per-character on the Penn Treebank test sequence: LSTM (Graves, 2013) 1.262; HF-MRNN (Mikolov et al., 2012) 1.41; Norm-stabilized LSTM (Krueger & Memisevic, 2016) 1.39; ME n-gram (Mikolov et al., 2012) 1.37; LSTM (ours) 1.38; BN-LSTM (ours) 1.32; Zoneout (Krueger et al., 2016) 1.27; HM-LSTM (Chung et al., 2016) 1.24; HyperNetworks (Ha et al., 2016) 1.22. characterize dependencies across varying time scales in order to solve this task. Our results suggest that BN-LSTM is better able to capture these long-term dependencies. Table 1 reports the test set accuracy of the early-stopped model for LSTM and BN-LSTM using the population statistics. Recurrent batch normalization leads to a better test score, especially for pMNIST, where models have to leverage long-term temporal dependencies. In addition, Table 1 shows that our batch-normalized LSTM achieves state of the art on both MNIST and pMNIST. | 1603.09025#13 | 1603.09025#15 | 1603.09025 | [
"1609.01704"
] |
1603.09025#15 | Recurrent Batch Normalization | 5.2 CHARACTER-LEVEL PENN TREEBANK We evaluate our model on the task of character-level language modeling on the Penn Treebank corpus (Marcus et al., 1993) according to the train/valid/test partition of Mikolov et al. (2012). For training, we segment the training sequence into examples of length 100. The training sequence does not cleanly divide by 100, so for each epoch we randomly crop a subsequence that does and segment that instead. Our baseline is an LSTM with 1000 units, trained to predict the next character using a softmax classifi | 1603.09025#14 | 1603.09025#16 | 1603.09025 | [
"1609.01704"
] |
1603.09025#16 | Recurrent Batch Normalization | er on the hidden state h_t. We use stochastic gradient descent on minibatches of size 64, with gradient clipping at 1.0 and step rule determined by Adam (Kingma & Ba, 2014) with learning rate 0.002. We use orthogonal initialization for all weight matrices. The setup for the batch-normalized LSTM is the same in all respects except for the introduction of batch normalization as detailed in Section 3. We show the learning curves in Figure 3(a). BN-LSTM converges faster and generalizes better than the LSTM baseline. Figure 3(b) shows the generalization of our model to longer sequences. We observe that using the population statistics improves generalization performance, which confirms that repeating the last population statistic (cf. Section 3) is a viable strategy. In Table 2 we report the performance of our best models (early-stopped on validation performance) on the Penn Treebank test sequence. Follow-up works have since improved the state of the art (Krueger et al., 2016; Chung et al., 2016; Ha et al., 2016). # 5.3 TEXT8 We evaluate our model on a second character-level language modeling task on the much larger text8 dataset (Mahoney, 2009). This dataset is derived from Wikipedia and consists of a sequence of 100M characters including only alphabetical characters and spaces. We follow Mikolov et al. (2012); Zhang et al. (2016) and use the first 90M characters for training, the next 5M for validation and the final 5M characters for testing. We train on nonoverlapping sequences of length 180. Both our baseline and batch-normalized models are LSTMs with 2000 units, trained to predict the next character using a softmax classifier on the hidden state h_t. We use stochastic gradient descent on minibatches of size 128, with gradient clipping at 1.0 and step rule determined by Adam (Kingma & Ba, 2014) with learning rate 0.001. All weight matrices were initialized to be orthogonal. | 1603.09025#15 | 1603.09025#17 | 1603.09025 | [
"1609.01704"
] |
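A sketch of the segmentation scheme described for Penn Treebank: each epoch, randomly crop the training sequence so it divides evenly into length-100 examples, then split it into non-overlapping segments. The function name and interface are illustrative assumptions.

```python
import numpy as np

def epoch_segments(data, seq_len=100, rng=None):
    """Split a 1-D array of character ids into non-overlapping examples of
    length seq_len, after a random crop so the length divides evenly."""
    if rng is None:
        rng = np.random.default_rng(0)
    n_examples = len(data) // seq_len
    max_offset = len(data) - n_examples * seq_len
    offset = int(rng.integers(0, max_offset + 1))
    cropped = data[offset:offset + n_examples * seq_len]
    return cropped.reshape(n_examples, seq_len)
```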
1603.09025#17 | Recurrent Batch Normalization | We early-stop on validation performance and report the test performance of the resulting model in Table 3. We observe that BN-LSTM obtains a significant performance improvement over the LSTM baseline. Chung et al. (2016) has since improved on our performance. Table 3: Bits-per-character on the text8 test sequence: td-LSTM (Zhang et al., 2016) 1.63; HF-MRNN (Mikolov et al., 2012) 1.54; skipping RNN (Pachitariu & Sahani, 2013) 1.48; LSTM (ours) 1.43; BN-LSTM (ours) 1.36; HM-LSTM (Chung et al., 2016) 1.29. 5.4 TEACHING MACHINES TO READ AND COMPREHEND Recently, Hermann et al. (2015) introduced a set of challenging benchmarks for natural language processing, along with neural network architectures to address them. The tasks involve reading real news articles and answering questions about their content. Their principal model, the Attentive Reader, is a recurrent neural network that invokes an attention mechanism to locate relevant information in the document. Such models are notoriously hard to optimize and yet increasingly popular. To demonstrate the generality and practical applicability of our proposal, we apply batch normalization in the Attentive Reader model and show that this drastically improves training. We evaluate several variants. | 1603.09025#16 | 1603.09025#18 | 1603.09025 | [
"1609.01704"
] |
1603.09025#18 | Recurrent Batch Normalization | The first variant, referred to as BN-LSTM, consists of the vanilla Attentive Reader model with the LSTM simply replaced by our BN-LSTM reparameterization. The second variant, termed BN-everywhere, is exactly like the first, except that we also introduce batch normalization into the attention computations, normalizing each term going into the tanh nonlinearities. Our third variant, BN-e*, is like BN-everywhere, but improved to more carefully handle variable-length sequences. Throughout this experiment we followed the common practice of padding each batch of variable-length data with zeros. However, this biases the batch mean and variance of x_t toward zero. We address this effect using sequencewise normalization of the inputs as proposed by Laurent et al. (2016); Amodei et al. (2015). That is, we share statistics over time for normalization | 1603.09025#17 | 1603.09025#19 | 1603.09025 | [
"1609.01704"
] |
1603.09025#19 | Recurrent Batch Normalization | [Figure 3 plots omitted: bits-per-character for LSTM and BN-LSTM (with population and with batch statistics) versus training steps and versus sequence length.] (a) Performance in bits-per-character on length-100 subsequences of the Penn Treebank validation sequence during training. (b) Generalization to longer subsequences of Penn Treebank using population statistics. The subsequences are taken from the test sequence. Figure 3: Penn Treebank evaluation. [Figure 4 plots omitted: error rate versus training steps (thousands) for the LSTM baseline and the batch-normalized variants.] (a) Error rate on the validation set for the Attentive Reader models on a variant of the CNN QA task (Hermann et al., 2015). As detailed in Appendix C, the theoretical lower bound on the error rate on this task is 43%. (b) Error rate on the validation set on the full CNN QA task from Hermann et al. (2015). Figure 4: Training curves on the CNN question-answering tasks. of the input terms W_x x_t, but not for the recurrent terms W_h h_t or the cell output c_t. Doing so avoids many issues involving degenerate statistics due to input sequence padding. | 1603.09025#18 | 1603.09025#20 | 1603.09025 | [
"1609.01704"
] |
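A sketch of the sequencewise normalization of the input terms described above: statistics are shared over time and computed only over unpadded positions, so zero padding does not drag them toward zero. The mask convention and shapes are assumptions for illustration.

```python
import numpy as np

def sequencewise_stats(wx, mask, eps=1e-5):
    """Sequence-wise statistics for the input terms W_x x_t.

    wx:   (T, batch, 4*d_h) input pre-activations
    mask: (T, batch), 1.0 for real tokens and 0.0 for padding
    Returns mean and variance shared over all unpadded timesteps.
    """
    m = mask[..., None]                           # broadcast over features
    count = m.sum()
    mean = (wx * m).sum(axis=(0, 1)) / count
    var = (((wx - mean) ** 2) * m).sum(axis=(0, 1)) / count
    return mean, var + eps
```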
1603.09025#20 | Recurrent Batch Normalization | Our fourth and final variant BN-e** is like BN-e* but bidirectional. The main difficulty in adapting to bidirectional models also involves padding. Padding poses no problem as long as it is properly ignored (by not updating the hidden states based on padded regions of the input). However, to perform the reverse application of a bidirectional model, it is common to simply reverse the padded sequences, thus moving the padding to the front. This causes similar problems as were observed on the sequential MNIST task (Section 5.1): the hidden states will not diverge during the initial timesteps and hence their variance will be severely underestimated. To get around this, we reverse only the unpadded portion of the input sequences and leave the padding in place. See Appendix C for hyperparameters and task details. Figure 4(a) shows the learning curves for the different variants of the attentive reader. BN-LSTM trains dramatically faster than the LSTM baseline. BN-everywhere in turn shows a significant improvement over BN-LSTM. In addition, both BN-LSTM and BN-everywhere show a generalization benefit over the baseline. The validation curves have minima of 50.3%, 49.5% and 50.0% for the baseline, BN-LSTM and BN-everywhere respectively. We emphasize that these results were obtained without any tweaking; all we did was to introduce batch normalization. BN-e* and BN-e** converge faster yet, and reach lower minima: 47.1% and 43.9% respectively. Table 4: Error rates on the CNN question-answering task (Hermann et al., 2015), valid / test: Attentive Reader (Hermann et al., 2015) 38.4 / 37.0; LSTM (ours) 45.5 / 45.0; BN-e** (ours) 37.9 / 36.3. We train and evaluate our best model, BN-e**, on the full task from Hermann et al. (2015). On this dataset we had to reduce the number of hidden units to 120 to avoid severe overfitting. Training curves for BN-e** and a vanilla LSTM are shown in Figure 4(b). Table 4 reports performances of the early-stopped models. | 1603.09025#19 | 1603.09025#21 | 1603.09025 | [
"1609.01704"
] |
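A sketch of the reversal trick for the bidirectional variant: reverse only each sequence's unpadded prefix and leave the padding at the end. The interface (a lengths array alongside the padded batch) is an assumption for illustration.

```python
import numpy as np

def reverse_unpadded(x, lengths):
    """Reverse each example's unpadded prefix, keeping padding at the end.

    x: (batch, T, ...) zero-padded batch; lengths: per-example true lengths.
    """
    out = x.copy()
    for b, L in enumerate(lengths):
        out[b, :L] = x[b, :L][::-1]
    return out
```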
1603.09025#21 | Recurrent Batch Normalization | # 6 CONCLUSION Contrary to previous findings by Laurent et al. (2016); Amodei et al. (2015), we have demonstrated that batch-normalizing the hidden states of recurrent neural networks greatly improves optimization. Indeed, doing so yields benefits similar to those of batch normalization in feed-forward neural networks: our proposed BN-LSTM trains faster and generalizes better on a variety of tasks including language modeling and question-answering. We have argued that proper initialization of the batch normalization parameters is crucial, and suggest that previous difficulties (Laurent et al., 2016; Amodei et al., 2015) were due in large part to improper initialization. Finally, we have shown our model to apply to complex settings involving variable-length data, bidirectionality and highly nonlinear attention mechanisms. # ACKNOWLEDGEMENTS The authors would like to acknowledge the following agencies for research funding and computing support: the Nuance Foundation, Samsung, NSERC, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR. Experiments were carried out using the Theano (Team et al., 2016) and the Blocks and Fuel (van Merriënboer et al., 2015) libraries for scientific computing. We thank David Krueger, Saizheng Zhang, Ishmael Belghazi and Yoshua Bengio for discussions and suggestions. # REFERENCES D. Amodei et al. Deep speech 2: End-to-end speech recognition in English and Mandarin. arXiv:1512.02595, 2015. M. Arjovsky, A. Shah, and Y. Bengio. Unitary evolution recurrent neural networks. arXiv:1511.06464, 2015. | 1603.09025#20 | 1603.09025#22 | 1603.09025 | [
"1609.01704"
] |
1603.09025#22 | Recurrent Batch Normalization | Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv:1607.06450, 2016. D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015. Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 1994. K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv:1406.1078, 2014. Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. arXiv:1609.01704, 2016. A. Graves. | 1603.09025#21 | 1603.09025#23 | 1603.09025 | [
"1609.01704"
] |
1603.09025#23 | Recurrent Batch Normalization | Generating sequences with recurrent neural networks. arXiv:1308.0850, 2013. David Ha, Andrew Dai, and Quoc V Le. Hypernetworks. arXiv:1609.09106, 2016. K. M. Hermann, T. Kocisky, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom. Teaching machines to read and comprehend. In NIPS, 2015. | 1603.09025#22 | 1603.09025#24 | 1603.09025 | [
"1609.01704"
] |
1603.09025#24 | Recurrent Batch Normalization | S. Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Master's thesis, 1991. S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 1997. Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv:1502.03167, 2015. D. Kingma and J. Ba. Adam: | 1603.09025#23 | 1603.09025#25 | 1603.09025 | [
"1609.01704"
] |