doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1604.00289 | 170 |
Deacon, T. W. (1998). The symbolic species: The co-evolution of language and the brain. WW Norton & Company.
Deci, E. L., & Ryan, R. M. (1975). Intrinsic motivation. Wiley Online Library.
de Jonge, M., & Racine, R. J. (1985). The effects of repeated induction of long-term potentiation in the dentate gyrus. Brain Research, 328, 181–185.
Denton, E., Chintala, S., Szlam, A., & Fergus, R. (2015). Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks. In Advances in Neural Information Processing Systems 29. Retrieved from http://arxiv.org/abs/1506.05751
Diuk, C., Cohen, A., & Littman, M. L. (2008). An object-oriented representation for efficient reinforcement learning. In Proceedings of the 25th International Conference on Machine Learning (ICML) (pp. 240–247). | 1604.00289#170 | Building Machines That Learn and Think Like People | Recent progress in artificial intelligence (AI) has renewed interest in
building systems that learn and think like people. Many advances have come from
using deep neural networks trained end-to-end in tasks such as object
recognition, video games, and board games, achieving performance that equals or
even beats humans in some respects. Despite their biological inspiration and
performance achievements, these systems differ from human intelligence in
crucial ways. We review progress in cognitive science suggesting that truly
human-like learning and thinking machines will have to reach beyond current
engineering trends in both what they learn, and how they learn it.
Specifically, we argue that these machines should (a) build causal models of
the world that support explanation and understanding, rather than merely
solving pattern recognition problems; (b) ground learning in intuitive theories
of physics and psychology, to support and enrich the knowledge that is learned;
and (c) harness compositionality and learning-to-learn to rapidly acquire and
generalize knowledge to new tasks and situations. We suggest concrete
challenges and promising routes towards these goals that can combine the
strengths of recent neural network advances with more structured cognitive
models. | http://arxiv.org/pdf/1604.00289 | Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman | cs.AI, cs.CV, cs.LG, cs.NE, stat.ML | In press at Behavioral and Brain Sciences. Open call for commentary
proposals (until Nov. 22, 2016).
https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentary | null | cs.AI | 20160401 | 20161102 | [
{
"id": "1511.06114"
},
{
"id": "1510.05067"
},
{
"id": "1602.05179"
},
{
"id": "1603.08575"
}
] |
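Each row below pairs one `chunk` of the paper's text with the same paper-level metadata (the `summary`, `source`, `authors`, and `references` fields repeat the values shown in the row above). As a minimal sketch of how such rows might be regrouped for reading, assuming each row is available as a Python dict keyed by the header fields; the `reassemble` helper and the inline example values are illustrative, not taken from the dataset file:

```python
from typing import Dict, List

def reassemble(rows: List[Dict], doi: str) -> str:
    """Concatenate one paper's chunks in chunk-id order."""
    mine = [r for r in rows if r["doi"] == doi]  # keep rows for one paper
    mine.sort(key=lambda r: r["chunk-id"])       # restore reading order
    return "\n".join(r["chunk"] for r in mine)

# Illustrative rows only; real rows carry all fifteen schema fields.
rows = [
    {"doi": "1604.00289", "chunk-id": 171, "chunk": "Dolan, R. J., & Dayan, P. (2013). ..."},
    {"doi": "1604.00289", "chunk-id": 170, "chunk": "Deacon, T. W. (1998). ..."},
]

print(reassemble(rows, "1604.00289")[:40])  # chunk 170 text comes first
```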
1604.00289 | 171 |
Dolan, R. J., & Dayan, P. (2013). Goals and habits in the brain. Neuron, 80, 312–325.
Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., Tzeng, E., & Darrell, T. (2013). Decaf: A deep convolutional activation feature for generic visual recognition. arXiv preprint arXiv:1310.1531.
Economides, M., Kurth-Nelson, Z., Lübbert, A., Guitart-Masip, M., & Dolan, R. J. (2015). Model-based reasoning in humans becomes automatic with training. PLoS Computational Biology, 11, e1004463.
Edelman, S. (2015). The minority report: some common assumptions to reconsider in the modelling of the brain and behaviour. Journal of Experimental & Theoretical Artificial Intelligence, 28(4), 751–776.
Eden, M. (1962). Handwriting and Pattern Recognition. IRE Transactions on Information Theory, 160–166. | 1604.00289#171 |
1604.00289 | 172 |
Eliasmith, C., Stewart, T. C., Choo, X., Bekolay, T., DeWolf, T., Tang, Y., & Rasmussen, D. (2012). A large-scale model of the functioning brain. Science, 338(6111), 1202–1205.
Elman, J. L. (2005). Connectionist models of cognitive development: Where next? Trends in Cognitive Sciences, 9(3), 111–117.
Elman, J. L., Bates, E. A., Johnson, M. H., Karmiloff-Smith, A., Parisi, D., & Plunkett, K. (1996). Rethinking innateness. Cambridge, MA: MIT Press.
Eslami, S. M. A., Heess, N., Weber, T., Tassa, Y., Kavukcuoglu, K., & Hinton, G. E. (2016). Attend, infer, repeat: Fast scene understanding with generative models. arXiv preprint arXiv:1603.08575. | 1604.00289#172 |
1604.00289 | 173 |
Eslami, S. M. A., Tarlow, D., Kohli, P., & Winn, J. (2014). Just-in-time learning for fast and flexible inference. In Advances in Neural Information Processing Systems (pp. 154–162).
Fodor, J. A. (1975). The Language of Thought. Harvard University Press.
Fodor, J. A., & Pylyshyn, Z. W. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28, 3–71.
Frank, M. C., Goodman, N. D., & Tenenbaum, J. B. (2009). Using speakers' referential intentions to model early cross-situational word learning. Psychological Science, 20, 578–585.
Freyd, J. (1983). Representing the dynamics of a static form. Memory and Cognition, 11(4), 342–346. | 1604.00289#173 |
1604.00289 | 174 |
Freyd, J. (1987). Dynamic Mental Representations. Psychological Review, 94(4), 427–438.
Fukushima, K. (1980). Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36, 193–202.
Gallistel, C., & Matzel, L. D. (2013). The neuroscience of learning: beyond the Hebbian synapse. Annual Review of Psychology, 64, 169–200.
Gelly, S., & Silver, D. (2008). Achieving master level play in 9 x 9 computer go.
Gelly, S., & Silver, D. (2011). Monte-Carlo tree search and rapid action value estimation in computer go. Artificial Intelligence, 175(11), 1856–1875.
Gelman, A., Carlin, J. B., Stern, H. S., & Rubin, D. B. (2004). Bayesian Data Analysis. Chapman and Hall/CRC. | 1604.00289#174 |
1604.00289 | 175 |
Gelman, A., Lee, D., & Guo, J. (2015). Stan: A probabilistic programming language for Bayesian inference and optimization. Journal of Educational and Behavioral Statistics, 40, 530–543.
Geman, S., Bienenstock, E., & Doursat, R. (1992). Neural networks and the bias/variance dilemma. Neural Computation, 4, 1–58.
Gershman, S. J., & Goodman, N. D. (2014). Amortized inference in probabilistic reasoning. In Proceedings of the 36th Annual Conference of the Cognitive Science Society.
Gershman, S. J., Horvitz, E. J., & Tenenbaum, J. B. (2015). Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science, 349, 273–278.
Gershman, S. J., Markman, A. B., & Otto, A. R. (2014). Retrospective revaluation in sequential decision making: A tale of two systems. Journal of Experimental Psychology: General, 143, 182–194.
Gershman, S. J., Vul, E., & Tenenbaum, J. B. (2012). Multistability and perceptual inference. Neural Computation, 24, 1–24. | 1604.00289#175 |
1604.00289 | 176 |
Gerstenberg, T., Goodman, N. D., Lagnado, D. A., & Tenenbaum, J. B. (2015). How, whether, why: Causal judgments as counterfactual contrasts. In Proceedings of the 37th Annual Conference of the Cognitive Science Society.
Ghahramani, Z. (2015). Probabilistic machine learning and artificial intelligence. Nature, 521, 452–459.
Goodman, N. D., Mansinghka, V. K., Roy, D. M., Bonawitz, K., & Tenenbaum, J. B. (2008). Church: A language for generative models. In Uncertainty in Artificial Intelligence.
Gopnik, A., Glymour, C., Sobel, D. M., Schulz, L. E., Kushnir, T., & Danks, D. (2004). A theory of causal learning in children: Causal maps and Bayes nets. Psychological Review, 111(1), 3–32.
Gopnik, A., & Meltzoff, A. N. (1999). Words, Thoughts, and Theories. Mind: A Quarterly Review of Philosophy, 108. | 1604.00289#176 |
1604.00289 | 177 |
Graves, A. (2014). Generating sequences with recurrent neural networks. arXiv preprint. Retrieved from http://arxiv.org/abs/1308.0850
Graves, A., Mohamed, A.-r., & Hinton, G. (2013). Speech recognition with deep recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on (pp. 6645–6649).
Graves, A., Wayne, G., & Danihelka, I. (2014). Neural Turing Machines. arXiv preprint. Retrieved from http://arxiv.org/abs/1410.5401v1
Graves, A., Wayne, G., Reynolds, M., Harley, T., Danihelka, I., Grabska-Barwińska, A., . . . Hassabis, D. (2016). Hybrid computing using a neural network with dynamic external memory. Nature. | 1604.00289#177 |
1604.00289 | 178 |
Grefenstette, E., Hermann, K. M., Suleyman, M., & Blunsom, P. (2015). Learning to Transduce with Unbounded Memory. In Advances in Neural Information Processing Systems.
Gregor, K., Besse, F., Rezende, D. J., Danihelka, I., & Wierstra, D. (2016). Towards Conceptual Compression. arXiv preprint. Retrieved from http://arxiv.org/abs/1604.08772
Gregor, K., Danihelka, I., Graves, A., Rezende, D. J., & Wierstra, D. (2015). DRAW: A Recurrent Neural Network For Image Generation. In International Conference on Machine Learning (ICML).
Griffiths, T. L., Chater, N., Kemp, C., Perfors, A., & Tenenbaum, J. B. (2010). Probabilistic models of cognition: exploring representations and inductive biases. Trends in Cognitive Sciences, 14(8), 357–64.
Griffiths, T. L., Vul, E., & Sanborn, A. N. (2012). Bridging levels of analysis for probabilistic models of cognition. Current Directions in Psychological Science, 21, 263–268. | 1604.00289#178 |
1604.00289 | 179 |
Grossberg, S. (1976). Adaptive pattern classification and universal recoding: I. Parallel development and coding of neural feature detectors. Biological Cybernetics, 23, 121–134.
Grosse, R., Salakhutdinov, R., Freeman, W. T., & Tenenbaum, J. B. (2012). Exploiting compositionality to explore a large space of model structures. In Uncertainty in Artificial Intelligence.
Guo, X., Singh, S., Lee, H., Lewis, R. L., & Wang, X. (2014). Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. In Advances in Neural Information Processing Systems (pp. 3338–3346).
Gweon, H., Tenenbaum, J. B., & Schulz, L. E. (2010). Infants consider both the sample and the sampling process in inductive generalization. Proceedings of the National Academy of Sciences, 107, 9066–9071. doi: 10.1073/pnas.1003095107
Halle, M., & Stevens, K. (1962). Speech Recognition: A Model and a Program for Research. IRE Transactions on Information Theory, 8(2), 155–159. | 1604.00289#179 |
1604.00289 | 180 |
Hamlin, K. J. (2013). Moral Judgment and Action in Preverbal Infants and Toddlers: Evidence for an Innate Moral Core. Current Directions in Psychological Science, 22, 186–193. doi: 10.1177/0963721412470687
Hamlin, K. J., Ullman, T., Tenenbaum, J., Goodman, N. D., & Baker, C. (2013). The mentalistic basis of core social cognition: Experiments in preverbal infants and a computational model. Developmental Science, 16, 209–226. doi: 10.1111/desc.12017
Hamlin, K. J., Wynn, K., & Bloom, P. (2007). Social evaluation by preverbal infants. Nature, 450, 557–560.
Hamlin, K. J., Wynn, K., & Bloom, P. (2010). Three-month-olds show a negativity bias in their social evaluations. Developmental Science, 13, 923–929. doi: 10.1111/j.1467-7687.2010.00951.x | 1604.00289#180 |
1604.00289 | 181 |
Harlow, H. F. (1949). The formation of learning sets. Psychological Review, 56(1), 51–65.
Harlow, H. F. (1950). Learning and satiation of response in intrinsically motivated complex puzzle performance by monkeys. Journal of Comparative and Physiological Psychology, 43, 289–294.
Hauser, M. D., Chomsky, N., & Fitch, W. T. (2002). The faculty of language: what is it, who has it, and how did it evolve? Science, 298, 1569–1579.
Hayes-Roth, B., & Hayes-Roth, F. (1979). A cognitive model of planning. Cognitive Science, 3, 275–310.
He, K., Zhang, X., Ren, S., & Sun, J. (2015). Deep Residual Learning for Image Recognition. arXiv preprint. Retrieved from http://arxiv.org/abs/1512.03385
Hebb, D. O. (1949). The organization of behavior. Wiley.
Heess, N., Tarlow, D., & Winn, J. (2013). Learning to pass expectation propagation messages. In Advances in Neural Information Processing Systems (pp. 3219–3227). | 1604.00289#181 |
1604.00289 | 182 |
Hespos, S. J., & Baillargeon, R. (2008). Young infants' actions reveal their developing knowledge of support variables: Converging evidence for violation-of-expectation findings. Cognition, 107, 304–316.
Hespos, S. J., Ferry, A. L., & Rips, L. J. (2009). Five-month-old infants have different expectations for solids and liquids. Psychological Science, 20(5), 603–611.
Hinton, G. E. (2002). Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8), 1771–1800.
Hinton, G. E., Dayan, P., Frey, B. J., & Neal, R. M. (1995). The "wake-sleep" algorithm for unsupervised neural networks. Science, 268(5214), 1158–61.
Hinton, G. E., Deng, L., Yu, D., Dahl, G. E., Mohamed, A.-r., Jaitly, N., . . . Kingsbury, B. (2012). Deep neural networks for acoustic modeling in speech recognition. IEEE Signal Processing Magazine, 29, 82–97. | 1604.00289#182 |
1604.00289 | 183 |
Hinton, G. E., Osindero, S., & Teh, Y. W. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18, 1527–1554.
Hoffman, D. D., & Richards, W. A. (1984). Parts of recognition. Cognition, 18, 65–96.
Hofstadter, D. R. (1985). Metamagical themas: Questing for the essence of mind and pattern. New York: Basic Books.
Horst, J. S., & Samuelson, L. K. (2008). Fast Mapping but Poor Retention by 24-Month-Old Infants. Infancy, 13(2), 128–157.
Huang, Y., & Rao, R. P. (2014). Neurons as Monte Carlo samplers: Bayesian inference and learning in spiking networks. In Advances in Neural Information Processing Systems (pp. 1943–1951).
Hummel, J. E., & Biederman, I. (1992). Dynamic binding in a neural network for shape recognition. Psychological Review, 99(3), 480–517. | 1604.00289#183 |
1604.00289 | 184 |
Jackendoff, R. (2003). Foundations of Language. Oxford University Press.
Jara-Ettinger, J., Gweon, H., Tenenbaum, J. B., & Schulz, L. E. (2015). Children's understanding of the costs and rewards underlying rational action. Cognition, 140, 14–23.
Jern, A., & Kemp, C. (2013). A probabilistic account of exemplar and category generation. Cognitive Psychology, 66(1), 85–125.
Jern, A., & Kemp, C. (2015). A decision network account of reasoning about other people's choices. Cognition, 142, 12–38.
Johnson, S. C., Slaughter, V., & Carey, S. (1998). Whose gaze will infants follow? The elicitation of gaze-following in 12-month-olds. Developmental Science, 1, 233–238. doi: 10.1111/1467-7687.00036
Juang, B. H., & Rabiner, L. R. (1990). Hidden Markov models for speech recognition. Technometrics, 33(3), 251–272. | 1604.00289#184 |
1604.00289 | 185 |
Karpathy, A., & Fei-Fei, L. (2015). Deep Visual-Semantic Alignments for Generating Image Descriptions. In Computer Vision and Pattern Recognition (CVPR).
Kemp, C. (2007). The acquisition of inductive constraints. Unpublished doctoral dissertation, MIT.
Keramati, M., Dezfouli, A., & Piray, P. (2011). Speed/accuracy trade-off between the habitual and the goal-directed processes. PLoS Computational Biology, 7, e1002055.
Khaligh-Razavi, S.-M., & Kriegeskorte, N. (2014). Deep Supervised, but Not Unsupervised, Models May Explain IT Cortical Representation. PLoS Computational Biology, 10(11), e1003915.
Kilner, J. M., Friston, K. J., & Frith, C. D. (2007). Predictive coding: An account of the mirror neuron system. Cognitive Processing, 8(3), 159–166.
Kingma, D. P., Rezende, D. J., Mohamed, S., & Welling, M. (2014). Semi-supervised Learning with Deep Generative Models. In Neural Information Processing Systems (NIPS). | 1604.00289#185 |
1604.00289 | 186 |
Koch, G., Zemel, R. S., & Salakhutdinov, R. (2015). Siamese neural networks for one-shot image recognition. In ICML Deep Learning Workshop.
Kodratoff, Y., & Michalski, R. S. (2014). Machine learning: An artificial intelligence approach (Vol. 3). Morgan Kaufmann.
Koza, J. R. (1992). Genetic programming: on the programming of computers by means of natural selection (Vol. 1). MIT press.
Kriegeskorte, N. (2015). Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing. Annual Review of Vision Science, 1, 417–446.
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25 (pp. 1097–1105). | 1604.00289#186 |
1604.00289 | 187 |
Kulkarni, T. D., Kohli, P., Tenenbaum, J. B., & Mansinghka, V. (2015). Picture: A probabilistic programming language for scene perception. In Computer Vision and Pattern Recognition (CVPR).
Kulkarni, T. D., Narasimhan, K. R., Saeedi, A., & Tenenbaum, J. B. (2016). Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation. arXiv preprint.
Kulkarni, T. D., Whitney, W., Kohli, P., & Tenenbaum, J. B. (2015). Deep Convolutional Inverse Graphics Network. In Computer Vision and Pattern Recognition (CVPR).
Lake, B. M. (2014). Towards more human-like concept learning in machines: Compositionality, causality, and learning-to-learn. Unpublished doctoral dissertation, MIT. | 1604.00289#187 |
1604.00289 | 188 |
Lake, B. M., Lee, C.-y., Glass, J. R., & Tenenbaum, J. B. (2014). One-shot learning of generative speech concepts. In Proceedings of the 36th Annual Conference of the Cognitive Science Society (pp. 803–808).
Lake, B. M., Salakhutdinov, R., & Tenenbaum, J. B. (2012). Concept learning as motor program induction: A large-scale empirical study. In Proceedings of the 34th Annual Conference of the Cognitive Science Society.
Lake, B. M., Salakhutdinov, R., & Tenenbaum, J. B. (2015). Human-level concept learning through probabilistic program induction. Science, 350(6266), 1332–1338.
Lake, B. M., Zaremba, W., Fergus, R., & Gureckis, T. M. (2015). Deep Neural Networks Predict Category Typicality Ratings for Images. In Proceedings of the 37th Annual Conference of the Cognitive Science Society. | 1604.00289#188 |
1604.00289 | 189 |
Landau, B., Smith, L. B., & Jones, S. S. (1988). The importance of shape in early lexical learning. Cognitive Development, 3(3), 299–321.
Langley, P., Bradshaw, G., Simon, H. A., & Zytkow, J. M. (1987). Scientific discovery: Computational explorations of the creative processes. MIT press.
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521, 436–444.
LeCun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., & Jackel, L. D. (1989). Backpropagation applied to handwritten zip code recognition. Neural Computation, 1, 541–551.
LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278–2323.
Lerer, A., Gross, S., & Fergus, R. (2016). Learning Physical Intuition of Block Towers by Example. arXiv preprint. Retrieved from http://arxiv.org/abs/1603.01312 | 1604.00289#189 |
1604.00289 | 190 |
Levy, R., Reali, F., & Griffiths, T. L. (2009). Modeling the effects of memory on human online sentence processing with particle filters. In Advances in Neural Information Processing Systems (pp. 937–944).
Liao, Q., Leibo, J. Z., & Poggio, T. (2015). How important is weight symmetry in backpropagation? arXiv preprint arXiv:1510.05067.
Liberman, A. M., Cooper, F. S., Shankweiler, D. P., & Studdert-Kennedy, M. (1967). Perception of the speech code. Psychological Review, 74(6), 431–461.
Lillicrap, T. P., Cownden, D., Tweed, D. B., & Akerman, C. J. (2014). Random feedback weights support learning in deep neural networks. arXiv preprint arXiv:1411.0247.
Lloyd, J., Duvenaud, D., Grosse, R., Tenenbaum, J., & Ghahramani, Z. (2014). Automatic construction and natural-language description of nonparametric regression models. In Proceedings of the National Conference on Artificial Intelligence (Vol. 2, pp. 1242–1250). | 1604.00289#190 |
1604.00289 | 191 | Lombrozo, T. (2009). Explanation and categorization: How âwhy?â informs âwhat?â. Cognition, 110 (2), 248â53.
Lopez-Paz, D., Bottou, L., Schölkopf, B., & Vapnik, V. (2016). Unifying distillation and privileged information. In International Conference on Learning Representations (ICLR).
Lopez-Paz, D., Muandet, K., Schölkopf, B., & Tolstikhin, I. (2015). Towards a Learning Theory of Cause-Effect Inference. In Proceedings of the 32nd International Conference on Machine Learning (ICML).
Luong, M.-T., Le, Q. V., Sutskever, I., Vinyals, O., & Kaiser, L. (2015). Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114.
Lupyan, G., & Bergen, B. (2016). How Language Programs the Mind. Topics in Cognitive Science, 8(2), 408–424. Retrieved from http://doi.wiley.com/10.1111/tops.12155
Lupyan, G., & Clark, A. (2015). Words and the world: Predictive coding and the language-perception-cognition interface. Current Directions in Psychological Science, 24(4), 279–284.
Macindoe, O. (2013). Sidekick agents for sequential planning problems. Unpublished doctoral dissertation, Massachusetts Institute of Technology.
Magid, R. W., Sheskin, M., & Schulz, L. E. (2015). Imagination and the generation of new ideas. Cognitive Development, 34, 99–110.
Mansinghka, V., Selsam, D., & Perov, Y. (2014). Venture: A higher-order probabilistic programming platform with programmable inference. arXiv preprint arXiv:1404.0099.
Marcus, G. (1998). Rethinking Eliminative Connectionism. Cognitive Psychology, 37(3), 243–282.
Marcus, G. (2001). The algebraic mind: Integrating connectionism and cognitive science. MIT Press.
Markman, A. B., & Makin, V. S. (1998). Referential communication and category acquisition. Journal of Experimental Psychology: General, 127(4), 331–54.
Markman, A. B., & Ross, B. H. (2003). Category use and category learning. Psychological Bulletin, 129(4), 592–613.
Markman, E. M. (1989). Categorization and Naming in Children. Cambridge, MA: MIT Press.
Marr, D. C. (1982). Vision. San Francisco, CA: W.H. Freeman and Company.
Marr, D. C., & Nishihara, H. K. (1978). Representation and recognition of the spatial organization of three-dimensional shapes. Proceedings of the Royal Society of London. Series B, 200(1140), 269–94.
McClelland, J. L. (1988). Parallel distributed processing: Implications for cognition and development (Tech. Rep.). DTIC Document.
McClelland, J. L., Botvinick, M. M., Noelle, D. C., Plaut, D. C., Rogers, T. T., Seidenberg, M. S., & Smith, L. B. (2010). Letting structure emerge: connectionist and dynamical systems approaches to cognition. Trends in Cognitive Sciences, 14(8), 348–56.
McClelland, J. L., McNaughton, B. L., & O'Reilly, R. C. (1995). Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychological Review, 102(3), 419–57.
McClelland, J. L., Rumelhart, D. E., & the PDP Research Group. (1986). Parallel Distributed Processing: Explorations in the microstructure of cognition. Volume II. Cambridge, MA: MIT Press.
Mikolov, T., Joulin, A., & Baroni, M. (2016). A Roadmap towards Machine Intelligence. arXiv preprint. Retrieved from http://arxiv.org/abs/1511.08130
Mikolov, T., Sutskever, I., & Chen, K. (2013). Distributed Representations of Words and Phrases and their Compositionality. Advances in Neural Information Processing Systems.
Miller, E. G., Matsakis, N. E., & Viola, P. A. (2000). Learning from one example through shared densities on transformations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
Miller, G. A., & Johnson-Laird, P. N. (1976). Language and perception. Cambridge, MA: Belknap Press.
Minsky, M. L. (1974). A framework for representing knowledge. MIT-AI Laboratory Memo 306.
Minsky, M. L., & Papert, S. A. (1969). Perceptrons: An introduction to computational geometry. MIT Press.
Mitchell, T. M., Keller, R. M., & Kedar-Cabelli, S. T. (1986). Explanation-Based Generalization: A Unifying View. Machine Learning, 1, 47–80.
Mnih, A., & Gregor, K. (2014). Neural variational inference and learning in belief networks. In Proceedings of the 31st International Conference on Machine Learning (pp. 1791–1799).
Mnih, V., Heess, N., Graves, A., & Kavukcuoglu, K. (2014). Recurrent Models of Visual Attention. In Advances in Neural Information Processing Systems 27 (pp. 1–9).
Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., . . . Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529–533.
Mohamed, S., & Rezende, D. J. (2015). Variational information maximisation for intrinsically motivated reinforcement learning. In Advances in Neural Information Processing Systems (pp. 2125–2133).
Moreno-Bote, R., Knill, D. C., & Pouget, A. (2011). Bayesian sampling in visual perception. Proceedings of the National Academy of Sciences, 108, 12491–12496.
Murphy, G. L. (1988). Comprehending complex concepts. Cognitive Science, 12(4), 529–562.
Murphy, G. L., & Medin, D. L. (1985). The role of theories in conceptual coherence. Psychological Review, 92(3), 289–316.
Murphy, G. L., & Ross, B. H. (1994). Predictions from Uncertain Categorizations. Cognitive Psychology, 27, 148–193.
Neisser, U. (1966). Cognitive Psychology. New York: Appleton-Century-Crofts.
Newell, A., & Simon, H. A. (1961). GPS, a program that simulates human thought. Defense Technical Information Center.
Newell, A., & Simon, H. A. (1972). Human problem solving. Prentice-Hall.
Niv, Y. (2009). Reinforcement learning in the brain. Journal of Mathematical Psychology, 53, 139–154.
O'Donnell, T. J. (2015). Productivity and Reuse in Language: A Theory of Linguistic Computation and Storage. Cambridge, MA: MIT Press.
Osherson, D. N., & Smith, E. E. (1981). On the adequacy of prototype theory as a theory of concepts. Cognition, 9(1), 35–58.
Parisotto, E., Ba, J. L., & Salakhutdinov, R. (2016). Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning. In International Conference on Learning Representations (ICLR). Retrieved from http://arxiv.org/abs/1511.06342
Pecevski, D., Buesing, L., & Maass, W. (2011). Probabilistic inference in general graphical models through sampling in stochastic networks of spiking neurons. PLoS Computational Biology, 7, e1002294.
Peterson, J. C., Abbott, J. T., & Griffiths, T. L. (2016). Adapting Deep Network Features to Capture Psychological Representations. In Proceedings of the 38th Annual Conference of the Cognitive Science Society.
Piantadosi, S. T. (2011). Learning and the language of thought. Unpublished doctoral dissertation, Massachusetts Institute of Technology.
Pinker, S. (2007). The Stuff of Thought. Penguin.
Pinker, S., & Prince, A. (1988). On language and connectionism: Analysis of a parallel distributed processing model of language acquisition. Cognition, 28, 73–193.
Power, J. M., Thompson, L. T., Moyer, J. R., & Disterhoft, J. F. (1997). Enhanced synaptic transmission in CA1 hippocampus after eyeblink conditioning. Journal of Neurophysiology, 78, 1184–1187.
Premack, D., & Premack, A. J. (1997). Infants Attribute Value to the Goal-Directed Actions of Self-propelled Objects. Journal of Cognitive Neuroscience, 9(6). doi: 10.1162/jocn.1997.9.6.848
Reed, S., & de Freitas, N. (2016). Neural Programmer-Interpreters. In International Conference on Learning Representations (ICLR). Retrieved from http://arxiv.org/abs/1511.06279
Rehder, B. (2003). A causal-model theory of conceptual representation and categorization. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29(6), 1141–59.
Rehder, B., & Hastie, R. (2001). Causal Knowledge and Categories: The Effects of Causal Beliefs on Categorization, Induction, and Similarity. Journal of Experimental Psychology: General, 130(3), 323–360.
Rehling, J. A. (2001). Letter Spirit (Part Two): Modeling Creativity in a Visual Domain. Unpublished doctoral dissertation, Indiana University.
Rezende, D. J., Mohamed, S., Danihelka, I., Gregor, K., & Wierstra, D. (2016). One-Shot Generalization in Deep Generative Models. In International Conference on Machine Learning (ICML). Retrieved from http://arxiv.org/abs/1603.05106v1
Rezende, D. J., Mohamed, S., & Wierstra, D. (2014). Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning (ICML).
Rips, L. J. (1975). Inductive judgments about natural categories. Journal of Verbal Learning and Verbal Behavior, 14(6), 665–681.
Rips, L. J., & Hespos, S. J. (2015). Divisions of the physical world: Concepts of objects and substances. Psychological Bulletin, 141, 786–811.
Rogers, T. T., & McClelland, J. L. (2004). Semantic Cognition. Cambridge, MA: MIT Press.
Rosenblatt, F. (1958). The perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review, 65, 386–408.
Rougier, N. P., Noelle, D. C., Braver, T. S., Cohen, J. D., & O'Reilly, R. C. (2005). Prefrontal cortex and flexible cognitive control: Rules without symbols. Proceedings of the National Academy of Sciences (PNAS), 102(20), 7338–7343.
Rumelhart, D. E., Hinton, G., & Williams, R. (1986). Learning representations by back-propagating errors. Nature, 323(9), 533–536.
Rumelhart, D. E., & McClelland, J. L. (1986). On Learning the Past Tenses of English Verbs. In Parallel distributed processing: Explorations in the microstructure of cognition (pp. 216–271). Cambridge, MA: MIT Press.
Rumelhart, D. E., McClelland, J. L., & the PDP Research Group. (1986). Parallel Distributed Processing: Explorations in the microstructure of cognition. Volume I. Cambridge, MA: MIT Press.
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., . . . Fei-Fei, L. (2015). ImageNet large scale visual recognition challenge (Tech. Rep.).
Russell, S., & Norvig, P. (2003). Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ: Prentice Hall.
Rusu, A. A., Rabinowitz, N. C., Desjardins, G., Soyer, H., Kirkpatrick, J., Kavukcuoglu, K., . . . Hadsell, R. (2016). Progressive Neural Networks. arXiv preprint. Retrieved from http://arxiv.org/abs/1606.04671
Salakhutdinov, R., Tenenbaum, J., & Torralba, A. (2012). One-shot learning with a hierarchical nonparametric Bayesian model. JMLR Workshop on Unsupervised and Transfer Learning, 27, 195–207.
Salakhutdinov, R., Tenenbaum, J. B., & Torralba, A. (2013). Learning with Hierarchical-Deep Models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8), 1958–71.
Salakhutdinov, R., Torralba, A., & Tenenbaum, J. (2011). Learning to Share Visual Appearance for Multiclass Object Detection. In Computer Vision and Pattern Recognition (CVPR).
Sanborn, A. N., Mansinghka, V. K., & Griffiths, T. L. (2013). Reconciling intuitive physics and Newtonian mechanics for colliding objects. Psychological Review, 120(2), 411.
Scellier, B., & Bengio, Y. (2016). Towards a biologically plausible backprop. arXiv preprint arXiv:1602.05179.
Schank, R. C. (1972). Conceptual dependency: A theory of natural language understanding. Cognitive Psychology, 3, 552–631.
Schaul, T., Quan, J., Antonoglou, I., & Silver, D. (2016). Prioritized experience replay. In International Conference on Learning Representations (ICLR). Retrieved from http://arxiv.org/abs/1511.05952
Schlottmann, A., Cole, K., Watts, R., & White, M. (2013). Domain-specific perceptual causality in children depends on the spatio-temporal configuration, not motion onset. Frontiers in Psychology, 4. doi: 10.3389/fpsyg.2013.00365
Schlottmann, A., Ray, E. D., Mitchell, A., & Demetriou, N. (2006). Perceived physical and social causality in animated motions: Spontaneous reports and ratings. Acta Psychologica, 123, 112–143. doi: 10.1016/j.actpsy.2006.05.006
Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61, 85–117.
Scholl, B. J., & Gao, T. (2013). Perceiving Animacy and Intentionality: Visual Processing or Higher-Level Judgment? In Social perception: Detection and interpretation of animacy, agency, and intention.
Schultz, W., Dayan, P., & Montague, P. R. (1997). A neural substrate of prediction and reward. Science, 275, 1593–1599.
Schulz, L. (2012). The origins of inquiry: Inductive inference and exploration in early childhood. Trends in Cognitive Sciences, 16(7), 382–9.
Schulz, L. E., Gopnik, A., & Glymour, C. (2007). Preschool children learn about causal structure from conditional interventions. Developmental Science, 10, 322–332. doi: 10.1111/j.1467-7687.2007.00587.x
Sermanet, P., Eigen, D., Zhang, X., Mathieu, M., Fergus, R., & LeCun, Y. (2014). OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks. In International Conference on Learning Representations (ICLR).
Shafto, P., Goodman, N. D., & Griffiths, T. L. (2014). A rational account of pedagogical reasoning: Teaching by, and learning from, examples. Cognitive Psychology, 71, 55–89.
Shultz, T. R. (2003). Computational developmental psychology. MIT Press.
Siegler, R. S., & Chen, Z. (1998). Developmental differences in rule learning: A microgenetic analysis. Cognitive Psychology, 36(3), 273–310.
Silver, D. (2016). Personal communication.
Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Driessche, G. V. D., . . . Hassabis, D. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7585), 484–489.
Smith, L. B., Jones, S. S., Landau, B., Gershkoff-Stowe, L., & Samuelson, L. (2002). Object name learning provides on-the-job training for attention. Psychological Science, 13(1), 13–19.
Solomon, K., Medin, D., & Lynch, E. (1999). Concepts do more than categorize. Trends in Cognitive Sciences, 3(3), 99–105.
Spelke, E. S. (1990). Principles of Object Perception. Cognitive Science, 14(1), 29–56.
Spelke, E. S. (2003). Core knowledge. Attention and performance, 20.
Spelke, E. S., Gutheil, G., & Van de Walle, G. (1995). The development of object perception. In Visual cognition: An invitation to cognitive science, Vol. 2 (2nd ed., pp. 297–330).
Spelke, E. S., & Kinzler, K. D. (2007). Core knowledge. Developmental Science, 10(1), 89–96.
Srivastava, N., & Salakhutdinov, R. (2013). Discriminative Transfer Learning with Tree-based Priors. In Advances in Neural Information Processing Systems 26.
Stadie, B. C., Levine, S., & Abbeel, P. (2016). Incentivizing Exploration In Reinforcement Learning With Deep Predictive Models. arXiv preprint. Retrieved from http://arxiv.org/abs/1507.00814
Stahl, A. E., & Feigenson, L. (2015). Observing the unexpected enhances infants' learning and exploration. Science, 348(6230), 91–94.
Sternberg, R. J., & Davidson, J. E. (1995). The nature of insight. The MIT Press.
Stuhlmüller, A., Taylor, J., & Goodman, N. D. (2013). Learning stochastic inverses. In Advances in Neural Information Processing Systems (pp. 3048–3056).
Sukhbaatar, S., Szlam, A., Weston, J., & Fergus, R. (2015). End-To-End Memory Networks. In Advances in Neural Information Processing Systems 29. Retrieved from http://arxiv.org/abs/1503.08895
Sutton, R. S. (1990). Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the Seventh International Conference on Machine Learning (pp. 216–224).
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., . . . Rabinovich, A. (2014). Going Deeper with Convolutions. arXiv preprint. Retrieved from http://arxiv.org/abs/1409.4842
Tauber, S., & Steyvers, M. (2011). Using inverse planning and theory of mind for social goal inference. In Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 2480–2485).
Téglás, E., Vul, E., Girotto, V., Gonzalez, M., Tenenbaum, J. B., & Bonatti, L. L. (2011). Pure reasoning in 12-month-old infants as probabilistic inference. Science, 332(6033), 1054–9.
Tenenbaum, J. B., Kemp, C., Griffiths, T. L., & Goodman, N. D. (2011). How to Grow a Mind: Statistics, Structure, and Abstraction. Science, 331(6022), 1279–85.
Tian, Y., & Zhu, Y. (2016). Better Computer Go Player with Neural Network and Long-term Prediction. In International Conference on Learning Representations (ICLR). Retrieved from http://arxiv.org/abs/1511.06410
Tomasello, M. (2010). Origins of human communication. MIT Press.
Torralba, A., Murphy, K. P., & Freeman, W. T. (2007). Sharing visual features for multiclass and multiview object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(5), 854–869.
Tremoulet, P. D., & Feldman, J. (2000). Perception of animacy from the motion of a single object. Perception, 29, 943–951.
Tsividis, P., Gershman, S. J., Tenenbaum, J. B., & Schulz, L. (2013). Information Selection in Noisy Environments with Large Action Spaces. In Proceedings of the 36th Annual Conference of the Cognitive Science Society (pp. 1622–1627).
Tsividis, P., Tenenbaum, J. B., & Schulz, L. E. (2015). Constraints on hypothesis selection in causal learning. Proceedings of the 37th Annual Cognitive Science Society.
Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, LIX(236), 433–460. Retrieved from http://mind.oxfordjournals.org/content/LIX/236/433 doi: 10.1093/mind/LIX.236.433
Tversky, B., & Hemenway, K. (1984). Objects, Parts, and Categories. Journal of Experimental Psychology: General, 113(2), 169–191.
Ullman, S., Harari, D., & Dorfman, N. (2012). From simple innate biases to complex visual concepts. Proceedings of the National Academy of Sciences, 109(44), 18215–18220.
Ullman, T. D., Goodman, N. D., & Tenenbaum, J. B. (2012). Theory learning as stochastic search in the language of thought. Cognitive Development, 27(4), 455–480.
van den Hengel, A., Russell, C., Dick, A., Bastian, J., Pooley, D., Fleming, L., & Agapito, L. (2015). Part-based modelling of compound scenes from images. In Computer Vision and Pattern Recognition (CVPR) (pp. 878–886).
van Hasselt, H., Guez, A., & Silver, D. (2016). Deep Reinforcement Learning with Double Q-learning. In Thirtieth Conference on Artificial Intelligence (AAAI).
Vinyals, O., Blundell, C., Lillicrap, T., Kavukcuoglu, K., & Wierstra, D. (2016). Matching Networks for One Shot Learning. arXiv preprint. Retrieved from http://arxiv.org/abs/1606.04080
Vinyals, O., Toshev, A., Bengio, S., & Erhan, D. (2014). Show and Tell: A Neural Image Caption Generator. In International Conference on Machine Learning (ICML).
Vul, E., Goodman, N., Griffiths, T. L., & Tenenbaum, J. B. (2014). One and Done? Optimal Decisions From Very Few Samples. Cognitive Science.
Wang, Z., Schaul, T., Hessel, M., van Hasselt, H., Lanctot, M., & de Freitas, N. (2016). Dueling network architectures for deep reinforcement learning. arXiv preprint. Retrieved from http://arxiv.org/abs/1511.06581
Ward, T. B. (1994). Structured imagination: The role of category structure in exemplar generation. Cognitive Psychology, 27, 1–40.
Watkins, C. J., & Dayan, P. (1992). Q-learning. Machine Learning, 8, 279–292.
Wellman, H. M., & Gelman, S. A. (1992). Cognitive development: Foundational theories of core domains. Annual Review of Psychology, 43, 337–375.
Wellman, H. M., & Gelman, S. A. (1998). Knowledge acquisition in foundational domains. In The handbook of child psychology (pp. 523–573). Retrieved from http://doi.apa.org/psycinfo/2005-01927-010
Weng, C., Yu, D., Watanabe, S., & Juang, B.-H. F. (2014). Recurrent deep neural networks for robust speech recognition. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 5532–5536).
Weston, J., Chopra, S., & Bordes, A. (2015). Memory Networks. In International Conference on Learning Representations (ICLR).
Williams, J. J., & Lombrozo, T. (2010). The role of explanation in discovery and generalization: Evidence from category learning. Cognitive Science, 34(5), 776–806.
Winograd, T. (1972). Understanding natural language. Cognitive Psychology, 3, 1–191.
Winston, P. H. (1975). Learning structural descriptions from examples. In P. H. Winston (Ed.), The psychology of computer vision. New York: McGraw-Hill.
Xu, F., & Tenenbaum, J. B. (2007). Word learning as Bayesian inference. Psychological Review, 114(2), 245–272.
Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhutdinov, R., ... Bengio, Y. (2015). Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. In International Conference on Machine Learning (ICML). Retrieved from http://arxiv.org/abs/1502.03044
Yamins, D. L. K., Hong, H., Cadieu, C. F., Solomon, E. A., Seibert, D., & DiCarlo, J. J. (2014). Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences, 111(23), 8619–8624.
Yildirim, I., Kulkarni, T. D., Freiwald, W. A., & Tenenbaum, J. B. (2015). Efficient analysis-by-synthesis in vision: A computational framework, behavioral tests, and comparison with neural representations. In Proceedings of the 37th Annual Conference of the Cognitive Science Society.
Yosinski, J., Clune, J., Bengio, Y., & Lipson, H. (2014). How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems (NIPS).
Zeiler, M. D., & Fergus, R. (2014). Visualizing and Understanding Convolutional Networks. In European Conference on Computer Vision (ECCV).
Published as a conference paper at ICLR 2017
# RECURRENT BATCH NORMALIZATION
Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre & Aaron Courville
MILA - Université de Montréal
[email protected]
# ABSTRACT
We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization.
# INTRODUCTION
Recurrent neural network architectures such as LSTM (Hochreiter & Schmidhuber, 1997) and GRU (Cho et al., 2014) have recently exhibited state-of-the-art performance on a wide range of complex sequential problems including speech recognition (Amodei et al., 2015), machine translation (Bahdanau et al., 2015) and image and video captioning (Xu et al., 2015; Yao et al., 2015). Top-performing models, however, are based on very high-capacity networks that are computationally intensive and costly to train. Effective optimization of recurrent neural networks is thus an active area of study (Pascanu et al., 2012; Martens & Sutskever, 2011; Ollivier, 2013).
# Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs
Yu. A. Malkov, D. A. Yashunin
Abstract – We present a new approach for the approximate K-nearest neighbor search based on navigable small world graphs with controllable hierarchy (Hierarchical NSW, HNSW). The proposed solution is fully graph-based, without any need for additional search structures, which are typically used at the coarse search stage of most proximity graph techniques. Hierarchical NSW incrementally builds a multi-layer structure consisting of a hierarchical set of proximity graphs (layers) for nested subsets of the stored elements. The maximum layer in which an element is present is selected randomly with an exponentially decaying probability distribution. This allows producing graphs similar to the previously studied Navigable Small World (NSW) structures while additionally having the links separated by their characteristic distance scales. Starting search from the upper layer together with utilizing the scale separation boosts the performance compared to NSW and allows a logarithmic complexity scaling. Additional employment of a heuristic for selecting proximity graph neighbors significantly increases performance at high recall and in case of highly clustered data. Performance evaluation has demonstrated that the proposed general metric space search index is able to strongly outperform previous opensource state-of-the-art vector-only approaches. Similarity of the algorithm to the skip list structure allows straightforward balanced distributed implementation.
It is well-known that for deep feed-forward neural networks, covariate shift (Shimodaira, 2000; Ioffe & Szegedy, 2015) degrades the efficiency of training. Covariate shift is a change in the distribution of the inputs to a model. This occurs continuously during training of feed-forward neural networks, where changing the parameters of a layer affects the distribution of the inputs to all layers above it. As a result, the upper layers are continually adapting to the shifting input distribution and unable to learn effectively. This internal covariate shift (Ioffe & Szegedy, 2015) may play an especially important role in recurrent neural networks, which resemble very deep feed-forward networks.

Batch normalization (Ioffe & Szegedy, 2015) is a recently proposed technique for controlling the distributions of feed-forward neural network activations, thereby reducing internal covariate shift. It involves standardizing the activations going into each layer, enforcing their means and variances to be invariant to changes in the parameters of the underlying layers. This effectively decouples each layer's parameters from those of other layers, leading to a better-conditioned optimization problem. Indeed, deep neural networks trained with batch normalization converge significantly faster and generalize better.
Index Terms – Graph and tree search strategies, Artificial Intelligence, Information Search and Retrieval, Information Storage and Retrieval, Information Technology and Systems, Search process, Graphs and networks, Data Structures, Nearest neighbor search, Big data, Approximate search, Similarity search
# 1 INTRODUCTION
The constantly growing amount of available information resources has led to high demand for scalable and efficient similarity search data structures. One of the generally used approaches for information search is K-Nearest Neighbor Search (K-NNS). K-NNS assumes a defined distance function between the data elements and aims at finding the K elements from the dataset which minimize the distance to a given query. Such algorithms are used in many applications, such as non-parametric machine learning algorithms, image feature matching in large scale databases [1] and semantic document retrieval [2]. A naïve approach to K-NNS is to compute the distances between the query and every element in the dataset and select the elements with minimal distance. Unfortunately, the complexity of the naïve approach scales linearly with the number of stored elements, making it infeasible for large-scale datasets. This has led to high interest in the development of fast and scalable K-NNS algorithms.
Although batch normalization has demonstrated significant training speed-ups and generalization benefits in feed-forward networks, it is proven to be difficult to apply in recurrent architectures (Laurent et al., 2016; Amodei et al., 2015). It has found limited use in stacked RNNs, where the normalization is applied "vertically", i.e. to the input of each RNN, but not "horizontally" between timesteps. RNNs are deeper in the time direction, and as such batch normalization would be most beneficial when applied horizontally. However, Laurent et al. (2016) hypothesized that applying batch normalization in this way hurts training because of exploding gradients due to repeated rescaling.

Our findings run counter to this hypothesis. We show that it is both possible and highly beneficial to apply batch normalization in the hidden-to-hidden transition of recurrent models. In particular, we describe a reparameterization of LSTM (Section 3) that involves batch normalization and demonstrate that it is easier to optimize and generalizes better. In addition, we empirically analyze the gradient backpropagation and show that proper initialization of the batch normalization parameters is crucial to avoiding vanishing gradient (Section 4). We evaluate our proposal on several sequential problems and show (Section 5) that our LSTM reparameterization consistently outperforms the LSTM baseline across tasks, in terms of both time to convergence and performance.
Exact solutions for K-NNS [3-5] may offer a substantial search speedup only in case of relatively low dimensional data due to the "curse of dimensionality". To overcome this problem a concept of Approximate Nearest Neighbors Search (K-ANNS) was proposed, which relaxes the condition of the exact search by allowing a small number of errors. The quality of an inexact search (the recall) is defined as the ratio between the number of found true nearest neighbors and K. The most popular K-ANNS solutions are based on approximated versions of tree algorithms [6, 7], locality-sensitive hashing (LSH) [8, 9] and product quantization (PQ) [10-17]. Proximity graph K-ANNS algorithms [10, 18-26] have recently gained popularity offering a better performance on high dimensional datasets. However, the power-law scaling of the proximity graph routing causes extreme performance degradation in case of low dimensional or clustered data.
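To make the recall definition concrete, a trivial Python helper might look as follows (the function name and interface are our own, not part of the paper):

```python
def recall_at_k(found, true_neighbors):
    """Fraction of the K true nearest neighbors recovered by an inexact search."""
    return len(set(found) & set(true_neighbors)) / len(true_neighbors)
```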
Liao & Poggio (2016) simultaneously investigated batch normalization in recurrent neural networks, albeit only for very short sequences (10 steps). Ba et al. (2016) independently developed a variant of batch normalization that is also applicable to recurrent neural networks and delivers similar improvements as our method.
# 2 PREREQUISITES
2.1 LSTM
Long Short-Term Memory (LSTM) networks are an instance of a more general class of recurrent neural networks (RNNs), which we review briefly in this paper. Given an input sequence $X = (x_1, x_2, \ldots, x_T)$, an RNN defines a sequence of hidden states $h_t$ according to

$$h_t = \phi(W_h h_{t-1} + W_x x_t + b) \tag{1}$$

where $W_h \in \mathbb{R}^{d_h \times d_h}$, $W_x \in \mathbb{R}^{d_x \times d_h}$, $b \in \mathbb{R}^{d_h}$ and the initial state $h_0 \in \mathbb{R}^{d_h}$ are model parameters. A popular choice for the activation function $\phi(\cdot)$ is tanh.
In this paper we propose the Hierarchical Navigable Small World (Hierarchical NSW, HNSW), a new fully graph-based incremental K-ANNS structure, which can offer a much better logarithmic complexity scaling. The main contributions are: explicit selection of the graph's enter-point node, separation of links by different scales and use of an advanced heuristic to select the neighbors. Alternatively, the Hierarchical NSW algorithm can be seen as an extension of the probabilistic skip list structure [27] with proximity graphs instead of the linked lists. Performance evaluation has demonstrated that the proposed general metric space method is able to strongly outperform previous opensource state-of-the-art approaches suitable only for vector spaces.

Y. Malkov is with the Federal state budgetary institution of science Institute of Applied Physics of the Russian Academy of Sciences, 46 Ul'yanov Street, 603950 Nizhny Novgorod, Russia. E-mail: [email protected].

This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.

D. Yashunin. Address: 31-33 ul. Krasnozvezdnaya, 603104 Nizhny Novgorod, Russia. E-mail: [email protected]
RNNs are popular in sequence modeling thanks to their natural ability to process variable-length sequences. However, training RNNs using first-order stochastic gradient descent (SGD) is notoriously difficult due to the well-known problem of exploding/vanishing gradients (Bengio et al., 1994; Hochreiter, 1991; Pascanu et al., 2012). Gradient vanishing occurs when states $h_t$ are not influenced by small changes in much earlier states $h_\tau$, $\tau \ll t$, preventing learning of long-term dependencies in the input data. Although learning long-term dependencies is fundamentally difficult (Bengio et al., 1994), its effects can be mitigated through architectural variations such as LSTM (Hochreiter & Schmidhuber, 1997), GRU (Cho et al., 2014) and iRNN/uRNN (Le et al., 2015; Arjovsky et al., 2015).
# 2 RELATED WORKS
2.1 Proximity graph techniques

In the vast majority of studied graph algorithms searching takes the form of greedy routing in k-Nearest Neighbor (k-NN) graphs [10, 18-26]. For a given proximity graph, we start the search at some enter point (it can be random or supplied by a separate algorithm) and iteratively traverse the graph. At each step of the traversal the algorithm examines the distances from a query to the neighbors of a current base node and then selects as the next base node the adjacent node that minimizes the distance, while constantly keeping track of the best discovered neighbors. The search is terminated when some stopping condition is met (e.g. the number of distance calculations). Links to the closest neighbors in a k-NN graph serve as a simple approximation of the Delaunay graph [25, 26] (a graph which guarantees that the result of a basic greedy graph traversal is always the nearest neighbor). Unfortunately, the Delaunay graph cannot be efficiently constructed without prior information about the structure of a space [4], but its approximation by the nearest neighbors can be done by using only distances between the stored elements. It was shown that proximity graph approaches with such approximation perform competitively to other K-ANNS techniques, such as kd-trees or LSH [18-26].
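As a rough sketch of the greedy traversal described above (the adjacency-dict representation, the names, and the single-base-node simplification are ours; practical implementations also keep a dynamic list of the best discovered neighbors):

```python
def greedy_search(graph, dist_to_query, enter_point):
    # graph: dict mapping node id -> iterable of neighbor ids.
    # dist_to_query: callable giving the distance from the query to a node.
    current, current_dist = enter_point, dist_to_query(enter_point)
    while True:
        # Examine the current base node's neighbors and pick the closest one.
        candidates = [(dist_to_query(n), n) for n in graph[current]]
        if not candidates:
            return current
        best_dist, best = min(candidates)
        if best_dist >= current_dist:
            return current  # local minimum: no neighbor is closer to the query
        current, current_dist = best, best_dist
```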
In what follows, we focus on the LSTM architecture (Hochreiter & Schmidhuber, 1997) with recurrent transition given by

$$\begin{pmatrix} \tilde{f}_t \\ \tilde{i}_t \\ \tilde{o}_t \\ \tilde{g}_t \end{pmatrix} = W_h h_{t-1} + W_x x_t + b \tag{2}$$

$$c_t = \sigma(\tilde{f}_t) \odot c_{t-1} + \sigma(\tilde{i}_t) \odot \tanh(\tilde{g}_t) \tag{3}$$

$$h_t = \sigma(\tilde{o}_t) \odot \tanh(c_t) \tag{4}$$

where $W_h \in \mathbb{R}^{d_h \times 4d_h}$, $W_x \in \mathbb{R}^{d_x \times 4d_h}$, $b \in \mathbb{R}^{4d_h}$ and the initial states $h_0 \in \mathbb{R}^{d_h}$, $c_0 \in \mathbb{R}^{d_h}$ are model parameters. $\sigma$ is the logistic sigmoid function, and the $\odot$ operator denotes the Hadamard product.
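A minimal numpy sketch of one step of this transition (the shapes, names, and absence of batching are our simplifications, not code from the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W_h, W_x, b):
    # W_h: (d_h, 4*d_h), W_x: (d_x, 4*d_h), b: (4*d_h,)
    z = h_prev @ W_h + x_t @ W_x + b                      # Equation (2)
    f, i, o, g = np.split(z, 4)                           # gate pre-activations
    c_t = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)   # Equation (3)
    h_t = sigmoid(o) * np.tanh(c_t)                       # Equation (4)
    return h_t, c_t
```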
The main drawbacks of the k-NN graph approaches are: 1) the power law scaling of the number of steps with the dataset size during the routing process [28, 29]; 2) a possible loss of global connectivity which leads to poor search results on clustered data. To overcome these problems many hybrid approaches have been proposed that use auxiliary algorithms applicable only for vector data (such as product quantization [10]) to find better candidates for the enter nodes by doing a coarse search.

In [25, 26, 30] the authors proposed a proximity graph K-ANNS algorithm called Navigable Small World (NSW, also known as Metricized Small World, MSW), which utilized navigable graphs, i.e. graphs with logarithmic or polylogarithmic scaling of the number of hops during the greedy traversal with respect to the network size [31, 32]. The NSW graph is constructed via consecutive insertion of elements in random order, bidirectionally connecting them to the M closest neighbors from the previously inserted elements. The M closest neighbors are found using the structure's search procedure (a variant of a greedy search from multiple random enter nodes). Links to the closest neighbors of the elements inserted in the beginning of the construction later become bridges between the network hubs that keep the overall graph connectivity and allow the logarithmic scaling of the number of hops during greedy routing.
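A toy sketch of that construction loop, reusing the greedy_search stub above to stand in for NSW's multi-enter-node search (all names and the data handling are our own simplification):

```python
def nsw_insert(graph, dist, new_id, M, enter_points):
    # Approximate the M nearest neighbors of the new element with greedy
    # searches from several enter points, then link bidirectionally.
    found = {greedy_search(graph, lambda n: dist(new_id, n), ep)
             for ep in enter_points}
    neighbors = sorted(found, key=lambda n: dist(new_id, n))[:M]
    graph[new_id] = list(neighbors)
    for n in neighbors:
        graph[n].append(new_id)  # bidirectional links
```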
The LSTM differs from simple RNNs in that it has an additional memory cell $c_t$ whose update is nearly linear, which allows the gradient to flow back through time more easily. In addition, unlike the RNN, which overwrites its content at each timestep, the update of the LSTM cell is regulated by a set of gates. The forget gate $f_t$ determines the extent to which information is carried over from the previous timestep, and the input gate $i_t$ controls the flow of information from the current input $x_t$. The output gate $o_t$ allows the model to read from the cell. This carefully controlled interaction with the cell is what allows the LSTM to robustly retain information for long periods of time.
2.2 BATCH NORMALIZATION
Covariate shift (Shimodaira, 2000) is a phenomenon in machine learning where the features presented to a model change in distribution. In order for learning to succeed in the presence of covariate shift, the model's parameters must be adjusted not just to learn the concept at hand but also to adapt to the changing distribution of the inputs. In deep neural networks, this problem manifests as internal covariate shift (Ioffe & Szegedy, 2015), where changing the parameters of a layer affects the distribution of the inputs to all layers above it.
The construction phase of the NSW structure can be efficiently parallelized without global synchronization and without measurable effect on accuracy [26], being a good choice for distributed search systems. The NSW approach delivered state-of-the-art performance on some datasets [33, 34]; however, due to the overall polylogarithmic complexity scaling, the algorithm was still prone to severe performance degradation on low dimensional datasets (on which NSW could lose to tree-based algorithms by several orders of magnitude [34]).

2.2 Navigable small world models

Networks with logarithmic or polylogarithmic scaling of the greedy graph routing are known as navigable small world networks [31, 32]. Such networks are an important topic of complex network theory aiming at understanding the underlying mechanisms of real-life network formation in order to apply them for applications of scalable routing [32, 35, 36] and distributed similarity search [25, 26, 30, 37-40].
Batch Normalization (Ioffe & Szegedy, 2015) is a recently proposed network reparameterization which aims to reduce internal covariate shift. It does so by standardizing the activations using empirical estimates of their means and standard deviations. However, it does not decorrelate the activations due to the computationally costly matrix inversion. The batch normalizing transform is as follows:
$$\mathrm{BN}(h; \gamma, \beta) = \beta + \gamma \odot \frac{h - \widehat{\mathbb{E}}[h]}{\sqrt{\widehat{\mathrm{Var}}[h] + \epsilon}} \tag{5}$$

where $h \in \mathbb{R}^d$ is the vector of (pre)activations to be normalized, $\gamma \in \mathbb{R}^d$, $\beta \in \mathbb{R}^d$ are model parameters that determine the mean and standard deviation of the normalized activation, and $\epsilon \in \mathbb{R}$ is a regularization hyperparameter. The division should be understood to proceed elementwise. At training time, the statistics $\mathbb{E}[h]$ and $\mathrm{Var}[h]$ are estimated by the sample mean and sample variance of the current minibatch. This allows for backpropagation through the statistics, preserving the convergence properties of stochastic gradient descent. During inference, the statistics are typically estimated based on the entire training set, so as to produce a deterministic prediction.
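A minimal numpy sketch of this transform using training-time minibatch statistics (variable names are ours; inference-time population statistics are omitted):

```python
import numpy as np

def batch_norm(h, gamma, beta, eps=1e-5):
    # h: (batch, d) minibatch of (pre)activations; per-feature statistics.
    h_hat = (h - h.mean(axis=0)) / np.sqrt(h.var(axis=0) + eps)
    return beta + gamma * h_hat  # Equation (5)
```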
# 3 BATCH-NORMALIZED LSTM
The first works to consider spatial models of navigable networks were done by J. Kleinberg [31, 41] as social network models for the famous Milgram experiment [42]. Kleinberg studied a variant of random Watts-Strogatz networks [43], using a regular lattice graph in d-dimensional vector space together with augmentation of long-range links following a specific long-link length distribution $r^{-\alpha}$. For $\alpha = d$ the number of hops to get to the target by greedy routing scales polylogarithmically (instead of a power law for any other value of $\alpha$). This idea has inspired development of many K-NNS and K-ANNS algorithms based on the navigation effect [37-40]. But even though Kleinberg's navigability criterion can in principle be extended for more general spaces, in order to build such a navigable network one has to know the data distribution beforehand. In addition, greedy routing in Kleinberg's graphs suffers from polylogarithmic complexity scalability at best.
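For illustration, one long-range contact on a one-dimensional lattice with link probability proportional to $r^{-\alpha}$ could be drawn as below (our own minimal construction, not code from [31]):

```python
import numpy as np

def sample_long_link(node, n, alpha, rng):
    # P(link node -> v) is proportional to |node - v| ** (-alpha).
    others = np.array([v for v in range(n) if v != node])
    weights = np.abs(others - node) ** (-float(alpha))
    return int(rng.choice(others, p=weights / weights.sum()))

# Example: sample_long_link(10, 100, 1.0, np.random.default_rng(0))
```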
This section introduces a reparameterization of LSTM that takes advantage of batch normalization. Contrary to Laurent et al. (2016); Amodei et al. (2015), we leverage batch normalization in both the input-to-hidden and the hidden-to-hidden transformations. We introduce the batch-normalizing transform BN( · ; γ, β) into the LSTM as follows:
$$\begin{pmatrix} \tilde{f}_t \\ \tilde{i}_t \\ \tilde{o}_t \\ \tilde{g}_t \end{pmatrix} = \mathrm{BN}(W_h h_{t-1}; \gamma_h, \beta_h) + \mathrm{BN}(W_x x_t; \gamma_x, \beta_x) + b \tag{6}$$

$$c_t = \sigma(\tilde{f}_t) \odot c_{t-1} + \sigma(\tilde{i}_t) \odot \tanh(\tilde{g}_t) \tag{7}$$

$$h_t = \sigma(\tilde{o}_t) \odot \tanh(\mathrm{BN}(c_t; \gamma_c, \beta_c)) \tag{8}$$
In our formulation, we normalize the recurrent term $W_h h_{t-1}$ and the input term $W_x x_t$ separately. Normalizing these terms individually gives the model better control over the relative contribution of the terms using the $\gamma_h$ and $\gamma_x$ parameters. We set $\beta_h = \beta_x = 0$ to avoid unnecessary redundancy, instead relying on the pre-existing parameter vector $b$ to account for both biases. In order to leave the LSTM dynamics intact and preserve the gradient flow through $c_t$, we do not apply batch normalization in the cell update.
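Putting Equations (6)-(8) together, one step can be sketched as follows, reusing the sigmoid and batch_norm helpers from the earlier sketches (the per-gate parameter names are ours, and per-time-step statistics are omitted for brevity):

```python
def bn_lstm_step(x_t, h_prev, c_prev, W_h, W_x, b,
                 gamma_h, gamma_x, gamma_c, beta_c):
    # Recurrent and input terms are normalized separately; beta_h and
    # beta_x are fixed to zero so that b accounts for both biases.
    z = (batch_norm(h_prev @ W_h, gamma_h, 0.0)
         + batch_norm(x_t @ W_x, gamma_x, 0.0) + b)       # Equation (6)
    f, i, o, g = np.split(z, 4, axis=-1)
    c_t = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)   # Equation (7)
    h_t = sigmoid(o) * np.tanh(batch_norm(c_t, gamma_c, beta_c))  # Equation (8)
    return h_t, c_t
```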
normalization to recurrent neural networks. Whereas previous works only apply
batch normalization to the input-to-hidden transformation of RNNs, we
demonstrate that it is both possible and beneficial to batch-normalize the
hidden-to-hidden transition, thereby reducing internal covariate shift between
time steps. We evaluate our proposal on various sequential problems such as
sequence classification, language modeling and question answering. Our
empirical results show that our batch-normalized LSTM consistently leads to
faster convergence and improved generalization. | http://arxiv.org/pdf/1603.09025 | Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville | cs.LG | null | null | cs.LG | 20160330 | 20170228 | [
{
"id": "1609.01704"
},
{
"id": "1609.09106"
},
{
"id": "1602.08210"
},
{
"id": "1504.00941"
},
{
"id": "1606.01305"
},
{
"id": "1511.06464"
},
{
"id": "1604.03640"
},
{
"id": "1512.02595"
},
{
"id": "1607.06450"
},
{
"id": "1502.03044"
}
] |
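As a reading aid for the BN-LSTM equations (6)-(8) above, here is a minimal NumPy sketch of a single training-time step; the `bn` helper, variable names, and shapes are assumptions of this illustration, not the authors' released code.

```python
import numpy as np

def bn(x, gamma, beta, eps=1e-5):
    # Batch-normalize over the minibatch axis, as in BN(.; gamma, beta).
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bn_lstm_step(x_t, h_prev, c_prev, Wx, Wh, b, gamma_x, gamma_h, gamma_c, beta_c):
    # Eq. (6): input and recurrent terms are normalized separately;
    # beta_h = beta_x = 0, so the only additive bias is the shared vector b.
    act = bn(h_prev @ Wh, gamma_h, 0.0) + bn(x_t @ Wx, gamma_x, 0.0) + b
    f, i, o, g = np.split(act, 4, axis=1)
    # Eq. (7): the cell update is left untouched to preserve gradient flow through c_t.
    c_t = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)
    # Eq. (8): the cell state is normalized before the output nonlinearity.
    h_t = sigmoid(o) * np.tanh(bn(c_t, gamma_c, beta_c))
    return h_t, c_t
```

Following the paper's recommendation below, gamma_x, gamma_h, and gamma_c would be initialized to 0.1.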
1603.09320 | 9 | Another well-known class of navigable networks are the scale-free models [32, 35, 36], which can reproduce several features of real-life networks and have been advertised for routing applications [35]. However, networks produced by such models have even worse power-law complexity scaling of the greedy search [44] and, just like Kleinberg's model, scale-free models require global knowledge of the data distribution, making them unusable for search applications.
The above-described NSW algorithm uses a simpler, previously unknown model of navigable networks, allowing decentralized graph construction and suitable for data in arbitrary spaces. It was suggested [44] that the NSW network formation mechanism may be responsible for the navigability of large-scale biological neural networks (the presence of which is disputable): similar models were able to describe the growth of small brain networks, while the model predicts several high-level features observed in large-scale neural networks. However, the NSW model also suffers from the polylogarithmic search complexity of the routing process.
# 3 MOTIVATION
The ways of improving the NSW search complexity can be identified through the analysis of the routing process, which was studied in detail in [32, 44]. The routing can be divided into two phases: "zoom-out" and "zoom-in" [32]. The greedy algorithm starts in the "zoom-out" phase | 1603.09320#9 | Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs | We present a new approach for the approximate K-nearest neighbor search based
on navigable small world graphs with controllable hierarchy (Hierarchical NSW,
HNSW). The proposed solution is fully graph-based, without any need for
additional search structures, which are typically used at the coarse search
stage of most proximity graph techniques. Hierarchical NSW incrementally
builds a multi-layer structure consisting of a hierarchical set of proximity
graphs (layers) for nested subsets of the stored elements. The maximum layer in
which an element is present is selected randomly with an exponentially decaying
probability distribution. This allows producing graphs similar to the
previously studied Navigable Small World (NSW) structures while additionally
having the links separated by their characteristic distance scales. Starting
search from the upper layer together with utilizing the scale separation boosts
the performance compared to NSW and allows a logarithmic complexity scaling.
Additional employment of a heuristic for selecting proximity graph neighbors
significantly increases performance at high recall and in case of highly
clustered data. Performance evaluation has demonstrated that the proposed
general metric space search index is able to strongly outperform previous
open-source state-of-the-art vector-only approaches. Similarity of the algorithm
to the skip list structure allows straightforward balanced distributed
implementation. | http://arxiv.org/pdf/1603.09320 | Yu. A. Malkov, D. A. Yashunin | cs.DS, cs.CV, cs.IR, cs.SI | 13 pages, 15 figures | null | cs.DS | 20160330 | 20180814 | [] |
1603.09025 | 10 | The batch normalization transform relies on batch statistics to standardize the LSTM activations. It would seem natural to share the statistics that are used for normalization across time, just as recurrent neural networks share their parameters over time. However, we find that simply averaging statistics over time severely degrades performance. Although LSTM activations do converge to a stationary distribution, we observe that their statistics during the initial transient differ significantly (see Figure 5 in Appendix A). Consequently, we recommend using separate statistics for each timestep to preserve information of the initial transient phase in the activations.1
Generalizing the model to sequences longer than those seen during training is straightforward thanks to the rapid convergence of the activations to their steady-state distributions (cf. Figure 5). For our experiments we estimate the population statistics separately for each timestep 1, . . . , Tmax where
1 Note that we separate only the statistics over time and not the γ and β parameters.
Tmax is the length of the longest training sequence. When at test time we need to generalize beyond Tmax, we use the population statistic of time Tmax for all time steps beyond it.
During training we estimate the statistics across the minibatch, independently for each timestep. At test time we use estimates obtained by averaging the minibatch estimates over the training set (a short code sketch of this scheme follows this entry). | 1603.09025#10 | Recurrent Batch Normalization | We propose a reparameterization of LSTM that brings the benefits of batch
normalization to recurrent neural networks. Whereas previous works only apply
batch normalization to the input-to-hidden transformation of RNNs, we
demonstrate that it is both possible and beneficial to batch-normalize the
hidden-to-hidden transition, thereby reducing internal covariate shift between
time steps. We evaluate our proposal on various sequential problems such as
sequence classification, language modeling and question answering. Our
empirical results show that our batch-normalized LSTM consistently leads to
faster convergence and improved generalization. | http://arxiv.org/pdf/1603.09025 | Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville | cs.LG | null | null | cs.LG | 20160330 | 20170228 | [
{
"id": "1609.01704"
},
{
"id": "1609.09106"
},
{
"id": "1602.08210"
},
{
"id": "1504.00941"
},
{
"id": "1606.01305"
},
{
"id": "1511.06464"
},
{
"id": "1604.03640"
},
{
"id": "1512.02595"
},
{
"id": "1607.06450"
},
{
"id": "1502.03044"
}
] |
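A hedged sketch of the test-time normalization described above, with per-timestep population statistics and reuse of the Tmax statistics beyond the training length; the array layout and function name are assumptions of this example.

```python
import numpy as np

def bn_inference(x_t, t, pop_mean, pop_var, gamma, beta, eps=1e-5):
    # pop_mean[s], pop_var[s]: population statistics for timestep s, obtained
    # by averaging minibatch estimates over the training set (one entry per
    # timestep 1..Tmax; gamma and beta are shared across time).
    T_max = len(pop_mean)
    s = min(t, T_max - 1)  # beyond Tmax, reuse the statistics of Tmax
    return gamma * (x_t - pop_mean[s]) / np.sqrt(pop_var[s] + eps) + beta
```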
1603.09320 | 10 | from a low degree node and traverses the graph while simultaneously increasing the node's degree until the characteristic radius of the node's links reaches the scale of the distance to the query. Before the latter happens, the average degree of a node can stay relatively small, which leads to an increased probability of being stuck in a distant false local minimum.
One can avoid the described problem in NSW by starting the search from a node with the maximum degree (good candidates are the first nodes inserted in the NSW structure [44]), directly going to the "zoom-in" phase of the search. Tests show that setting hubs as starting points substantially increases the probability of successful routing in the structure and provides significantly better performance on low-dimensional data. However, it still has only a polylogarithmic complexity scalability of a single greedy search at best, and performs worse on high-dimensional data compared to Hierarchical NSW. | 1603.09320#10 | Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs | We present a new approach for the approximate K-nearest neighbor search based
on navigable small world graphs with controllable hierarchy (Hierarchical NSW,
HNSW). The proposed solution is fully graph-based, without any need for
additional search structures, which are typically used at the coarse search
stage of most proximity graph techniques. Hierarchical NSW incrementally
builds a multi-layer structure consisting of a hierarchical set of proximity
graphs (layers) for nested subsets of the stored elements. The maximum layer in
which an element is present is selected randomly with an exponentially decaying
probability distribution. This allows producing graphs similar to the
previously studied Navigable Small World (NSW) structures while additionally
having the links separated by their characteristic distance scales. Starting
search from the upper layer together with utilizing the scale separation boosts
the performance compared to NSW and allows a logarithmic complexity scaling.
Additional employment of a heuristic for selecting proximity graph neighbors
significantly increases performance at high recall and in case of highly
clustered data. Performance evaluation has demonstrated that the proposed
general metric space search index is able to strongly outperform previous
open-source state-of-the-art vector-only approaches. Similarity of the algorithm
to the skip list structure allows straightforward balanced distributed
implementation. | http://arxiv.org/pdf/1603.09320 | Yu. A. Malkov, D. A. Yashunin | cs.DS, cs.CV, cs.IR, cs.SI | 13 pages, 15 figures | null | cs.DS | 20160330 | 20180814 | [] |
1603.09025 | 11 | During training we estimate the statistics across the minibatch, independently for each timestep. At test time we use estimates obtained by averaging the minibatch estimates over the training set.
# 4 INITIALIZING γ FOR GRADIENT FLOW
Although batch normalization allows for easy control of the pre-activation variance through the γ parameters, common practice is to normalize to unit variance. We suspect that the previous difficulties with recurrent batch normalization reported in Laurent et al. (2016) and Amodei et al. (2015) are largely due to improper initialization of the batch normalization parameters, and γ in particular. In this section we demonstrate the impact of γ on gradient flow. | 1603.09025#11 | Recurrent Batch Normalization | We propose a reparameterization of LSTM that brings the benefits of batch
normalization to recurrent neural networks. Whereas previous works only apply
batch normalization to the input-to-hidden transformation of RNNs, we
demonstrate that it is both possible and beneficial to batch-normalize the
hidden-to-hidden transition, thereby reducing internal covariate shift between
time steps. We evaluate our proposal on various sequential problems such as
sequence classification, language modeling and question answering. Our
empirical results show that our batch-normalized LSTM consistently leads to
faster convergence and improved generalization. | http://arxiv.org/pdf/1603.09025 | Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville | cs.LG | null | null | cs.LG | 20160330 | 20170228 | [
{
"id": "1609.01704"
},
{
"id": "1609.09106"
},
{
"id": "1602.08210"
},
{
"id": "1504.00941"
},
{
"id": "1606.01305"
},
{
"id": "1511.06464"
},
{
"id": "1604.03640"
},
{
"id": "1512.02595"
},
{
"id": "1607.06450"
},
{
"id": "1502.03044"
}
] |
1603.09320 | 11 | The reason for the polylogarithmic complexity scaling of a single greedy search in NSW is that the overall number of distance computations is roughly proportional to a product of the average number of greedy algorithm hops by the average degree of the nodes on the greedy path. The average number of hops scales logarithmically [26, 44], while the average degree of the nodes on the greedy path also scales logarithmically due to the facts that: 1) the greedy search tends to go through the same hubs as the network grows [32, 44]; 2) the average number of hub connections grows logarithmically with an increase of the network size. Thus we get an overall polylogarithmic dependence of the resulting complexity.
The idea of the Hierarchical NSW algorithm is to separate the links according to their length scale into different layers and then search in a multilayer graph. In this case we can evaluate only a needed fixed portion of the connections for each element independently of the network's size, thus allowing a logarithmic scalability (a sketch of this layered descent follows this entry). In such a structure the search starts from the upper layer which has only | 1603.09320#11 | Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs | We present a new approach for the approximate K-nearest neighbor search based
on navigable small world graphs with controllable hierarchy (Hierarchical NSW,
HNSW). The proposed solution is fully graph-based, without any need for
additional search structures, which are typically used at the coarse search
stage of most proximity graph techniques. Hierarchical NSW incrementally
builds a multi-layer structure consisting of a hierarchical set of proximity
graphs (layers) for nested subsets of the stored elements. The maximum layer in
which an element is present is selected randomly with an exponentially decaying
probability distribution. This allows producing graphs similar to the
previously studied Navigable Small World (NSW) structures while additionally
having the links separated by their characteristic distance scales. Starting
search from the upper layer together with utilizing the scale separation boosts
the performance compared to NSW and allows a logarithmic complexity scaling.
Additional employment of a heuristic for selecting proximity graph neighbors
significantly increases performance at high recall and in case of highly
clustered data. Performance evaluation has demonstrated that the proposed
general metric space search index is able to strongly outperform previous
open-source state-of-the-art vector-only approaches. Similarity of the algorithm
to the skip list structure allows straightforward balanced distributed
implementation. | http://arxiv.org/pdf/1603.09320 | Yu. A. Malkov, D. A. Yashunin | cs.DS, cs.CV, cs.IR, cs.SI | 13 pages, 15 figures | null | cs.DS | 20160330 | 20180814 | [] |
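A sketch of the coarse-to-fine descent just described; `search_layer` (a greedy search restricted to a single layer) is an assumed helper, and the widening of the beam at the bottom layer is elided.

```python
def hnsw_descend(query, enter_point, top_layer, search_layer):
    # Zoom-in through the upper layers: each layer is searched greedily
    # (ef=1) and the local minimum becomes the entry point of the next,
    # finer layer, so only a fixed number of links is evaluated per layer.
    ep = enter_point
    for layer in range(top_layer, 0, -1):  # layers L .. 1
        ep = search_layer(query, ep, ef=1, layer=layer)[0]
    return ep  # entry point for the wider search at layer 0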
1603.09025 | 12 | [Figure 1, two panels: "RNN gradient propagation" (gradient norm vs. time step t for γ between 0.10 and 1.00) and "derivative through tanh" (expected derivative vs. input standard deviation).]
(a) We visualize the gradient flow through a batch-normalized tanh RNN as a function of γ. High variance causes vanishing gradient. (b) We show the empirical expected derivative and interquartile range of the tanh nonlinearity as a function of input variance. High variance causes saturation, which decreases the expected derivative.
Figure 1: Influence of pre-activation variance on gradient propagation. | 1603.09025#12 | Recurrent Batch Normalization | We propose a reparameterization of LSTM that brings the benefits of batch
normalization to recurrent neural networks. Whereas previous works only apply
batch normalization to the input-to-hidden transformation of RNNs, we
demonstrate that it is both possible and beneficial to batch-normalize the
hidden-to-hidden transition, thereby reducing internal covariate shift between
time steps. We evaluate our proposal on various sequential problems such as
sequence classification, language modeling and question answering. Our
empirical results show that our batch-normalized LSTM consistently leads to
faster convergence and improved generalization. | http://arxiv.org/pdf/1603.09025 | Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville | cs.LG | null | null | cs.LG | 20160330 | 20170228 | [
{
"id": "1609.01704"
},
{
"id": "1609.09106"
},
{
"id": "1602.08210"
},
{
"id": "1504.00941"
},
{
"id": "1606.01305"
},
{
"id": "1511.06464"
},
{
"id": "1604.03640"
},
{
"id": "1512.02595"
},
{
"id": "1607.06450"
},
{
"id": "1502.03044"
}
] |
1603.09320 | 12 | Fig. 1. Illustration of the Hierarchical NSW idea. The search starts from an element in the top layer (shown in red); the characteristic radius of the links decreases from layer to layer. Red arrows show the direction of the greedy algorithm from the entry point to the query (shown in green).
the longest links (the "zoom-in" phase). The algorithm greedily traverses through the elements from the upper layer until a local minimum is reached (see Fig. 1 for illustration). After that, the search switches to the lower layer (which has shorter links), restarts from the element which was the local minimum in the previous layer, and the process repeats. The maximum number of connections per element in all layers can be made constant, thus allowing a logarithmic complexity scaling of routing in a navigable small world network. | 1603.09320#12 | Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs | We present a new approach for the approximate K-nearest neighbor search based
on navigable small world graphs with controllable hierarchy (Hierarchical NSW,
HNSW). The proposed solution is fully graph-based, without any need for
additional search structures, which are typically used at the coarse search
stage of most proximity graph techniques. Hierarchical NSW incrementally
builds a multi-layer structure consisting of a hierarchical set of proximity
graphs (layers) for nested subsets of the stored elements. The maximum layer in
which an element is present is selected randomly with an exponentially decaying
probability distribution. This allows producing graphs similar to the
previously studied Navigable Small World (NSW) structures while additionally
having the links separated by their characteristic distance scales. Starting
search from the upper layer together with utilizing the scale separation boosts
the performance compared to NSW and allows a logarithmic complexity scaling.
Additional employment of a heuristic for selecting proximity graph neighbors
significantly increases performance at high recall and in case of highly
clustered data. Performance evaluation has demonstrated that the proposed
general metric space search index is able to strongly outperform previous
open-source state-of-the-art vector-only approaches. Similarity of the algorithm
to the skip list structure allows straightforward balanced distributed
implementation. | http://arxiv.org/pdf/1603.09320 | Yu. A. Malkov, D. A. Yashunin | cs.DS, cs.CV, cs.IR, cs.SI | 13 pages, 15 figures | null | cs.DS | 20160330 | 20180814 | [] |
1603.09025 | 13 | Figure 1: Influence of pre-activation variance on gradient propagation.
In Figure 1(a), we show how the pre-activation variance impacts gradient propagation in a simple RNN on the sequential MNIST task described in Section 5.1. Since backpropagation operates in reverse, the plot is best read from right to left. The quantity plotted is the norm of the gradient of the loss with respect to the hidden state at different time steps. For large values of γ, the norm quickly goes to zero as gradient is propagated back in time. For small values of γ the norm is nearly constant.
To demonstrate what we think is the cause of this vanishing, we drew samples x from a set of centered Gaussian distributions with standard deviation ranging from 0 to 1, and computed the derivative tanh'(x) = 1 - tanh^2(x) ∈ [0, 1] for each. Figure 1(b) shows the empirical distribution of the derivative as a function of standard deviation. When the input standard deviation is low, the input tends to be close to the origin where the derivative is close to 1. As the standard deviation increases, the expected derivative decreases as the input is more likely to be in the saturation regime. At unit standard deviation, the expected derivative is much smaller than 1 (a small simulation of this effect follows this entry). | 1603.09025#13 | Recurrent Batch Normalization | We propose a reparameterization of LSTM that brings the benefits of batch
normalization to recurrent neural networks. Whereas previous works only apply
batch normalization to the input-to-hidden transformation of RNNs, we
demonstrate that it is both possible and beneficial to batch-normalize the
hidden-to-hidden transition, thereby reducing internal covariate shift between
time steps. We evaluate our proposal on various sequential problems such as
sequence classification, language modeling and question answering. Our
empirical results show that our batch-normalized LSTM consistently leads to
faster convergence and improved generalization. | http://arxiv.org/pdf/1603.09025 | Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville | cs.LG | null | null | cs.LG | 20160330 | 20170228 | [
{
"id": "1609.01704"
},
{
"id": "1609.09106"
},
{
"id": "1602.08210"
},
{
"id": "1504.00941"
},
{
"id": "1606.01305"
},
{
"id": "1511.06464"
},
{
"id": "1604.03640"
},
{
"id": "1512.02595"
},
{
"id": "1607.06450"
},
{
"id": "1502.03044"
}
] |
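A small simulation in the spirit of this argument (an illustrative sketch, not the authors' script): the expected derivative of tanh shrinks as the pre-activation standard deviation grows.

```python
import numpy as np

rng = np.random.default_rng(0)
for std in (0.1, 0.25, 0.5, 1.0):
    x = rng.normal(0.0, std, size=100_000)
    d = 1.0 - np.tanh(x) ** 2  # tanh'(x) = 1 - tanh(x)^2
    print(f"std={std:.2f}  E[tanh'(x)] ~= {d.mean():.3f}")
```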
1603.09320 | 13 | One way to form such a layered structure is to explicitly set links with different length scales by introducing layers. For every element we select an integer level l which defines the maximum layer to which the element belongs. For all elements in a layer a proximity graph (i.e. a graph containing only "short" links that approximate the Delaunay graph) is built incrementally. If we set an exponentially decaying probability of l (i.e. following a geometric distribution) we get a logarithmic scaling of the expected number of layers in the structure (a sketch of this level sampling follows this entry). The search procedure is an iterative greedy search starting from the top layer and finishing at the zero layer.
In case we merge connections from all layers, the structure becomes similar to the NSW graph (in this case l can be put in correspondence with the node degree in NSW). In contrast to NSW, the Hierarchical NSW construction algorithm does not require the elements to be shuffled before the insertion: the stochasticity is achieved by using level randomization, thus allowing truly incremental indexing even in case of a temporarily altering data distribution (though changing the order of the insertion slightly alters the performance due to the only partially deterministic construction procedure). | 1603.09320#13 | Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs | We present a new approach for the approximate K-nearest neighbor search based
on navigable small world graphs with controllable hierarchy (Hierarchical NSW,
HNSW). The proposed solution is fully graph-based, without any need for
additional search structures, which are typically used at the coarse search
stage of most proximity graph techniques. Hierarchical NSW incrementally
builds a multi-layer structure consisting of a hierarchical set of proximity
graphs (layers) for nested subsets of the stored elements. The maximum layer in
which an element is present is selected randomly with an exponentially decaying
probability distribution. This allows producing graphs similar to the
previously studied Navigable Small World (NSW) structures while additionally
having the links separated by their characteristic distance scales. Starting
search from the upper layer together with utilizing the scale separation boosts
the performance compared to NSW and allows a logarithmic complexity scaling.
Additional employment of a heuristic for selecting proximity graph neighbors
significantly increases performance at high recall and in case of highly
clustered data. Performance evaluation has demonstrated that the proposed
general metric space search index is able to strongly outperform previous
open-source state-of-the-art vector-only approaches. Similarity of the algorithm
to the skip list structure allows straightforward balanced distributed
implementation. | http://arxiv.org/pdf/1603.09320 | Yu. A. Malkov, D. A. Yashunin | cs.DS, cs.CV, cs.IR, cs.SI | 13 pages, 15 figures | null | cs.DS | 20160330 | 20180814 | [] |
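Level selection as described above amounts to flooring an exponentially distributed value; a minimal sketch (the function name is illustrative):

```python
import math
import random

def random_level(m_L):
    # l = floor(-ln(unif(0,1)) * m_L): a geometric distribution over layers,
    # i.e. an exponentially decaying probability of reaching higher layers.
    u = 1.0 - random.random()  # in (0, 1], avoids log(0)
    return int(-math.log(u) * m_L)
```

With the paper's suggested normalization m_L = 1/ln(M), roughly a fraction 1/M of the elements is promoted to each successive layer.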
1603.09025 | 14 | We conjecture that this is what causes the gradient to vanish, and recommend initializing γ to a small value. In our trials we found that values of 0.01 or lower caused instabilities during training. Our choice of 0.1 seems to work well across different tasks.
# 5 EXPERIMENTS
This section presents an empirical evaluation of the proposed batch-normalized LSTM on four different tasks. Note that for all the experiments, we initialize the batch normalization scale and shift parameters γ and β to 0.1 and 0 respectively.
[Figure 2, two panels: "Pixel-by-Pixel MNIST (Validation Set)" and "Pixel-by-Pixel Permuted-MNIST (Validation Set)"; validation accuracy curves for lstm and bn_lstm vs. Training Iteration.]
Figure 2: Accuracy on the validation set for the pixel-by-pixel MNIST classification tasks. The batch-normalized LSTM converges faster relative to a baseline LSTM. Batch-normalized LSTM also shows improved generalization on permuted sequential MNIST, which requires preserving long-term memory information.
5.1 SEQUENTIAL MNIST | 1603.09025#14 | Recurrent Batch Normalization | We propose a reparameterization of LSTM that brings the benefits of batch
normalization to recurrent neural networks. Whereas previous works only apply
batch normalization to the input-to-hidden transformation of RNNs, we
demonstrate that it is both possible and beneficial to batch-normalize the
hidden-to-hidden transition, thereby reducing internal covariate shift between
time steps. We evaluate our proposal on various sequential problems such as
sequence classification, language modeling and question answering. Our
empirical results show that our batch-normalized LSTM consistently leads to
faster convergence and improved generalization. | http://arxiv.org/pdf/1603.09025 | Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville | cs.LG | null | null | cs.LG | 20160330 | 20170228 | [
{
"id": "1609.01704"
},
{
"id": "1609.09106"
},
{
"id": "1602.08210"
},
{
"id": "1504.00941"
},
{
"id": "1606.01305"
},
{
"id": "1511.06464"
},
{
"id": "1604.03640"
},
{
"id": "1512.02595"
},
{
"id": "1607.06450"
},
{
"id": "1502.03044"
}
] |
1603.09320 | 14 | The Hierarchical NSW idea is also very similar to the well-known 1D probabilistic skip list structure [27] and can be described using its terms. The major difference to the skip list is that we generalize the structure by replacing the linked list with proximity graphs.
Fig. 2. Illustration of the heuristic used to select the graph neighbors for two isolated clusters. A new element is inserted on the boundary of Cluster 1. All of the closest neighbors of the element belong to Cluster 1, thus missing the edges of the Delaunay graph between the clusters. The heuristic, however, selects element e2 from Cluster 2, thus maintaining the global connectivity in case the inserted element is the closest to e2 compared to any other element from Cluster 1.
The Hierarchical NSW approach thus can utilize the same methods for making the distributed approximate search/overlay structures [45]. | 1603.09320#14 | Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs | We present a new approach for the approximate K-nearest neighbor search based
on navigable small world graphs with controllable hierarchy (Hierarchical NSW,
HNSW). The proposed solution is fully graph-based, without any need for
additional search structures, which are typically used at the coarse search
stage of most proximity graph techniques. Hierarchical NSW incrementally
builds a multi-layer structure consisting of a hierarchical set of proximity
graphs (layers) for nested subsets of the stored elements. The maximum layer in
which an element is present is selected randomly with an exponentially decaying
probability distribution. This allows producing graphs similar to the
previously studied Navigable Small World (NSW) structures while additionally
having the links separated by their characteristic distance scales. Starting
search from the upper layer together with utilizing the scale separation boosts
the performance compared to NSW and allows a logarithmic complexity scaling.
Additional employment of a heuristic for selecting proximity graph neighbors
significantly increases performance at high recall and in case of highly
clustered data. Performance evaluation has demonstrated that the proposed
general metric space search index is able to strongly outperform previous
open-source state-of-the-art vector-only approaches. Similarity of the algorithm
to the skip list structure allows straightforward balanced distributed
implementation. | http://arxiv.org/pdf/1603.09320 | Yu. A. Malkov, D. A. Yashunin | cs.DS, cs.CV, cs.IR, cs.SI | 13 pages, 15 figures | null | cs.DS | 20160330 | 20180814 | [] |
1603.09025 | 15 | 5.1 SEQUENTIAL MNIST
We evaluate our batch-normalized LSTM on a sequential version of the MNIST classification task (Le et al., 2015). The model processes each image one pixel at a time and finally predicts the label. We consider both sequential MNIST tasks, MNIST and permuted MNIST (pMNIST). In MNIST, the pixels are processed in scanline order. In pMNIST the pixels are processed in a fixed random order.
Our baseline consists of an LSTM with 100 hidden units, with a softmax classifier to produce a prediction from the final hidden state. We use orthogonal initialization for all weight matrices, except for the hidden-to-hidden weight matrix, which we initialize to be the identity matrix, as this yields better generalization performance on this task for both models. The model is trained using RMSProp (Tieleman & Hinton, 2012) with a learning rate of 10^-3 and 0.9 momentum. We apply gradient clipping at 1 to avoid exploding gradients. | 1603.09025#15 | Recurrent Batch Normalization | We propose a reparameterization of LSTM that brings the benefits of batch
normalization to recurrent neural networks. Whereas previous works only apply
batch normalization to the input-to-hidden transformation of RNNs, we
demonstrate that it is both possible and beneficial to batch-normalize the
hidden-to-hidden transition, thereby reducing internal covariate shift between
time steps. We evaluate our proposal on various sequential problems such as
sequence classification, language modeling and question answering. Our
empirical results show that our batch-normalized LSTM consistently leads to
faster convergence and improved generalization. | http://arxiv.org/pdf/1603.09025 | Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville | cs.LG | null | null | cs.LG | 20160330 | 20170228 | [
{
"id": "1609.01704"
},
{
"id": "1609.09106"
},
{
"id": "1602.08210"
},
{
"id": "1504.00941"
},
{
"id": "1606.01305"
},
{
"id": "1511.06464"
},
{
"id": "1604.03640"
},
{
"id": "1512.02595"
},
{
"id": "1607.06450"
},
{
"id": "1502.03044"
}
] |
1603.09320 | 15 | For the selection of the proximity graph connections during the element insertion we utilize a heuristic that takes into account the distances between the candidate elements to create diverse connections (a similar algorithm was utilized in the spatial approximation tree [4] to select the tree children) instead of just selecting the closest neighbors. The heuristic examines the candidates starting from the nearest (with respect to the inserted element) and creates a connection to a candidate only if it is closer to the base (inserted) element compared to any of the already connected candidates (see Section 4 for the details; a code sketch follows this entry). When the number of candidates is large enough the heuristic allows getting the exact relative neighborhood graph [46] as a subgraph, a minimal subgraph of the Delaunay graph deducible by using only the distances between the nodes. The relative neighborhood graph allows easily keeping the global connected component, even in case of highly clustered data (see Fig. 2 for illustration). Note that the heuristic creates extra edges compared to the exact relative neighborhood graphs, allowing control of the number of the connections, which is important for search performance. For the case of 1D data the heuristic allows getting the exact Delaunay subgraph (which in this case coincides with the relative neighborhood graph) by using only information about the distances between the elements, thus making a direct transition from Hierarchical NSW to the 1D probabilistic skip list algorithm. | 1603.09320#15 | Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs | We present a new approach for the approximate K-nearest neighbor search based
on navigable small world graphs with controllable hierarchy (Hierarchical NSW,
HNSW). The proposed solution is fully graph-based, without any need for
additional search structures, which are typically used at the coarse search
stage of most proximity graph techniques. Hierarchical NSW incrementally
builds a multi-layer structure consisting of a hierarchical set of proximity
graphs (layers) for nested subsets of the stored elements. The maximum layer in
which an element is present is selected randomly with an exponentially decaying
probability distribution. This allows producing graphs similar to the
previously studied Navigable Small World (NSW) structures while additionally
having the links separated by their characteristic distance scales. Starting
search from the upper layer together with utilizing the scale separation boosts
the performance compared to NSW and allows a logarithmic complexity scaling.
Additional employment of a heuristic for selecting proximity graph neighbors
significantly increases performance at high recall and in case of highly
clustered data. Performance evaluation has demonstrated that the proposed
general metric space search index is able to strongly outperform previous
open-source state-of-the-art vector-only approaches. Similarity of the algorithm
to the skip list structure allows straightforward balanced distributed
implementation. | http://arxiv.org/pdf/1603.09320 | Yu. A. Malkov, D. A. Yashunin | cs.DS, cs.CV, cs.IR, cs.SI | 13 pages, 15 figures | null | cs.DS | 20160330 | 20180814 | [] |
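The core of the neighbor-selection heuristic, sketched in code (a simplified rendering of the idea, without the extensions of the paper's full algorithm; `dist` and the container types are assumptions of this example):

```python
def select_neighbors_heuristic(base, candidates, M, dist):
    # Examine candidates from nearest to farthest and keep one only if it is
    # closer to the inserted element than to every neighbor kept so far.
    # This yields diverse, relative-neighborhood-graph-like edges that keep
    # isolated clusters connected.
    selected = []
    for c in sorted(candidates, key=lambda e: dist(base, e)):
        if len(selected) >= M:
            break
        if all(dist(base, c) < dist(c, s) for s in selected):
            selected.append(c)
    return selected
```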
1603.09025 | 16 | The in-order MNIST task poses a unique problem for our model: the input for the first hundred or so timesteps is constant across examples since the upper pixels are almost always black. This causes the variance of the hidden states to be exactly zero for a long period of time. Normalizing these zero-variance activations involves dividing zero by a small number at many timesteps, which does not affect the forward-propagated activations but causes the back-propagated gradient to explode. We work around this by adding Gaussian noise to the initial hidden states. Although the normalization amplifies the noise to signal level, we find that it does not hurt performance compared to data-dependent ways of initializing the hidden states.
Model                              MNIST   pMNIST
TANH-RNN (Le et al., 2015)          35.0     35.0
iRNN (Le et al., 2015)              97.0     82.0
uRNN (Arjovsky et al., 2015)        95.1     91.4
sTANH-RNN (Zhang et al., 2016)      98.1     94.0
LSTM (ours)                         98.9     90.2
BN-LSTM (ours)                      99.0     95.4
Table 1: Accuracy obtained on the test set for the pixel-by-pixel MNIST classification tasks | 1603.09025#16 | Recurrent Batch Normalization | We propose a reparameterization of LSTM that brings the benefits of batch
normalization to recurrent neural networks. Whereas previous works only apply
batch normalization to the input-to-hidden transformation of RNNs, we
demonstrate that it is both possible and beneficial to batch-normalize the
hidden-to-hidden transition, thereby reducing internal covariate shift between
time steps. We evaluate our proposal on various sequential problems such as
sequence classification, language modeling and question answering. Our
empirical results show that our batch-normalized LSTM consistently leads to
faster convergence and improved generalization. | http://arxiv.org/pdf/1603.09025 | Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville | cs.LG | null | null | cs.LG | 20160330 | 20170228 | [
{
"id": "1609.01704"
},
{
"id": "1609.09106"
},
{
"id": "1602.08210"
},
{
"id": "1504.00941"
},
{
"id": "1606.01305"
},
{
"id": "1511.06464"
},
{
"id": "1604.03640"
},
{
"id": "1512.02595"
},
{
"id": "1607.06450"
},
{
"id": "1502.03044"
}
] |
1603.09025 | 17 | Table 1: Accuracy obtained on the test set for the pixel-by-pixel MNIST classification tasks
In Figure 2 we show the validation accuracy during training for both LSTM and batch-normalized LSTM (BN-LSTM). BN-LSTM converges faster than LSTM on both tasks. Additionally, we observe that BN-LSTM generalizes significantly better on pMNIST. It has been highlighted in Arjovsky et al. (2015) that pMNIST contains many longer-term dependencies across pixels than the original pixel ordering, where a lot of structure is local. A recurrent network therefore needs to
Model                                              Penn Treebank
LSTM (Graves, 2013)                                1.262
HF-MRNN (Mikolov et al., 2012)                     1.41
Norm-stabilized LSTM (Krueger & Memisevic, 2016)   1.39
ME n-gram (Mikolov et al., 2012)                   1.37
LSTM (ours)                                        1.38
BN-LSTM (ours)                                     1.32
Zoneout (Krueger et al., 2016)                     1.27
HM-LSTM (Chung et al., 2016)                       1.24
HyperNetworks (Ha et al., 2016)                    1.22 | 1603.09025#17 | Recurrent Batch Normalization | We propose a reparameterization of LSTM that brings the benefits of batch
normalization to recurrent neural networks. Whereas previous works only apply
batch normalization to the input-to-hidden transformation of RNNs, we
demonstrate that it is both possible and beneficial to batch-normalize the
hidden-to-hidden transition, thereby reducing internal covariate shift between
time steps. We evaluate our proposal on various sequential problems such as
sequence classification, language modeling and question answering. Our
empirical results show that our batch-normalized LSTM consistently leads to
faster convergence and improved generalization. | http://arxiv.org/pdf/1603.09025 | Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville | cs.LG | null | null | cs.LG | 20160330 | 20170228 | [
{
"id": "1609.01704"
},
{
"id": "1609.09106"
},
{
"id": "1602.08210"
},
{
"id": "1504.00941"
},
{
"id": "1606.01305"
},
{
"id": "1511.06464"
},
{
"id": "1604.03640"
},
{
"id": "1512.02595"
},
{
"id": "1607.06450"
},
{
"id": "1502.03044"
}
] |
1603.09320 | 17 | Algorithm 1 INSERT(hnsw, q, M, Mmax, efConstruction, mL)
Input: multilayer graph hnsw, new element q, number of established connections M, maximum number of connections for each element per layer Mmax, size of the dynamic candidate list efConstruction, normalization factor for level generation mL
Output: update hnsw inserting element q
1   W ← ∅                         // list for the currently found nearest elements
2   ep ← get enter point for hnsw
3   L ← level of ep               // top layer for hnsw
4   l ← ⌊-ln(unif(0..1)) · mL⌋    // new element's level
5   for lc ← L ... l+1
6       W ← SEARCH-LAYER(q, ep, ef=1, lc)
7       ep ← get the nearest element from W to q
8   for lc ← min(L, l) ... 0
9       W ← SEARCH-LAYER(q, ep, efConstruction, lc)
10      neighbors ← SELECT-NEIGHBORS(q, W, M, lc)               // alg. 3 or alg. 4
11      add bidirectional connections from neighbors to q at layer lc
12      for each e ∈ neighbors                                  // shrink connections if needed
13          eConn ← neighbourhood(e) at layer lc
14          if |eConn| > Mmax                                   // shrink connections of e; if lc = 0 then Mmax = Mmax0
15              eNewConn ← SELECT-NEIGHBORS(e, eConn, Mmax, lc) // alg. 3 or alg. 4
16              set neighbourhood(e) at layer lc to eNewConn
17      ep ← W
18  if l > L
19      set enter point for hnsw to q | 1603.09320#17 | Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs | We present a new approach for the approximate K-nearest neighbor search based
on navigable small world graphs with controllable hierarchy (Hierarchical NSW,
HNSW). The proposed solution is fully graph-based, without any need for
additional search structures, which are typically used at the coarse search
stage of most proximity graph techniques. Hierarchical NSW incrementally
builds a multi-layer structure consisting of a hierarchical set of proximity
graphs (layers) for nested subsets of the stored elements. The maximum layer in
which an element is present is selected randomly with an exponentially decaying
probability distribution. This allows producing graphs similar to the
previously studied Navigable Small World (NSW) structures while additionally
having the links separated by their characteristic distance scales. Starting
search from the upper layer together with utilizing the scale separation boosts
the performance compared to NSW and allows a logarithmic complexity scaling.
Additional employment of a heuristic for selecting proximity graph neighbors
significantly increases performance at high recall and in case of highly
clustered data. Performance evaluation has demonstrated that the proposed
general metric space search index is able to strongly outperform previous
open-source state-of-the-art vector-only approaches. Similarity of the algorithm
to the skip list structure allows straightforward balanced distributed
implementation. | http://arxiv.org/pdf/1603.09320 | Yu. A. Malkov, D. A. Yashunin | cs.DS, cs.CV, cs.IR, cs.SI | 13 pages, 15 figures | null | cs.DS | 20160330 | 20180814 | [] |
1603.09025 | 18 | Table 2: Bits-per-character on the Penn Treebank test sequence.
characterize dependencies across varying time scales in order to solve this task. Our results suggest that BN-LSTM is better able to capture these long-term dependencies.
Table 1 reports the test set accuracy of the early-stopped model for LSTM and BN-LSTM using the population statistics. Recurrent batch normalization leads to a better test score, especially for pMNIST, where models have to leverage long-term temporal dependencies. In addition, Table 1 shows that our batch-normalized LSTM achieves state of the art on both MNIST and pMNIST.
5.2 CHARACTER-LEVEL PENN TREEBANK
We evaluate our model on the task of character-level language modeling on the Penn Treebank corpus (Marcus et al., 1993) according to the train/valid/test partition of Mikolov et al. (2012). For training, we segment the training sequence into examples of length 100. The training sequence does not cleanly divide by 100, so for each epoch we randomly crop a subsequence that does and segment that instead. | 1603.09025#18 | Recurrent Batch Normalization | We propose a reparameterization of LSTM that brings the benefits of batch
normalization to recurrent neural networks. Whereas previous works only apply
batch normalization to the input-to-hidden transformation of RNNs, we
demonstrate that it is both possible and beneficial to batch-normalize the
hidden-to-hidden transition, thereby reducing internal covariate shift between
time steps. We evaluate our proposal on various sequential problems such as
sequence classification, language modeling and question answering. Our
empirical results show that our batch-normalized LSTM consistently leads to
faster convergence and improved generalization. | http://arxiv.org/pdf/1603.09025 | Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville | cs.LG | null | null | cs.LG | 20160330 | 20170228 | [
{
"id": "1609.01704"
},
{
"id": "1609.09106"
},
{
"id": "1602.08210"
},
{
"id": "1504.00941"
},
{
"id": "1606.01305"
},
{
"id": "1511.06464"
},
{
"id": "1604.03640"
},
{
"id": "1512.02595"
},
{
"id": "1607.06450"
},
{
"id": "1502.03044"
}
] |
1603.09320 | 18 | alg. 4 11 add bidirectionall connectionts from neighbors to q at layer lc 12 for each e â neighbors // shrink connections if needed 13 eConn â neighbourhood(e) at layer lc 14 if âeConnâ > Mmax // shrink connections of e // if lc = 0 then Mmax = Mmax0 15 eNewConn â SELECT-NEIGHBORS(e, eConn, Mmax, lc) // alg. 3 or alg. 4 16 set neighbourhood(e) at layer lc to eNewConn 17 ep â W 18 if l > L 19 set enter point for hnsw to q | 1603.09320#18 | Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs | We present a new approach for the approximate K-nearest neighbor search based
on navigable small world graphs with controllable hierarchy (Hierarchical NSW,
HNSW). The proposed solution is fully graph-based, without any need for
additional search structures, which are typically used at the coarse search
stage of most proximity graph techniques. Hierarchical NSW incrementally
builds a multi-layer structure consisting of a hierarchical set of proximity
graphs (layers) for nested subsets of the stored elements. The maximum layer in
which an element is present is selected randomly with an exponentially decaying
probability distribution. This allows producing graphs similar to the
previously studied Navigable Small World (NSW) structures while additionally
having the links separated by their characteristic distance scales. Starting
search from the upper layer together with utilizing the scale separation boosts
the performance compared to NSW and allows a logarithmic complexity scaling.
Additional employment of a heuristic for selecting proximity graph neighbors
significantly increases performance at high recall and in case of highly
clustered data. Performance evaluation has demonstrated that the proposed
general metric space search index is able to strongly outperform previous
open-source state-of-the-art vector-only approaches. Similarity of the algorithm
to the skip list structure allows straightforward balanced distributed
implementation. | http://arxiv.org/pdf/1603.09320 | Yu. A. Malkov, D. A. Yashunin | cs.DS, cs.CV, cs.IR, cs.SI | 13 pages, 15 figures | null | cs.DS | 20160330 | 20180814 | [] |
Our baseline is an LSTM with 1000 units, trained to predict the next character using a softmax classifier on the hidden state ht. We use stochastic gradient descent on minibatches of size 64, with gradient clipping at 1.0 and step rule determined by Adam (Kingma & Ba, 2014) with learning rate 0.002. We use orthogonal initialization for all weight matrices. The setup for the batch-normalized LSTM is the same in all respects except for the introduction of batch normalization as detailed in Section 3.
We show the learning curves in Figure 3(a). BN-LSTM converges faster and generalizes better than the LSTM baseline. Figure 3(b) shows the generalization of our model to longer sequences. We observe that using the population statistics improves generalization performance, which confirms that repeating the last population statistic (cf. Section 3) is a viable strategy. In Table 2 we report the performance of our best models (early-stopped on validation performance) on the Penn Treebank test sequence. Follow-up works have since improved the state of the art (Krueger et al., 2016; Chung et al., 2016; Ha et al., 2016).
# 5.3 TEXT8 | 1603.09025#19 | Recurrent Batch Normalization | We propose a reparameterization of LSTM that brings the benefits of batch
normalization to recurrent neural networks. Whereas previous works only apply
batch normalization to the input-to-hidden transformation of RNNs, we
demonstrate that it is both possible and beneficial to batch-normalize the
hidden-to-hidden transition, thereby reducing internal covariate shift between
time steps. We evaluate our proposal on various sequential problems such as
sequence classification, language modeling and question answering. Our
empirical results show that our batch-normalized LSTM consistently leads to
faster convergence and improved generalization. | http://arxiv.org/pdf/1603.09025 | Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville | cs.LG | null | null | cs.LG | 20160330 | 20170228 | [
{
"id": "1609.01704"
},
{
"id": "1609.09106"
},
{
"id": "1602.08210"
},
{
"id": "1504.00941"
},
{
"id": "1606.01305"
},
{
"id": "1511.06464"
},
{
"id": "1604.03640"
},
{
"id": "1512.02595"
},
{
"id": "1607.06450"
},
{
"id": "1502.03044"
}
] |
1603.09320 | 19 | (published shortly after the first versions of the current manuscript were posted online) with a slightly different interpretation, based on the sparse neighborhood graph's property of the exact routing [18].
# 4 ALGORITHM DESCRIPTION
Network construction algorithm (alg. 1) is organized via consecutive insertions of the stored elements into the graph structure. For every inserted element an integer maximum layer l is randomly selected with an exponentially decaying probability distribution (normalized by the mL parameter, see line 4 in alg. 1). | 1603.09320#19 | Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs | We present a new approach for the approximate K-nearest neighbor search based
on navigable small world graphs with controllable hierarchy (Hierarchical NSW,
HNSW). The proposed solution is fully graph-based, without any need for
additional search structures, which are typically used at the coarse search
stage of most proximity graph techniques. Hierarchical NSW incrementally
builds a multi-layer structure consisting of a hierarchical set of proximity
graphs (layers) for nested subsets of the stored elements. The maximum layer in
which an element is present is selected randomly with an exponentially decaying
probability distribution. This allows producing graphs similar to the
previously studied Navigable Small World (NSW) structures while additionally
having the links separated by their characteristic distance scales. Starting
search from the upper layer together with utilizing the scale separation boosts
the performance compared to NSW and allows a logarithmic complexity scaling.
Additional employment of a heuristic for selecting proximity graph neighbors
significantly increases performance at high recall and in case of highly
clustered data. Performance evaluation has demonstrated that the proposed
general metric space search index is able to strongly outperform previous
open-source state-of-the-art vector-only approaches. Similarity of the algorithm
to the skip list structure allows straightforward balanced distributed
implementation. | http://arxiv.org/pdf/1603.09320 | Yu. A. Malkov, D. A. Yashunin | cs.DS, cs.CV, cs.IR, cs.SI | 13 pages, 15 figures | null | cs.DS | 20160330 | 20180814 | [] |
1603.09025 | 20 | # 5.3 TEXT8
We evaluate our model on a second character-level language modeling task on the much larger text8 dataset (Mahoney, 2009). This dataset is derived from Wikipedia and consists of a sequence of 100M characters including only alphabetical characters and spaces. We follow Mikolov et al. (2012); Zhang et al. (2016) and use the first 90M characters for training, the next 5M for validation and the final 5M characters for testing. We train on nonoverlapping sequences of length 180.
Both our baseline and batch-normalized models are LSTMs with 2000 units, trained to predict the next character using a softmax classifier on the hidden state ht. We use stochastic gradient descent on minibatches of size 128, with gradient clipping at 1.0 and step rule determined by Adam (Kingma & Ba, 2014) with learning rate 0.001. All weight matrices were initialized to be orthogonal.
We early-stop on validation performance and report the test performance of the resulting model in Table 3. We observe that BN-LSTM obtains a significant performance improvement over the LSTM baseline. Chung et al. (2016) have since improved on our performance. | 1603.09025#20 | Recurrent Batch Normalization | We propose a reparameterization of LSTM that brings the benefits of batch
normalization to recurrent neural networks. Whereas previous works only apply
batch normalization to the input-to-hidden transformation of RNNs, we
demonstrate that it is both possible and beneficial to batch-normalize the
hidden-to-hidden transition, thereby reducing internal covariate shift between
time steps. We evaluate our proposal on various sequential problems such as
sequence classification, language modeling and question answering. Our
empirical results show that our batch-normalized LSTM consistently leads to
faster convergence and improved generalization. | http://arxiv.org/pdf/1603.09025 | Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville | cs.LG | null | null | cs.LG | 20160330 | 20170228 | [
{
"id": "1609.01704"
},
{
"id": "1609.09106"
},
{
"id": "1602.08210"
},
{
"id": "1504.00941"
},
{
"id": "1606.01305"
},
{
"id": "1511.06464"
},
{
"id": "1604.03640"
},
{
"id": "1512.02595"
},
{
"id": "1607.06450"
},
{
"id": "1502.03044"
}
] |
1603.09320 | 20 | The first phase of the insertion process starts from the top layer by greedily traversing the graph in order to find the ef closest neighbors to the inserted element q in the layer. After that, the algorithm continues the search from the next layer using the found closest neighbors from the previous layer as enter points, and the process repeats. Closest neighbors at each layer are found by a variant of the greedy search algorithm described in alg. 2, which is an updated version of the algorithm from [26]. To obtain the approximate ef nearest neighbors in some layer lc, a dynamic list W of the ef closest found elements (initially filled with enter points) is kept during the search. The list is updated at each step by evaluating the neighborhood of the closest previously non-evaluated element in the list until the neighborhood of every element from the list is evaluated. Compared to limiting the number of distance calculations, the Hierarchical NSW stop condition has an advantage: it allows discarding candidates for evaluation that are further from the query than the furthest element in the list, thus avoiding bloating of search structures. As in NSW, the list is emulated via two priority queues for better performance (a code sketch follows this entry). The distinctions from NSW (along with some queue optimizations) are: 1) the enter point is a fixed parameter; 2) instead of changing the number of multi-searches, the quality of the search is controlled by a different parameter ef (which was set to K in NSW [26]). | 1603.09320#20 | Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs | We present a new approach for the approximate K-nearest neighbor search based
on navigable small world graphs with controllable hierarchy (Hierarchical NSW,
HNSW). The proposed solution is fully graph-based, without any need for
additional search structures, which are typically used at the coarse search
stage of most proximity graph techniques. Hierarchical NSW incrementally
builds a multi-layer structure consisting of a hierarchical set of proximity
graphs (layers) for nested subsets of the stored elements. The maximum layer in
which an element is present is selected randomly with an exponentially decaying
probability distribution. This allows producing graphs similar to the
previously studied Navigable Small World (NSW) structures while additionally
having the links separated by their characteristic distance scales. Starting
search from the upper layer together with utilizing the scale separation boosts
the performance compared to NSW and allows a logarithmic complexity scaling.
Additional employment of a heuristic for selecting proximity graph neighbors
significantly increases performance at high recall and in case of highly
clustered data. Performance evaluation has demonstrated that the proposed
general metric space search index is able to strongly outperform previous
open-source state-of-the-art vector-only approaches. Similarity of the algorithm
to the skip list structure allows straightforward balanced distributed
implementation. | http://arxiv.org/pdf/1603.09320 | Yu. A. Malkov, D. A. Yashunin | cs.DS, cs.CV, cs.IR, cs.SI | 13 pages, 15 figures | null | cs.DS | 20160330 | 20180814 | [] |
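A sketch of the layer search with the two priority queues mentioned above (a min-heap of candidates to expand and a max-heap, emulated by negation, of the ef best results); `neighbors` and `dist` are assumed helpers, and elements are assumed to be comparable ids so heap ties resolve cleanly.

```python
import heapq

def search_layer(query, enter_points, ef, neighbors, dist):
    visited = set(enter_points)
    candidates = [(dist(query, e), e) for e in enter_points]  # min-heap
    results = [(-d, e) for d, e in candidates]                # max-heap via negation
    heapq.heapify(candidates)
    heapq.heapify(results)
    while candidates:
        d_c, c = heapq.heappop(candidates)
        if d_c > -results[0][0]:
            break  # closest candidate is farther than the worst result: stop
        for e in neighbors(c):
            if e in visited:
                continue
            visited.add(e)
            d_e = dist(query, e)
            if len(results) < ef or d_e < -results[0][0]:
                heapq.heappush(candidates, (d_e, e))
                heapq.heappush(results, (-d_e, e))
                if len(results) > ef:
                    heapq.heappop(results)  # discard the current worst
    return sorted((-d, e) for d, e in results)  # (distance, element) pairs
```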
1603.09025 | 21 | Model                                    text8
td-LSTM (Zhang et al., 2016)                1.63
HF-MRNN (Mikolov et al., 2012)              1.54
skipping RNN (Pachitariu & Sahani, 2013)    1.48
LSTM (ours)                                 1.43
BN-LSTM (ours)                              1.36
HM-LSTM (Chung et al., 2016)                1.29
Table 3: Bits-per-character on the text8 test sequence.
5.4 TEACHING MACHINES TO READ AND COMPREHEND
Recently, Hermann et al. (2015) introduced a set of challenging benchmarks for natural language processing, along with neural network architectures to address them. The tasks involve reading real news articles and answering questions about their content. Their principal model, the Attentive Reader, is a recurrent neural network that invokes an attention mechanism to locate relevant information in the document. Such models are notoriously hard to optimize and yet increasingly popular.
To demonstrate the generality and practical applicability of our proposal, we apply batch normalization in the Attentive Reader model and show that this drastically improves training. | 1603.09025#21 | Recurrent Batch Normalization | We propose a reparameterization of LSTM that brings the benefits of batch
normalization to recurrent neural networks. Whereas previous works only apply
batch normalization to the input-to-hidden transformation of RNNs, we
demonstrate that it is both possible and beneficial to batch-normalize the
hidden-to-hidden transition, thereby reducing internal covariate shift between
time steps. We evaluate our proposal on various sequential problems such as
sequence classification, language modeling and question answering. Our
empirical results show that our batch-normalized LSTM consistently leads to
faster convergence and improved generalization. | http://arxiv.org/pdf/1603.09025 | Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville | cs.LG | null | null | cs.LG | 20160330 | 20170228 | [
{
"id": "1609.01704"
},
{
"id": "1609.09106"
},
{
"id": "1602.08210"
},
{
"id": "1504.00941"
},
{
"id": "1606.01305"
},
{
"id": "1511.06464"
},
{
"id": "1604.03640"
},
{
"id": "1512.02595"
},
{
"id": "1607.06450"
},
{
"id": "1502.03044"
}
] |
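As a side note on the metric in the table above, bits-per-character is just the average negative log-likelihood expressed in base 2. A small self-contained conversion; the 0.9427 figure below is an illustrative value, not a number from the paper:

```python
import math

def bits_per_character(avg_nll_nats: float) -> float:
    # Convert an average per-character negative log-likelihood measured
    # in nats into the bits-per-character metric used in Table 3.
    return avg_nll_nats / math.log(2)

print(round(bits_per_character(0.9427), 2))  # ~1.36, the BN-LSTM entry
```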
1603.09320 | 21 | the furthest element in the list, thus avoiding bloating of search structures. As in NSW, the list is emulated via two priority queues for better performance. The distinctions from NSW (along with some queue optimizations) are: 1) the enter point is a fixed parameter; 2) instead of changing the number of multi-searches, the quality of the search is controlled by a different parameter ef (which was set to K in NSW [26]). | 1603.09320#21 | Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs | We present a new approach for the approximate K-nearest neighbor search based
on navigable small world graphs with controllable hierarchy (Hierarchical NSW,
HNSW). The proposed solution is fully graph-based, without any need for
additional search structures, which are typically used at the coarse search
stage of the most proximity graph techniques. Hierarchical NSW incrementally
builds a multi-layer structure consisting from hierarchical set of proximity
graphs (layers) for nested subsets of the stored elements. The maximum layer in
which an element is present is selected randomly with an exponentially decaying
probability distribution. This allows producing graphs similar to the
previously studied Navigable Small World (NSW) structures while additionally
having the links separated by their characteristic distance scales. Starting
search from the upper layer together with utilizing the scale separation boosts
the performance compared to NSW and allows a logarithmic complexity scaling.
Additional employment of a heuristic for selecting proximity graph neighbors
significantly increases performance at high recall and in case of highly
clustered data. Performance evaluation has demonstrated that the proposed
general metric space search index is able to strongly outperform previous
opensource state-of-the-art vector-only approaches. Similarity of the algorithm
to the skip list structure allows straightforward balanced distributed
implementation. | http://arxiv.org/pdf/1603.09320 | Yu. A. Malkov, D. A. Yashunin | cs.DS, cs.CV, cs.IR, cs.SI | 13 pages, 15 figures | null | cs.DS | 20160330 | 20180814 | [] |
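The "two priority queues" detail mentioned above is commonly realized with a min-heap of unexpanded candidates and a max-heap of current results. A minimal sketch of that emulation, with illustrative data rather than anything from the paper:

```python
import heapq

# Min-heap C orders candidates by distance to the query; the "max-heap"
# W stores negated distances so its top is the current furthest result.
C = [(0.12, "a"), (0.50, "b")]
W = [(-0.50, "b"), (-0.12, "a")]
heapq.heapify(C)
heapq.heapify(W)

d_c, c = heapq.heappop(C)   # nearest unexpanded candidate
d_worst = -W[0][0]          # distance of the furthest kept result
if d_c > d_worst:
    pass  # stop condition: no remaining candidate can improve the results
```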
1603.09025 | 22 | To demonstrate the generality and practical applicability of our proposal, we apply batch normalization in the Attentive Reader model and show that this drastically improves training.
We evaluate several variants. The first variant, referred to as BN-LSTM, consists of the vanilla Attentive Reader model with the LSTM simply replaced by our BN-LSTM reparameterization. The second variant, termed BN-everywhere, is exactly like the first, except that we also introduce batch normalization into the attention computations, normalizing each term going into the tanh nonlinearities.
Our third variant, BN-e*, is like BN-everywhere, but improved to more carefully handle variable-length sequences. Throughout this experiment we followed the common practice of padding each batch of variable-length data with zeros. However, this biases the batch mean and variance of xt toward zero. We address this effect using sequencewise normalization of the inputs as proposed by Laurent et al. (2016) and Amodei et al. (2015). That is, we share statistics over time for normalization | 1603.09025#22 | Recurrent Batch Normalization | We propose a reparameterization of LSTM that brings the benefits of batch
normalization to recurrent neural networks. Whereas previous works only apply
batch normalization to the input-to-hidden transformation of RNNs, we
demonstrate that it is both possible and beneficial to batch-normalize the
hidden-to-hidden transition, thereby reducing internal covariate shift between
time steps. We evaluate our proposal on various sequential problems such as
sequence classification, language modeling and question answering. Our
empirical results show that our batch-normalized LSTM consistently leads to
faster convergence and improved generalization. | http://arxiv.org/pdf/1603.09025 | Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville | cs.LG | null | null | cs.LG | 20160330 | 20170228 | [
{
"id": "1609.01704"
},
{
"id": "1609.09106"
},
{
"id": "1602.08210"
},
{
"id": "1504.00941"
},
{
"id": "1606.01305"
},
{
"id": "1511.06464"
},
{
"id": "1604.03640"
},
{
"id": "1512.02595"
},
{
"id": "1607.06450"
},
{
"id": "1502.03044"
}
] |
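A minimal NumPy sketch of the reparameterized step behind the BN-LSTM variant described above, normalizing the input-to-hidden and hidden-to-hidden terms separately before summing them. All names are illustrative, and the per-timestep statistics used by the method are folded into `batch_norm` here for brevity:

```python
import numpy as np

def batch_norm(x, gamma, beta=0.0, eps=1e-5):
    # Normalize each feature over the batch dimension, then rescale.
    mean, var = x.mean(axis=0), x.var(axis=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bn_lstm_step(x_t, h_prev, c_prev, Wx, Wh, b, g_x, g_h, g_c, b_c):
    # The recurrent and input terms are batch-normalized separately
    # before being summed; the shifts of the two gate normalizations
    # are redundant with the bias b and therefore omitted.
    gates = batch_norm(x_t @ Wx, g_x) + batch_norm(h_prev @ Wh, g_h) + b
    i, f, o, g = np.split(gates, 4, axis=1)
    c_t = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)
    h_t = sigmoid(o) * np.tanh(batch_norm(c_t, g_c, b_c))
    return h_t, c_t
```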
1603.09320 | 22 | Algorithm 2 SEARCH-LAYER(q, ep, ef, lc)
Input: query element q, enter points ep, number of nearest to q elements to return ef, layer number lc
Output: ef closest neighbors to q
1 v ← ep // set of visited elements
2 C ← ep // set of candidates
3 W ← ep // dynamic list of found nearest neighbors
4 while │C│ > 0
5   c ← extract nearest element from C to q
6   f ← get furthest element from W to q
7   if distance(c, q) > distance(f, q)
8     break // all elements in W are evaluated
9   for each e ∈ neighbourhood(c) at layer lc // update C and W
10    if e ∉ v
11      v ← v ∪ e
12      f ← get furthest element from W to q
13      if distance(e, q) < distance(f, q) or │W│ < ef
14        C ← C ∪ e
15        W ← W ∪ e
16        if │W│ > ef
17          remove furthest element from W to q
18 return W | 1603.09320#22 | Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs | We present a new approach for the approximate K-nearest neighbor search based
on navigable small world graphs with controllable hierarchy (Hierarchical NSW,
HNSW). The proposed solution is fully graph-based, without any need for
additional search structures, which are typically used at the coarse search
stage of the most proximity graph techniques. Hierarchical NSW incrementally
builds a multi-layer structure consisting from hierarchical set of proximity
graphs (layers) for nested subsets of the stored elements. The maximum layer in
which an element is present is selected randomly with an exponentially decaying
probability distribution. This allows producing graphs similar to the
previously studied Navigable Small World (NSW) structures while additionally
having the links separated by their characteristic distance scales. Starting
search from the upper layer together with utilizing the scale separation boosts
the performance compared to NSW and allows a logarithmic complexity scaling.
Additional employment of a heuristic for selecting proximity graph neighbors
significantly increases performance at high recall and in case of highly
clustered data. Performance evaluation has demonstrated that the proposed
general metric space search index is able to strongly outperform previous
opensource state-of-the-art vector-only approaches. Similarity of the algorithm
to the skip list structure allows straightforward balanced distributed
implementation. | http://arxiv.org/pdf/1603.09320 | Yu. A. Malkov, D. A. Yashunin | cs.DS, cs.CV, cs.IR, cs.SI | 13 pages, 15 figures | null | cs.DS | 20160330 | 20180814 | [] |
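A direct Python transcription of alg. 2, reusing heapq for the two queues; `neighbors(e, layer)` and `dist(a, b)` are assumed helpers, and the sketch omits the engineering optimizations of the actual implementation:

```python
import heapq

def search_layer(q, enter_points, ef, layer, neighbors, dist):
    visited = set(enter_points)
    candidates = [(dist(q, e), e) for e in enter_points]  # min-heap C
    results = [(-d, e) for d, e in candidates]            # max-heap W
    heapq.heapify(candidates)
    heapq.heapify(results)
    while candidates:
        d_c, c = heapq.heappop(candidates)
        if d_c > -results[0][0]:
            break  # every remaining candidate is further than the worst result
        for e in neighbors(c, layer):
            if e in visited:
                continue
            visited.add(e)
            d_e = dist(q, e)
            if d_e < -results[0][0] or len(results) < ef:
                heapq.heappush(candidates, (d_e, e))
                heapq.heappush(results, (-d_e, e))
                if len(results) > ef:
                    heapq.heappop(results)  # drop the current furthest
    return [e for _, e in sorted((-nd, e) for nd, e in results)]
```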
1603.09025 | 23 | [Figure 3 graphic: learning curves for LSTM and BN-LSTM, the latter with batch and with population statistics; y-axis: mean bits per character; x-axes: training steps (panel a) and sequence length (panel b)]
(a) Performance in bits-per-character on length-100 subsequences of the Penn Treebank validation sequence during training. (b) Generalization to longer subsequences of Penn Treebank using population statistics. The subsequences are taken from the test sequence.
Figure 3: Penn Treebank evaluation
[Figure 4 graphic: error-rate training curves (train and validation) for the LSTM baseline and the BN-e*/BN-e** variants, plotted against training steps in thousands]
(a) Error rate on the validation set for the Attentive Reader models on a variant of the CNN QA task (Hermann et al., 2015). As detailed in Appendix C, the theoretical lower bound on the error rate on this task is 43%.
(b) Error rate on the validation set on the full CNN QA task from Hermann et al. (2015). | 1603.09025#23 | Recurrent Batch Normalization | We propose a reparameterization of LSTM that brings the benefits of batch
normalization to recurrent neural networks. Whereas previous works only apply
batch normalization to the input-to-hidden transformation of RNNs, we
demonstrate that it is both possible and beneficial to batch-normalize the
hidden-to-hidden transition, thereby reducing internal covariate shift between
time steps. We evaluate our proposal on various sequential problems such as
sequence classification, language modeling and question answering. Our
empirical results show that our batch-normalized LSTM consistently leads to
faster convergence and improved generalization. | http://arxiv.org/pdf/1603.09025 | Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville | cs.LG | null | null | cs.LG | 20160330 | 20170228 | [
{
"id": "1609.01704"
},
{
"id": "1609.09106"
},
{
"id": "1602.08210"
},
{
"id": "1504.00941"
},
{
"id": "1606.01305"
},
{
"id": "1511.06464"
},
{
"id": "1604.03640"
},
{
"id": "1512.02595"
},
{
"id": "1607.06450"
},
{
"id": "1502.03044"
}
] |
1603.09320 | 23 | Algorithm 3 SELECT-NEIGHBORS-SIMPLE(q, C, M)
Input: base element q, candidate elements C, number of neighbors to return M
Output: M nearest elements to q
1 return M nearest elements from C to q
Algorithm 5 K-NN-SEARCH(hnsw, q, K, ef)
Input: multilayer graph hnsw, query element q, number of nearest neighbors to return K, size of the dynamic candidate list ef
Output: K nearest elements to q
1 W ← ∅ // set for the current nearest elements
2 ep ← get enter point for hnsw
3 L ← level of ep // top layer for hnsw
4 for lc ← L … 1
5   W ← SEARCH-LAYER(q, ep, ef=1, lc)
6   ep ← get nearest element from W to q
7 W ← SEARCH-LAYER(q, ep, ef, lc=0)
8 return K nearest elements from W to q
During the first phase of the search the ef parameter is set to 1 (simple greedy search) to avoid introduction of additional parameters. | 1603.09320#23 | Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs | We present a new approach for the approximate K-nearest neighbor search based
on navigable small world graphs with controllable hierarchy (Hierarchical NSW,
HNSW). The proposed solution is fully graph-based, without any need for
additional search structures, which are typically used at the coarse search
stage of the most proximity graph techniques. Hierarchical NSW incrementally
builds a multi-layer structure consisting from hierarchical set of proximity
graphs (layers) for nested subsets of the stored elements. The maximum layer in
which an element is present is selected randomly with an exponentially decaying
probability distribution. This allows producing graphs similar to the
previously studied Navigable Small World (NSW) structures while additionally
having the links separated by their characteristic distance scales. Starting
search from the upper layer together with utilizing the scale separation boosts
the performance compared to NSW and allows a logarithmic complexity scaling.
Additional employment of a heuristic for selecting proximity graph neighbors
significantly increases performance at high recall and in case of highly
clustered data. Performance evaluation has demonstrated that the proposed
general metric space search index is able to strongly outperform previous
opensource state-of-the-art vector-only approaches. Similarity of the algorithm
to the skip list structure allows straightforward balanced distributed
implementation. | http://arxiv.org/pdf/1603.09320 | Yu. A. Malkov, D. A. Yashunin | cs.DS, cs.CV, cs.IR, cs.SI | 13 pages, 15 figures | null | cs.DS | 20160330 | 20180814 | [] |
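Alg. 5 then reduces to a thin driver over the layer search. A sketch reusing the `search_layer` function from the earlier snippet, with `hnsw.enter_point`, `hnsw.max_layer`, and `hnsw.neighbors` as assumed attributes:

```python
def knn_search(hnsw, q, K, ef, dist):
    ep = [hnsw.enter_point]
    for layer in range(hnsw.max_layer, 0, -1):  # greedy zoom-in, ef = 1
        ep = search_layer(q, ep, 1, layer, hnsw.neighbors, dist)
    w = search_layer(q, ep, ef, 0, hnsw.neighbors, dist)
    return w[:K]  # search_layer returns elements sorted by distance to q
```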
1603.09025 | 24 | Figure 4: Training curves on the CNN question-answering tasks.
of the input terms W_x x_t, but not for the recurrent terms W_h h_t or the cell output c_t. Doing so avoids many issues involving degenerate statistics due to input sequence padding.
Our fourth and final variant, BN-e**, is like BN-e* but bidirectional. The main difficulty in adapting to bidirectional models also involves padding. Padding poses no problem as long as it is properly ignored (by not updating the hidden states based on padded regions of the input). However, to perform the reverse application of a bidirectional model, it is common to simply reverse the padded sequences, thus moving the padding to the front. This causes similar problems as were observed on the sequential MNIST task (Section 5.1): the hidden states will not diverge during the initial timesteps and hence their variance will be severely underestimated. To get around this, we reverse only the unpadded portion of the input sequences and leave the padding in place.
See Appendix C for hyperparameters and task details. | 1603.09025#24 | Recurrent Batch Normalization | We propose a reparameterization of LSTM that brings the benefits of batch
normalization to recurrent neural networks. Whereas previous works only apply
batch normalization to the input-to-hidden transformation of RNNs, we
demonstrate that it is both possible and beneficial to batch-normalize the
hidden-to-hidden transition, thereby reducing internal covariate shift between
time steps. We evaluate our proposal on various sequential problems such as
sequence classification, language modeling and question answering. Our
empirical results show that our batch-normalized LSTM consistently leads to
faster convergence and improved generalization. | http://arxiv.org/pdf/1603.09025 | Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville | cs.LG | null | null | cs.LG | 20160330 | 20170228 | [
{
"id": "1609.01704"
},
{
"id": "1609.09106"
},
{
"id": "1602.08210"
},
{
"id": "1504.00941"
},
{
"id": "1606.01305"
},
{
"id": "1511.06464"
},
{
"id": "1604.03640"
},
{
"id": "1512.02595"
},
{
"id": "1607.06450"
},
{
"id": "1502.03044"
}
] |
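The BN-e** padding fix described above is easy to state in code. A minimal sketch, with illustrative names and data:

```python
import numpy as np

def reverse_unpadded(batch, lengths):
    # Reverse only the valid prefix of each padded sequence, leaving
    # the zero padding at the end rather than moving it to the front.
    out = batch.copy()
    for i, n in enumerate(lengths):
        out[i, :n] = batch[i, :n][::-1]
    return out

x = np.array([[1, 2, 3, 0, 0],
              [4, 5, 6, 7, 8]])
print(reverse_unpadded(x, [3, 5]))
# [[3 2 1 0 0]
#  [8 7 6 5 4]]
```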
1603.09320 | 24 | During the first phase of the search the ef parameter is set to 1 (simple greedy search) to avoid introduction of additional parameters.
When the search reaches the layer that is equal to or less than l, the second phase of the construction algorithm is initiated. The second phase differs in two points: 1) the ef parameter is increased from 1 to efConstruction in order to control the recall of the greedy search procedure; 2) the found closest neighbors on each layer are also used as candidates for the connections of the inserted element.
Two methods for the selection of M neighbors from the candidates were considered: simple connection to the closest elements (alg. 3) and the heuristic that accounts for the distances between the candidate elements to create connections in diverse directions (alg. 4), described in the Section 3. The heuristic has two additional parameters: extendCandidates (set to false by default), which extends the candidate set and is useful only for extremely clustered data, and keepPrunedConnections, which allows keeping a fixed number of connections per element. The maximum number of connections that an element can have per layer is defined by the parameter Mmax for every layer higher than zero (a special parameter Mmax0 is used for the ground layer separately). If a node is already full at the moment a new connection is made, then its extended connection list gets shrunk by the same algorithm that is used for neighbor selection (algs. 3 or 4). | 1603.09320#24 | Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs | We present a new approach for the approximate K-nearest neighbor search based
on navigable small world graphs with controllable hierarchy (Hierarchical NSW,
HNSW). The proposed solution is fully graph-based, without any need for
additional search structures, which are typically used at the coarse search
stage of the most proximity graph techniques. Hierarchical NSW incrementally
builds a multi-layer structure consisting from hierarchical set of proximity
graphs (layers) for nested subsets of the stored elements. The maximum layer in
which an element is present is selected randomly with an exponentially decaying
probability distribution. This allows producing graphs similar to the
previously studied Navigable Small World (NSW) structures while additionally
having the links separated by their characteristic distance scales. Starting
search from the upper layer together with utilizing the scale separation boosts
the performance compared to NSW and allows a logarithmic complexity scaling.
Additional employment of a heuristic for selecting proximity graph neighbors
significantly increases performance at high recall and in case of highly
clustered data. Performance evaluation has demonstrated that the proposed
general metric space search index is able to strongly outperform previous
opensource state-of-the-art vector-only approaches. Similarity of the algorithm
to the skip list structure allows straightforward balanced distributed
implementation. | http://arxiv.org/pdf/1603.09320 | Yu. A. Malkov, D. A. Yashunin | cs.DS, cs.CV, cs.IR, cs.SI | 13 pages, 15 figures | null | cs.DS | 20160330 | 20180814 | [] |
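A sketch of the second construction phase just described, with a hypothetical `graph` object; by default it uses the simple selection of alg. 3, and the same routine shrinks any neighbor whose connection list overflows Mmax:

```python
def connect_new_element(q, candidates, layer, M, M_max, graph, dist,
                        select=None):
    if select is None:  # alg. 3: simply take the M closest candidates
        select = lambda base, C, m: sorted(C, key=lambda e: dist(base, e))[:m]
    for e in select(q, candidates, M):
        graph.add_edge(q, e, layer)          # bidirectional link
        conns = graph.connections(e, layer)
        if len(conns) > M_max:               # neighbor is now overfull
            graph.set_connections(e, layer, select(e, conns, M_max))
```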
1603.09025 | 25 | See Appendix C for hyperparameters and task details.
Figure 4(a) shows the learning curves for the different variants of the attentive reader. BN-LSTM trains dramatically faster than the LSTM baseline. BN-everywhere in turn shows a significant improvement over BN-LSTM. In addition, both BN-LSTM and BN-everywhere show a generalization benefit over the baseline. The validation curves have minima of 50.3%, 49.5% and 50.0% for the baseline, BN-LSTM and BN-everywhere respectively. We emphasize that these results were obtained without any tweaking: all we did was to introduce batch normalization.
BN-e* and BN-e** converge faster yet, and reach lower minima: 47.1% and 43.9% respectively.
Model                                     CNN valid   CNN test
Attentive Reader (Hermann et al., 2015)   38.4        37.0
LSTM (ours)                               45.5        45.0
BN-e** (ours)                             37.9        36.3
Table 4: Error rates on the CNN question-answering task (Hermann et al., 2015). | 1603.09025#25 | Recurrent Batch Normalization | We propose a reparameterization of LSTM that brings the benefits of batch
normalization to recurrent neural networks. Whereas previous works only apply
batch normalization to the input-to-hidden transformation of RNNs, we
demonstrate that it is both possible and beneficial to batch-normalize the
hidden-to-hidden transition, thereby reducing internal covariate shift between
time steps. We evaluate our proposal on various sequential problems such as
sequence classification, language modeling and question answering. Our
empirical results show that our batch-normalized LSTM consistently leads to
faster convergence and improved generalization. | http://arxiv.org/pdf/1603.09025 | Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville | cs.LG | null | null | cs.LG | 20160330 | 20170228 | [
{
"id": "1609.01704"
},
{
"id": "1609.09106"
},
{
"id": "1602.08210"
},
{
"id": "1504.00941"
},
{
"id": "1606.01305"
},
{
"id": "1511.06464"
},
{
"id": "1604.03640"
},
{
"id": "1512.02595"
},
{
"id": "1607.06450"
},
{
"id": "1502.03044"
}
] |
1603.09320 | 25 | The insertion procedure terminates when the connections of the inserted elements are established on the zero layer.
The K-ANNS search algorithm used in Hierarchical NSW is presented in alg. 5. It is roughly equivalent to the insertion algorithm for an item with layer l=0. The difference is that the closest neighbors found at the ground layer which are used as candidates for the connections are now returned as the search result. The quality of the search is controlled by the ef parameter (corresponding to efConstruction in the construction algorithm). | 1603.09320#25 | Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs | We present a new approach for the approximate K-nearest neighbor search based
on navigable small world graphs with controllable hierarchy (Hierarchical NSW,
HNSW). The proposed solution is fully graph-based, without any need for
additional search structures, which are typically used at the coarse search
stage of the most proximity graph techniques. Hierarchical NSW incrementally
builds a multi-layer structure consisting from hierarchical set of proximity
graphs (layers) for nested subsets of the stored elements. The maximum layer in
which an element is present is selected randomly with an exponentially decaying
probability distribution. This allows producing graphs similar to the
previously studied Navigable Small World (NSW) structures while additionally
having the links separated by their characteristic distance scales. Starting
search from the upper layer together with utilizing the scale separation boosts
the performance compared to NSW and allows a logarithmic complexity scaling.
Additional employment of a heuristic for selecting proximity graph neighbors
significantly increases performance at high recall and in case of highly
clustered data. Performance evaluation has demonstrated that the proposed
general metric space search index is able to strongly outperform previous
opensource state-of-the-art vector-only approaches. Similarity of the algorithm
to the skip list structure allows straightforward balanced distributed
implementation. | http://arxiv.org/pdf/1603.09320 | Yu. A. Malkov, D. A. Yashunin | cs.DS, cs.CV, cs.IR, cs.SI | 13 pages, 15 figures | null | cs.DS | 20160330 | 20180814 | [] |
1603.09025 | 26 | Table 4: Error rates on the CNN question-answering task (Hermann et al., 2015).
We train and evaluate our best model, BN-e**, on the full task from (Hermann et al., 2015). On this dataset we had to reduce the number of hidden units to 120 to avoid severe overfitting. Training curves for BN-e** and a vanilla LSTM are shown in Figure 4(b). Table 4 reports performances of the early-stopped models.
# 6 CONCLUSION | 1603.09025#26 | Recurrent Batch Normalization | We propose a reparameterization of LSTM that brings the benefits of batch
normalization to recurrent neural networks. Whereas previous works only apply
batch normalization to the input-to-hidden transformation of RNNs, we
demonstrate that it is both possible and beneficial to batch-normalize the
hidden-to-hidden transition, thereby reducing internal covariate shift between
time steps. We evaluate our proposal on various sequential problems such as
sequence classification, language modeling and question answering. Our
empirical results show that our batch-normalized LSTM consistently leads to
faster convergence and improved generalization. | http://arxiv.org/pdf/1603.09025 | Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville | cs.LG | null | null | cs.LG | 20160330 | 20170228 | [
{
"id": "1609.01704"
},
{
"id": "1609.09106"
},
{
"id": "1602.08210"
},
{
"id": "1504.00941"
},
{
"id": "1606.01305"
},
{
"id": "1511.06464"
},
{
"id": "1604.03640"
},
{
"id": "1512.02595"
},
{
"id": "1607.06450"
},
{
"id": "1502.03044"
}
] |
1603.09320 | 26 | Algorithm 4 SELECT-NEIGHBORS-HEURISTIC(q, C, M, lc, extendCandidates, keepPrunedConnections)
Input: base element q, candidate elements C, number of neighbors to return M, layer number lc, flag indicating whether or not to extend candidate list extendCandidates, flag indicating whether or not to add discarded elements keepPrunedConnections
Output: M elements selected by the heuristic
1 R ← ∅
2 W ← C // working queue for the candidates
3 if extendCandidates // extend candidates by their neighbors
4   for each e ∈ C
5     for each eadj ∈ neighbourhood(e) at layer lc
6       if eadj ∉ W
7         W ← W ∪ eadj
8 Wd ← ∅ // queue for the discarded candidates
9 while │W│ > 0 and │R│ < M
10   e ← extract nearest element from W to q
11   if e is closer to q compared to any element from R
12     R ← R ∪ e
13   else
14     Wd ← Wd ∪ e
15 if keepPrunedConnections // add some of the discarded connections from Wd
16   while │Wd│ > 0 and │R│ < M
17     R ← R ∪ extract nearest element from Wd to q
18 return R | 1603.09320#26 | Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs | We present a new approach for the approximate K-nearest neighbor search based
on navigable small world graphs with controllable hierarchy (Hierarchical NSW,
HNSW). The proposed solution is fully graph-based, without any need for
additional search structures, which are typically used at the coarse search
stage of the most proximity graph techniques. Hierarchical NSW incrementally
builds a multi-layer structure consisting from hierarchical set of proximity
graphs (layers) for nested subsets of the stored elements. The maximum layer in
which an element is present is selected randomly with an exponentially decaying
probability distribution. This allows producing graphs similar to the
previously studied Navigable Small World (NSW) structures while additionally
having the links separated by their characteristic distance scales. Starting
search from the upper layer together with utilizing the scale separation boosts
the performance compared to NSW and allows a logarithmic complexity scaling.
Additional employment of a heuristic for selecting proximity graph neighbors
significantly increases performance at high recall and in case of highly
clustered data. Performance evaluation has demonstrated that the proposed
general metric space search index is able to strongly outperform previous
opensource state-of-the-art vector-only approaches. Similarity of the algorithm
to the skip list structure allows straightforward balanced distributed
implementation. | http://arxiv.org/pdf/1603.09320 | Yu. A. Malkov, D. A. Yashunin | cs.DS, cs.CV, cs.IR, cs.SI | 13 pages, 15 figures | null | cs.DS | 20160330 | 20180814 | [] |
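A Python sketch of the core of alg. 4, leaving out the extendCandidates option; the diversity test admits a candidate only if it is closer to the base element than to every neighbor selected so far:

```python
def select_neighbors_heuristic(q, candidates, M, dist, keep_pruned=False):
    result, discarded = [], []
    for e in sorted(candidates, key=lambda c: dist(q, c)):
        if len(result) >= M:
            break
        if all(dist(q, e) < dist(r, e) for r in result):
            result.append(e)      # e opens a new "direction" around q
        else:
            discarded.append(e)
    if keep_pruned:               # refill up to M from the nearest discards
        result.extend(discarded[:M - len(result)])
    return result
```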
1603.09025 | 27 | # 6 CONCLUSION
Contrary to previous findings by Laurent et al. (2016) and Amodei et al. (2015), we have demonstrated that batch-normalizing the hidden states of recurrent neural networks greatly improves optimization. Indeed, doing so yields benefits similar to those of batch normalization in feed-forward neural networks: our proposed BN-LSTM trains faster and generalizes better on a variety of tasks including language modeling and question-answering. We have argued that proper initialization of the batch normalization parameters is crucial, and suggest that previous difficulties (Laurent et al., 2016; Amodei et al., 2015) were due in large part to improper initialization. Finally, we have shown our model to apply to complex settings involving variable-length data, bidirectionality and highly nonlinear attention mechanisms.
# ACKNOWLEDGEMENTS | 1603.09025#27 | Recurrent Batch Normalization | We propose a reparameterization of LSTM that brings the benefits of batch
normalization to recurrent neural networks. Whereas previous works only apply
batch normalization to the input-to-hidden transformation of RNNs, we
demonstrate that it is both possible and beneficial to batch-normalize the
hidden-to-hidden transition, thereby reducing internal covariate shift between
time steps. We evaluate our proposal on various sequential problems such as
sequence classification, language modeling and question answering. Our
empirical results show that our batch-normalized LSTM consistently leads to
faster convergence and improved generalization. | http://arxiv.org/pdf/1603.09025 | Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville | cs.LG | null | null | cs.LG | 20160330 | 20170228 | [
{
"id": "1609.01704"
},
{
"id": "1609.09106"
},
{
"id": "1602.08210"
},
{
"id": "1504.00941"
},
{
"id": "1606.01305"
},
{
"id": "1511.06464"
},
{
"id": "1604.03640"
},
{
"id": "1512.02595"
},
{
"id": "1607.06450"
},
{
"id": "1502.03044"
}
] |
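The initialization point in the conclusion can be illustrated numerically. After batch normalization the pre-activation standard deviation equals the scale γ, and the paper recommends initializing γ to a small value (0.1 in their experiments); the sketch below shows why a unit-variance start shrinks the expected tanh derivative:

```python
import numpy as np

rng = np.random.default_rng(0)
for gamma in (0.1, 1.0):
    z = gamma * rng.standard_normal(100_000)  # BN output has std = gamma
    print(gamma, round(float((1 - np.tanh(z) ** 2).mean()), 3))
# gamma = 0.1 keeps the mean tanh derivative near 1.0;
# gamma = 1.0 pushes units toward saturation and shrinks it markedly.
```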
1603.09320 | 27 | 4.1 Influence of the construction parameters Algorithm construction parameters mL and Mmax0 are responsible for maintaining the small world navigability in the constructed graphs. Setting mL to zero (this corresponds to a single layer in the graph) and Mmax0 to M leads to production of directed k-NN graphs with a power-law search complexity well studied before [21, 29] (assuming using the alg. 3 for neighbor selection). Setting mL to zero and Mmax0 to infinity leads to production of NSW graphs with polylogarithmic complexity [25, 26]. Finally, setting mL to some non-zero value leads to emergence of controllable hierarchy graphs which allow logarithmic search complexity by introduction of layers (see the Section 3).
To achieve the optimum performance advantage of the controllable hierarchy, the overlap between neighbors on different layers (i.e. percent of element neighbors that also belong to other layers) has to be small. In order to decrease the overlap we need to decrease mL. However, at the same time, decreasing mL leads to an increase of the average hop number during a greedy search on each layer, which negatively affects the performance. This leads to the existence of an optimal value for the mL parameter. | 1603.09320#27 | Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs | We present a new approach for the approximate K-nearest neighbor search based
on navigable small world graphs with controllable hierarchy (Hierarchical NSW,
HNSW). The proposed solution is fully graph-based, without any need for
additional search structures, which are typically used at the coarse search
stage of the most proximity graph techniques. Hierarchical NSW incrementally
builds a multi-layer structure consisting from hierarchical set of proximity
graphs (layers) for nested subsets of the stored elements. The maximum layer in
which an element is present is selected randomly with an exponentially decaying
probability distribution. This allows producing graphs similar to the
previously studied Navigable Small World (NSW) structures while additionally
having the links separated by their characteristic distance scales. Starting
search from the upper layer together with utilizing the scale separation boosts
the performance compared to NSW and allows a logarithmic complexity scaling.
Additional employment of a heuristic for selecting proximity graph neighbors
significantly increases performance at high recall and in case of highly
clustered data. Performance evaluation has demonstrated that the proposed
general metric space search index is able to strongly outperform previous
opensource state-of-the-art vector-only approaches. Similarity of the algorithm
to the skip list structure allows straightforward balanced distributed
implementation. | http://arxiv.org/pdf/1603.09320 | Yu. A. Malkov, D. A. Yashunin | cs.DS, cs.CV, cs.IR, cs.SI | 13 pages, 15 figures | null | cs.DS | 20160330 | 20180814 | [] |
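The mL discussion above corresponds to a one-line level generator. A sketch, using the paper's suggested mL = 1/ln(M) for illustration:

```python
import math, random

def random_level(mL: float) -> int:
    # Exponentially decaying distribution over layers; mL = 0 collapses
    # the structure to a single layer (a plain k-NN graph).
    if mL == 0.0:
        return 0
    return int(-math.log(1.0 - random.random()) * mL)

mL = 1.0 / math.log(16)   # the 1/ln(M) choice suggested in the paper, M = 16
levels = [random_level(mL) for _ in range(100_000)]
print(max(levels), sum(l == 0 for l in levels) / len(levels))
# ~94% of elements stay on layer 0; each higher layer is ~16x rarer
```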
1603.09025 | 28 | # ACKNOWLEDGEMENTS
The authors would like to acknowledge the following agencies for research funding and computing support: the Nuance Foundation, Samsung, NSERC, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR. Experiments were carried out using the Theano (Team et al., 2016) and the Blocks and Fuel (van Merriënboer et al., 2015) libraries for scientific computing. We thank David Krueger, Saizheng Zhang, Ishmael Belghazi and Yoshua Bengio for discussions and suggestions.
# REFERENCES
D. Amodei et al. Deep speech 2: End-to-end speech recognition in English and Mandarin. arXiv:1512.02595, 2015.
M. Arjovsky, A. Shah, and Y. Bengio. Unitary evolution recurrent neural networks. arXiv:1511.06464, 2015.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv:1607.06450, 2016.
D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015. | 1603.09025#28 | Recurrent Batch Normalization | We propose a reparameterization of LSTM that brings the benefits of batch
normalization to recurrent neural networks. Whereas previous works only apply
batch normalization to the input-to-hidden transformation of RNNs, we
demonstrate that it is both possible and beneficial to batch-normalize the
hidden-to-hidden transition, thereby reducing internal covariate shift between
time steps. We evaluate our proposal on various sequential problems such as
sequence classification, language modeling and question answering. Our
empirical results show that our batch-normalized LSTM consistently leads to
faster convergence and improved generalization. | http://arxiv.org/pdf/1603.09025 | Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville | cs.LG | null | null | cs.LG | 20160330 | 20170228 | [
{
"id": "1609.01704"
},
{
"id": "1609.09106"
},
{
"id": "1602.08210"
},
{
"id": "1504.00941"
},
{
"id": "1606.01305"
},
{
"id": "1511.06464"
},
{
"id": "1604.03640"
},
{
"id": "1512.02595"
},
{
"id": "1607.06450"
},
{
"id": "1502.03044"
}
] |