This mental model is approximate and probabilistic, and oversimplified and incomplete in many ways. Still, it is rich enough to support mental simulations that can predict how objects will move in the immediate future, either on their own or in response to forces we might apply. This "intuitive physics engine" approach enables flexible adaptation to a wide range of everyday scenarios and judgments in a way that goes beyond perceptual cues. For example (Figure 4), a physics-engine reconstruction of a tower of wooden blocks from the game Jenga can be used to predict whether (and how) a tower will fall, finding close quantitative fits to how adults make these predictions (Battaglia et al., 2013), as well as simpler kinds of physical predictions that have been studied in infants (Téglás et al., 2011). Simulation-based models can also capture how people make hypothetical or counterfactual predictions: What would happen if certain blocks are taken away, more blocks are added, or the table supporting the tower is jostled? What if certain blocks were glued together, or attached to the table surface? What if the blocks were made of different materials (Styrofoam, lead, ice)? What if the blocks of one color were much heavier than other colors?
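To make the simulation-based account concrete, here is a minimal sketch of the idea (our illustration, not the Battaglia et al. (2013) model, which uses a full 3D physics engine): perceptual uncertainty about where the blocks sit is captured by jittering their perceived positions, each noisy percept is run through a crude two-dimensional stability check, and the fraction of simulated configurations that topple serves as the judged probability that the tower will fall. The block sizes and noise level are illustrative assumptions.

```python
import random

# A tower is a list of blocks stacked bottom-to-top; each block is
# (x_center, width). All numbers are illustrative, not taken from the paper.
TOWER = [(0.0, 1.0), (0.1, 1.0), (-0.2, 1.0), (0.35, 1.0)]

def tower_falls(blocks):
    """Crude 2D stability check: the tower 'falls' if, at any level, the center
    of mass of the blocks above lies outside the edges of the supporting block."""
    for i in range(len(blocks) - 1):
        above = blocks[i + 1:]
        com = sum(x for x, _ in above) / len(above)  # equal block masses assumed
        x_support, w_support = blocks[i]
        if abs(com - x_support) > w_support / 2:
            return True
    return False

def prob_falls(blocks, perceptual_noise=0.1, n_samples=1000):
    """Monte Carlo mental simulation over noisy percepts of the block positions."""
    falls = 0
    for _ in range(n_samples):
        noisy = [(x + random.gauss(0.0, perceptual_noise), w) for x, w in blocks]
        falls += tower_falls(noisy)
    return falls / n_samples

if __name__ == "__main__":
    print("P(fall) for the observed tower:", prob_falls(TOWER))
    # Counterfactual queries of the kind described above:
    print("P(fall) with the top block removed:", prob_falls(TOWER[:-1]))
    print("P(fall) with sharper perception:", prob_falls(TOWER, perceptual_noise=0.02))
```

The same simulator answers the counterfactual questions above simply by editing the scene description and re-running the simulation, which is the kind of flexibility being contrasted with task-specific pattern recognition.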
Each of these physical judgments may require new features or new training for a pattern recognition account to work at the same level as the model-based simulator. What are the prospects for embedding or acquiring this kind of intuitive physics in deep learning systems? Connectionist models in psychology have previously been applied to physical reasoning tasks such as balance-beam rules (McClelland, 1988; Shultz, 2003) or rules relating distance, velocity, and time in motion (Buckingham & Shultz, 2000), but these networks do not attempt to take complex scenes as input or to handle the wide range of scenarios and judgments illustrated in Figure 4.
A recent paper from Facebook AI researchers (Lerer, Gross, & Fergus, 2016) represents an exciting step in this direction. Lerer et al. (2016) trained a deep convolutional network-based system (PhysNet) to predict the stability of block towers from simulated images similar to those in Figure 4A, but with much simpler configurations of two, three or four cubical blocks stacked vertically. Impressively, PhysNet generalized to simple real images of block towers, matching human performance on these images while exceeding human performance on synthetic images. Human and PhysNet confidence were also correlated across towers, although not as strongly as for the approximate probabilistic simulation models and experiments of Battaglia et al. (2013). One limitation is that PhysNet currently requires extensive training (between 100,000 and 200,000 scenes) to learn judgments for just a single task (will the tower fall?) on a narrow range of scenes (towers with two to four cubes). It has been shown to generalize, but also only in limited ways (e.g., from towers of two and three cubes to towers of four cubes). In contrast, people require far less experience to perform any particular task, and can generalize to many novel judgments and complex scenes with no new training required (although they receive large amounts of physics experience through interacting with the world more generally). Could deep learning systems such as PhysNet capture this flexibility, without explicitly simulating the causal interactions between objects in three dimensions?
We are not sure, but we hope this is a challenge they will take on. Alternatively, instead of trying to make predictions without simulating physics, could neural networks be trained to emulate a general-purpose physics simulator, given the right type and quantity of training data, such as the raw input experienced by a child? This is an active and intriguing area of research, but it too faces significant challenges. For networks trained on object classification, deeper layers often become sensitive to successively higher-level features, from edges to textures to shape-parts to full objects (Yosinski, Clune, Bengio, & Lipson, 2014; Zeiler & Fergus, 2014). For deep networks trained on physics-related data, it remains to be seen whether higher layers will encode objects, general physical properties, forces, and approximately Newtonian dynamics. A generic network trained on dynamic pixel data might learn an implicit representation of these concepts, but would it generalize broadly beyond training contexts as people's more explicit physical concepts do? Consider for example a network that learns to predict the trajectories of several balls bouncing in a box (Kodratoff & Michalski, 2014). If this network has actually learned something like Newtonian mechanics, then it should be able to generalize to interestingly different scenarios: at a minimum, different numbers of differently shaped objects, bouncing in boxes of different shapes and sizes and orientations with respect to gravity, not to mention more severe generalization tests such as all of the tower tasks discussed above, which also fall under the Newtonian domain.
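The following toy sketch makes that generalization test concrete (it is our illustration, not a model from the cited work): a nearest-neighbor lookup over observed state transitions stands in for a learned dynamics model, it is fit to a single ball bouncing in one box, and it is then evaluated in a wider box. A learner that had internalized something like the underlying mechanics would transfer; a learner that has only memorized statistics of the training box predicts spurious bounces where the old wall used to be. The box sizes, time step, and evaluation protocol are illustrative assumptions.

```python
import numpy as np

def simulate(lo, hi, steps, dt=0.1, seed=0):
    """A 1D ball bouncing elastically between walls at positions lo and hi."""
    rng = np.random.default_rng(seed)
    x, v = rng.uniform(lo, hi), 1.0
    states = []
    for _ in range(steps):
        states.append((x, v))
        x += v * dt
        if x < lo or x > hi:          # elastic bounce off a wall
            v, x = -v, float(np.clip(x, lo, hi))
    return np.array(states)

class NearestNeighborDynamics:
    """Stand-in for a learned dynamics model: predict the next state by
    looking up the most similar state transition seen during training."""
    def __init__(self, states):
        self.inputs, self.targets = states[:-1], states[1:]

    def predict(self, state):
        distances = np.sum((self.inputs - state) ** 2, axis=1)
        return self.targets[np.argmin(distances)]

def one_step_error(model, states):
    predictions = np.array([model.predict(s) for s in states[:-1]])
    return float(np.mean((predictions - states[1:]) ** 2))

model = NearestNeighborDynamics(simulate(0.0, 1.0, steps=5000))
print("error in the training box [0, 1]:", one_step_error(model, simulate(0.0, 1.0, 500, seed=1)))
print("error in a wider box [0, 2]:     ", one_step_error(model, simulate(0.0, 2.0, 500, seed=2)))
```

In this toy setting, the memorized model does well inside the training box but predicts a bounce where the old wall used to be once the ball moves past it; the more severe tests described above (new shapes, new gravity orientations, tower scenes) would fail for the same reason.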
Neural network researchers have yet to take on this challenge, but we hope they will. Whether such models can be learned with the kind (and quantity) of data available to human infants is not clear, as we discuss further in Section 5. It may be difficult to integrate object and physics-based primitives into deep neural networks, but the payoff in terms of learning speed and performance could be great for many tasks. Consider the case of learning to play Frostbite. Although it can be difficult to discern exactly how a network learns to solve a particular task, the DQN probably does not parse a Frostbite screenshot in terms of stable objects or sprites moving according to the rules of intuitive physics (Figure 2). But incorporating a physics-engine-based representation could help DQNs learn to play games such as Frostbite in a faster and more general way, whether the physics knowledge is captured implicitly in a neural network or more explicitly in a simulator. Beyond reducing the amount of training data and
potentially improving the level of performance reached by the DQN, it could eliminate the need to retrain a Frostbite network if the objects (e.g., birds, ice floes, and fish) are slightly altered in their behavior, reward structure, or appearance. When a new object type such as a bear is introduced, as in the later levels of Frostbite (Figure 2D), a network endowed with intuitive physics would also have an easier time adding this object type to its knowledge (the challenge of adding new objects was also discussed in Marcus, 1998, 2001). In this way, the integration of intuitive physics and deep learning could be an important step towards more human-like learning algorithms.

# 4.1.2 Intuitive psychology

Intuitive psychology is another early-emerging ability with an important influence on human learning and thought. Pre-verbal infants distinguish animate agents from inanimate objects. This distinction is partially based on innate or early-present detectors for low-level cues, such as the presence of eyes, motion initiated from rest, and biological motion (Johnson, Slaughter, & Carey, 1998; Premack & Premack, 1997; Schlottmann, Ray, Mitchell, & Demetriou, 2006; Tremoulet & Feldman, 2000). Such cues are often sufficient but not necessary for the detection of agency. Beyond these low-level cues, infants also expect agents to act contingently and reciprocally, to have goals, and to take efficient actions towards those goals subject to constraints (Csibra, 2008; Csibra, Biro, Koos, & Gergely, 2003; Spelke & Kinzler, 2007). These goals can be socially directed; at around three months of age, infants begin to discriminate anti-social agents that hurt or hinder others from neutral agents (Hamlin, 2013; Hamlin, Wynn, & Bloom, 2010), and they later distinguish between anti-social, neutral, and pro-social agents (Hamlin, Ullman, Tenenbaum, Goodman, & Baker, 2013; Hamlin, Wynn, & Bloom, 2007). It is generally agreed that infants expect agents to act in a goal-directed, efficient,
and socially sensitive fashion (Spelke & Kinzler, 2007). What is less agreed on is the computational architecture that supports this reasoning and whether it includes any reference to mental states and explicit goals. One possibility is that intuitive psychology is simply cues "all the way down" (Schlottmann, Cole, Watts, & White, 2013; Scholl & Gao, 2013), though this would require more and more cues as the scenarios become more complex. Consider for example a scenario in which an agent A is moving towards a box, and an agent B moves in a way that blocks A from reaching the box. Infants and adults are likely to interpret B's behavior as "hindering" (Hamlin, 2013). This inference could be captured by a cue that states "if an agent's expected trajectory is prevented from completion, the blocking agent is given some negative association." While the cue is easily calculated, the scenario is also easily changed to necessitate a different type of cue. Suppose A was already negatively associated (a "bad guy"); acting negatively towards A could then be seen as good (Hamlin, 2013). Or suppose something harmful was in the box, which A didn't
know about. Now B would be seen as helping, protecting, or defending A. Suppose A knew there was something bad in the box and wanted it anyway. B could be seen as acting paternalistically. A cue-based account would be twisted into gnarled combinations such as "If an expected trajectory is prevented from completion, the blocking agent is given some negative association, unless that trajectory leads to a negative outcome or the blocking agent is previously associated as positive, or the blocked agent is previously associated as negative, or..."
One alternative to a cue-based account is to use generative models of action choice, as in the Bayesian inverse planning (or "Bayesian theory-of-mind") models of Baker, Saxe, and Tenenbaum (2009) or the "naive utility calculus" models of Jara-Ettinger, Gweon, Tenenbaum, and Schulz (2015) (see also Jern and Kemp (2015) and Tauber and Steyvers (2011), and a related alternative based on predictive coding from Kilner, Friston, and Frith (2007)). These models formalize explicitly mentalistic concepts such as
"goal," "agent," "planning," "cost," "efficiency," and "belief," used to describe core psychological reasoning in infancy. They assume adults and children treat agents as approximately rational planners who choose the most efficient means to their goals. Planning computations may be formalized as solutions to Markov Decision Processes (or POMDPs), taking as input utility and belief functions defined over an agent's state-space and the agent's state-action transition functions, and returning a series of actions the agent should perform to most efficiently fulfill their goals (or maximize their utility). By simulating these planning processes, people can predict what agents might do next, or use inverse reasoning from observing a series of actions to infer the utilities and beliefs of agents in a scene. This is directly analogous to how simulation engines can be used for intuitive physics, to predict what will happen next in a scene or to infer objects' dynamical properties from how they move. It yields similarly flexible reasoning abilities: utilities and beliefs can be adjusted to take into account how agents might act for a wide range of novel goals and situations. Importantly, unlike in intuitive physics, simulation-based reasoning in intuitive psychology can be nested recursively to understand social interactions: we can think about agents thinking about other agents. As in the case of intuitive physics, the success that generic deep networks will have in capturing intuitive psychological reasoning will depend in part on the representations humans use. Although deep networks have not yet been applied to scenarios involving theory-of-mind and intuitive psychology, they could probably learn visual cues, heuristics, and summary statistics of a scene that happens to involve agents.5 If that is all that underlies human psychological reasoning, a data-driven deep learning approach can likely find success in this domain.
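A minimal sketch of the inverse-planning idea follows (our illustration, not the implementation of the cited models): value iteration solves a small grid-world MDP for each candidate goal, observed actions are scored under a softmax-rational action model, and Bayes' rule turns those scores into a posterior over which goal the agent is pursuing. The grid size, goal locations, and rationality parameter are illustrative assumptions.

```python
import numpy as np

SIZE = 5                                   # 5x5 grid world (illustrative)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]
GOALS = [(0, 4), (4, 4)]                   # two candidate goal locations

def step(state, action):
    r, c = state[0] + action[0], state[1] + action[1]
    return (min(max(r, 0), SIZE - 1), min(max(c, 0), SIZE - 1))

def value_iteration(goal, gamma=0.95, iters=100):
    """Solve the MDP for one candidate goal (reward 1 at the goal, 0 elsewhere)."""
    V = np.zeros((SIZE, SIZE))
    for _ in range(iters):
        Q = np.zeros((SIZE, SIZE, len(ACTIONS)))
        for r in range(SIZE):
            for c in range(SIZE):
                for a, act in enumerate(ACTIONS):
                    nr, nc = step((r, c), act)
                    reward = 1.0 if (nr, nc) == goal else 0.0
                    Q[r, c, a] = reward + gamma * V[nr, nc]
        V = Q.max(axis=2)
    return Q

def action_likelihoods(Q, beta=3.0):
    """Softmax-rational agent: higher-value actions are exponentially more likely."""
    expQ = np.exp(beta * Q)
    return expQ / expQ.sum(axis=2, keepdims=True)

def posterior_over_goals(trajectory):
    """P(goal | observed state-action pairs), with a uniform prior over goals."""
    log_post = np.zeros(len(GOALS))
    for g, goal in enumerate(GOALS):
        policy = action_likelihoods(value_iteration(goal))
        for (r, c), a in trajectory:
            log_post[g] += np.log(policy[r, c, a])
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

# The agent starts at (4, 0) and moves up twice (toward the top of the grid).
observed = [((4, 0), 0), ((3, 0), 0)]      # action index 0 = move up (row - 1)
print(dict(zip(GOALS, posterior_over_goals(observed))))
```

Helping and hindering can be treated the same way: instead of asking which goal location best explains the observed actions, one asks whether the actions are best explained by a utility that increases (or decreases) when the other agent succeeds.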
However, it seems to us that any full formal account of intuitive psychological reasoning needs to include representations of agency, goals, efficiency, and reciprocal relations. As with objects and forces, it is unclear whether a complete representation of these concepts (agents, goals, etc.) could emerge from deep neural networks trained in a purely predictive capacity. Similar to the intuitive physics domain, it is possible that with a tremendous number of training trajectories in a variety of scenarios, deep learning techniques could approximate the reasoning found in infancy even without learning anything about goal-directed or social-directed behavior more generally. But this is also unlikely to resemble how humans learn, understand, and apply intuitive psychology unless the concepts are genuine. In the same way that altering the setting of a scene or the target of inference in a physics-related task is difficult to generalize to without an understanding of objects, altering the setting of an agent or their goals and beliefs is difficult to reason about without understanding intuitive psychology.
5 While connectionist networks have been used to model the general transition that children undergo between the ages of 3 and 4 regarding false belief (e.g., Berthiaume, Shultz, & Onishi, 2013), we are referring here to scenarios which require inferring goals, utilities, and relations.

In introducing the Frostbite challenge, we discussed how people can learn to play the game extremely quickly by watching an experienced player for just a few minutes and then playing a few rounds themselves.
Intuitive psychology provides a basis for efficient learning from others, especially in teaching settings with the goal of communicating knowledge efficiently (Shafto, Goodman, & Griffiths, 2014). In the case of watching an expert play Frostbite, whether or not there is an explicit goal to teach, intuitive psychology lets us infer the beliefs, desires, and intentions of the experienced player. For instance, we can learn that the birds are to be avoided from seeing how the experienced player appears to avoid them. We do not need to experience a single example of encountering a bird (and watching the Frostbite Bailey die because of the bird) in order to infer that birds are probably dangerous. It is enough to see that the experienced player's avoidance behavior is best explained as acting under that belief. Similarly, consider how a sidekick agent (increasingly popular in video games) is expected to help a player achieve their goals. This agent can be useful in different ways under different circumstances, such as getting items, clearing paths, fighting, defending, healing, and providing information, all under the general notion of being helpful (Macindoe, 2013). An explicit agent representation can predict how such an agent will be helpful in new circumstances, while a bottom-up pixel-based representation is likely to struggle. There are several ways that intuitive psychology could be incorporated into contemporary deep learning systems. While it could be built in, intuitive psychology may arise in other ways. Connectionists have argued that innate constraints in the form of hard-wired cortical circuits are unlikely (Elman, 2005; Elman et al., 1996), but a simple inductive bias, for example the tendency to notice things that move other things, can bootstrap reasoning about more abstract concepts of agency (S. Ullman, Harari, & Dorfman, 2012).6 Similarly, a great deal of goal-directed and socially-directed actions can also be boiled down to a simple utility calculus (e.g., Jara-Ettinger et al., 2015), in a way that could be shared with other cognitive abilities. While the origins of intuitive psychology are still a matter of debate, it is clear that these abilities are early-emerging and play an important role in human learning and thought, as exemplified
in the Frostbite challenge and when learning to play novel video games more broadly.

# 4.2 Learning as rapid model building

Since their inception, neural network models have stressed the importance of learning. There are many learning algorithms for neural networks, including the perceptron algorithm (Rosenblatt, 1958), Hebbian learning (Hebb, 1949), the BCM rule (Bienenstock, Cooper, & Munro, 1982), backpropagation (Rumelhart, Hinton, & Williams, 1986), the wake-sleep algorithm (Hinton, Dayan, Frey, & Neal, 1995), and contrastive divergence (Hinton, 2002). Whether the goal is supervised or unsupervised learning, these algorithms implement learning as a process of gradual adjustment of connection strengths. For supervised learning, the updates are usually aimed at improving the algorithm's pattern recognition capabilities. For unsupervised learning, the updates work towards gradually matching the statistics of the model's internal patterns with the statistics of the input data.
6 We must be careful here about what "simple" means. An inductive bias may appear simple in the sense that we can compactly describe it, but it may require complex computation (e.g., motion analysis, parsing images into objects, etc.) just to produce its inputs in a suitable form.

In recent years, machine learning has found particular success using backpropagation and large data sets to solve difficult pattern recognition problems. While these algorithms have reached human-level performance on several challenging benchmarks, they are still far from matching human-level learning in other ways. Deep neural networks often need more data than people do in order to solve the same types of problems, whether it is learning to recognize a new type of object or learning to play a new game. When learning the meanings of words in their native language, children make meaningful generalizations from very sparse data (Carey & Bartlett, 1978; Landau, Smith, & Jones, 1988;
E. M. Markman, 1989; Smith, Jones, Landau, Gershkoff-Stowe, & Samuelson, 2002; F. Xu & Tenenbaum, 2007, although see Horst and Samuelson, 2008, regarding memory limitations). Children may only need to see a few examples of the concepts hairbrush, pineapple, or lightsaber before they largely "get it," grasping the boundary of the infinite set that defines each concept from the infinite set of all possible objects. Children are far more practiced than adults at learning new concepts, learning roughly nine or ten new words each day after beginning to speak through the end of high school (Bloom, 2000; Carey, 1978), yet the ability for rapid "one-shot"
learning does not disappear in adulthood. An adult may need to see only a single image or movie of a novel two-wheeled vehicle to infer the boundary between this concept and others, allowing him or her to discriminate new examples of that concept from similar-looking objects of a different type (Fig. 1B-i). Contrasting with the efficiency of human learning, neural networks (by virtue of their generality as highly flexible function approximators) are notoriously data hungry (the bias/variance dilemma; Geman, Bienenstock, & Doursat, 1992). Benchmark tasks such as the ImageNet data set for object recognition provide hundreds or thousands of examples per class (Krizhevsky et al., 2012; Russakovsky et al., 2015): 1000 hairbrushes, 1000 pineapples, etc. In the context of learning new handwritten characters or learning to play Frostbite, the MNIST benchmark includes 6000 examples of each handwritten digit (LeCun et al., 1998), and the DQN of V. Mnih et al. (2015) played each Atari video game for approximately 924 hours of unique training experience (Figure 3). In both cases, the algorithms are clearly using information less efficiently than a person learning to perform the same tasks. It is also important to mention that there are many classes of concepts that people learn more slowly. Concepts that are learned in school are usually far more challenging and more difficult to acquire, including mathematical functions, logarithms, derivatives, integrals, atoms, electrons, gravity, DNA, evolution, etc. There are also domains for which machine learners outperform human learners, such as combing through financial or weather data. But for the vast majority of cognitively natural concepts (the types of things that children learn as the meanings of words) people are still far better learners than machines. This is the type of learning we focus on in this section, which is more suitable for the enterprise of reverse engineering and articulating additional principles that make human learning successful. It also opens the possibility of building these ingredients into the next generation of machine learning and AI algorithms, with potential for making progress on learning concepts that are both easy and difficult for humans to acquire.
Even with just a few examples, people can learn remarkably rich conceptual models. One indicator of richness is the variety of functions that these models support (A. B. Markman & Ross, 2003; Solomon, Medin, & Lynch, 1999). Beyond classification, concepts support prediction (Murphy & Ross, 1994; Rips, 1975), action (Barsalou, 1983), communication (A. B. Markman & Makin, 1998), imagination (Jern & Kemp, 2013; Ward, 1994), explanation (Lombrozo, 2009; Williams
& Lombrozo, 2010), and composition (Murphy, 1988; Osherson & Smith, 1981). These abilities are not independent; rather, they hang together and interact (Solomon et al., 1999), coming for free with the acquisition of the underlying concept. Returning to the previous example of a novel two-wheeled vehicle, a person can sketch a range of new instances (Figure 1B-ii), parse the concept into its most important components (Figure 1B-iii), or even create a new complex concept through the combination of familiar concepts (Figure 1B-iv). Likewise, as discussed in the context of Frostbite, a learner who has acquired the basics of the game could flexibly apply their knowledge to an infinite set of Frostbite variants (Section 3.2). The acquired knowledge supports reconfiguration to new tasks and new demands, such as modifying the goals of the game to survive while acquiring as few points as possible, or to efficiently teach the rules to a friend.
This richness and flexibility suggest that learning as model building is a better metaphor than learning as pattern recognition. Furthermore, the human capacity for one-shot learning suggests that these models are built upon rich domain knowledge rather than starting from a blank slate (Mikolov, Joulin, & Baroni, 2016; Mitchell, Keller, & Kedar-Cabelli, 1986). In contrast, much of the recent progress in deep learning has been on pattern recognition problems, including object recognition, speech recognition, and (model-free) video game learning, that utilize large data sets and little domain knowledge. There has been recent work on other types of tasks, including learning generative models of images (Denton, Chintala, Szlam, & Fergus, 2015; Gregor, Danihelka, Graves, Rezende, & Wierstra, 2015), caption generation (Karpathy & Fei-Fei, 2015; Vinyals, Toshev, Bengio, & Erhan, 2014; K. Xu et al., 2015), question answering (Sukhbaatar, Szlam, Weston, & Fergus, 2015; Weston, Chopra, & Bordes, 2015), and learning simple algorithms (Graves, Wayne, & Danihelka, 2014; Grefenstette, Hermann, Suleyman, & Blunsom, 2015); we discuss question answering and learning simple algorithms in Section 6.1. Yet, at least for image and caption generation, these tasks have been mostly studied in the big data setting that is at odds with the impressive human ability for generalizing from small data sets (although see Rezende, Mohamed, Danihelka, Gregor, & Wierstra, 2016, for a deep learning approach to the Character Challenge).
And it has been difficult to learn neural-network-style representations that effortlessly generalize to new tasks that they were not trained on (see Davis & Marcus, 2015; Marcus, 1998, 2001). What additional ingredients may be needed in order to rapidly learn more powerful and more general-purpose representations? A relevant case study is from our own work on the Characters Challenge (Section 3.1; Lake, 2014; Lake, Salakhutdinov, & Tenenbaum, 2015). People and various machine learning approaches were compared on their ability to learn new handwritten characters from the world's alphabets. In addition to evaluating several types of deep learning models, we developed an algorithm using Bayesian Program Learning (BPL) that represents concepts as simple stochastic programs, that is, structured procedures that generate new examples of a concept when executed (Figure 5A). These programs allow the model to express causal knowledge about how the raw data are formed, and the probabilistic semantics allow the model to handle noise and perform creative tasks. Structure sharing across concepts is accomplished by the compositional reuse of stochastic primitives that can combine in new ways to create new concepts.
Note that we are overloading the word "model" to refer to both the BPL framework as a whole (which is a generative model), as well as the individual probabilistic models (or concepts) that it infers from images to represent novel handwritten characters.

Figure 5: A causal, compositional model of handwritten characters. A) New types are generated compositionally by choosing primitive actions (color coded) from a library (i), combining these sub-parts (ii) to make parts (iii), and combining parts with relations to define simple programs (iv). These programs can create different tokens of a concept (v) that are rendered as binary images (vi). B) Probabilistic inference allows the model to generate new examples from just one example of a new concept, shown here in a visual Turing Test ("Human or Machine?"). An example image of a new concept is shown above each pair of grids. One grid was generated by 9 people and the other is 9 samples from the BPL model. Which grid in each pair (A or B) was generated by the machine? Answers by row: 1,2;1,1. Adapted from Lake, Salakhutdinov, and Tenenbaum (2015).
There is a hierarchy of models: a higher-level program that generates different types of concepts, which are themselves programs that can be run to generate tokens of a concept. Here, describing learning as "rapid model building" refers to the fact that BPL constructs generative models (lower-level programs) that produce tokens of a concept (Figure 5B). Learning models of this form allows BPL to perform a challenging one-shot classification task at human-level performance (Figure 1A-i) and to outperform current deep learning models such as convolutional networks (Koch, Zemel, & Salakhutdinov, 2015).7 The representations that BPL learns also enable it to generalize in other, more creative human-like ways, as evaluated using "visual Turing tests" (e.g., Figure 5B). These tasks include generating new examples (Figure 1A-ii and Figure 5B), parsing objects into their essential components (Figure 1A-iii), and generating new concepts in the style of a particular alphabet (Figure 1A-iv). The following sections discuss the three main ingredients
(compositionality, causality, and learning-to-learn) that were important to the success of this framework and that we believe are important to understanding human learning as rapid model building more broadly. While these ingredients fit naturally within a BPL or a probabilistic program induction framework, they could also be integrated into deep learning models and other types of machine learning algorithms, prospects we discuss in more detail below.

7 A new approach using convolutional "matching networks" achieves good one-shot classification performance when discriminating between characters from different alphabets (Vinyals, Blundell, Lillicrap, Kavukcuoglu, & Wierstra, 2016). It has not yet been directly compared with BPL, which was evaluated on one-shot classification with characters from the same alphabet.
# 4.2.1 Compositionality

Compositionality is the classic idea that new representations can be constructed through the combination of primitive elements. In computer programming, primitive functions can be combined together to create new functions, and these new functions can be further combined to create even more complex functions. This function hierarchy provides an efficient description of higher-level functions, like a part hierarchy for describing complex objects or scenes (Bienenstock, Geman, & Potter, 1997). Compositionality is also at the core of productivity: an infinite number of representations can be constructed from a finite set of primitives, just as the mind can think an infinite number of thoughts, utter or understand an infinite number of sentences, or learn new concepts from a seemingly infinite space of possibilities (Fodor, 1975; Fodor & Pylyshyn, 1988; Marcus, 2001; Piantadosi, 2011). Compositionality has been broadly influential in both AI and cognitive science, especially as it pertains to theories of object recognition, conceptual representation, and language. Here we focus on compositional representations of object concepts for illustration. Structural description models represent visual concepts as compositions of parts and relations, which provides a strong inductive bias for constructing models of new concepts (Biederman, 1987; Hummel & Biederman, 1992; Marr & Nishihara, 1978; van den Hengel et al., 2015; Winston, 1975). For instance, the novel two-wheeled vehicle in Figure 1B might be represented as two wheels connected by a platform, which provides the base for a post, which holds the handlebars, etc. Parts can themselves be composed of sub-parts, forming a "partonomy" of part-whole relationships (G. A. Miller & Johnson-Laird, 1976; Tversky & Hemenway, 1984). In the novel vehicle example, the parts and relations can be shared and reused from existing related concepts, such as cars, scooters, motorcycles, and unicycles. Since the parts and relations are themselves a product of previous learning, their facilitation of the construction of new models is also an example of learning-to-learn,
another ingredient that is covered below. While compositionality and learning-to-learn fit naturally together, there are also forms of compositionality that rely less on previous learning, such as the bottom-up parts-based representation of Hoffman and Richards (1984). Learning models of novel handwritten characters can be operationalized in a similar way. Handwritten characters are inherently compositional, where the parts are pen strokes and relations describe how these strokes connect to each other. Lake, Salakhutdinov, and Tenenbaum (2015) modeled these parts using an additional layer of compositionality, where parts are complex movements created from simpler sub-part movements. New characters can be constructed by combining parts, sub-parts, and relations in novel ways (Figure 5). Compositionality is also central to the construction of other types of symbolic concepts beyond characters, where new spoken words can be created through a novel combination of phonemes (Lake, Lee, Glass, & Tenenbaum, 2014) or a new gesture or dance move can be created through a combination of more primitive body movements.
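To illustrate the compositional structure being described, here is a toy generative sketch (loosely in the spirit of BPL, but far simpler than the actual model in Lake, Salakhutdinov, & Tenenbaum, 2015): a new character "type" is a small program built by sampling stroke primitives from a shared library and attaching them with simple relations, and running that program with motor noise yields distinct tokens of the same character. The primitive library, relations, and noise values are all illustrative assumptions.

```python
import random

# Shared library of sub-part primitives: short stroke templates given as
# sequences of (dx, dy) pen displacements. Purely illustrative.
PRIMITIVES = {
    "down":  [(0, -1), (0, -1)],
    "right": [(1, 0), (1, 0)],
    "arc":   [(1, 0), (1, -1), (0, -1)],
    "hook":  [(0, -1), (1, 0)],
}
RELATIONS = ["start", "end"]   # attach the next part at the start or end of the previous one

def sample_character_type(rng, n_parts=2):
    """Sample a new character 'type': a small program of parts plus relations."""
    parts = [rng.choice(list(PRIMITIVES)) for _ in range(n_parts)]
    relations = [rng.choice(RELATIONS) for _ in range(n_parts - 1)]
    return parts, relations

def render_token(character, rng, motor_noise=0.1):
    """Run the program with motor noise to produce one concrete token:
    a list of strokes, each a list of (x, y) pen positions."""
    parts, relations = character
    strokes, anchor = [], (0.0, 0.0)
    for i, name in enumerate(parts):
        x, y = anchor
        stroke = [(x, y)]
        for dx, dy in PRIMITIVES[name]:
            x += dx + rng.gauss(0, motor_noise)
            y += dy + rng.gauss(0, motor_noise)
            stroke.append((x, y))
        strokes.append(stroke)
        if i < len(relations):
            anchor = stroke[0] if relations[i] == "start" else stroke[-1]
    return strokes

rng = random.Random(0)
char = sample_character_type(rng)
print("program:", char)
for t in range(2):                      # two tokens of the same new character
    print(f"token {t}:", render_token(char, rng))
```

Because the primitives and relations are shared across characters, a learner that has acquired them from previous characters can assemble a plausible program for a brand-new character from very little data, which is the compositional reuse described above.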
An efficient representation for Frostbite should be similarly compositional and productive. A scene from the game is a composition of various object types, including birds, fish, ice floes, igloos, etc. (Figure 2). Representing this compositional structure explicitly is both more economical and better for generalization, as noted in previous work on object-oriented reinforcement learning (Diuk, Cohen, & Littman, 2008). Many repetitions of the same objects are present at different locations in the scene, and thus representing each as an identical instance of the same object with the same properties is important for efficient representation and quick learning of the game.
Figure 6: Perceiving scenes without intuitive physics, intuitive psychology, compositionality, and causality. Image captions are generated by a deep neural network (Karpathy & Fei-Fei, 2015) using code from github.com/karpathy/neuraltalk2; the captions shown are "a woman riding a horse on a dirt road," "an airplane is parked on the tarmac at an airport," and "a group of people standing on top of a beach." Image credits: Gabriel Villena Fernández (left), TVBS Taiwan / Agence France-Presse (middle) and AP Photo / Dave Martin (right). Similar examples using images from Reuters news can be found at twitter.com/interesting_jpg.
Further, new levels may contain different numbers and combinations of objects, where a compositional representation of objects, using intuitive physics and intuitive psychology as glue, would aid in making these crucial generalizations (Figure 2D). Deep neural networks have at least a limited notion of compositionality. Networks trained for object recognition encode part-like features in their deeper layers (Zeiler & Fergus, 2014), whereby the presentation of new types of objects can activate novel combinations of feature detectors. Similarly, a DQN trained to play Frostbite may learn to represent multiple replications of the same object with the same features, facilitated by the invariance properties of a convolutional neural network architecture. Recent work has shown how this type of compositionality can be made more explicit, where neural networks can be used for efficient inference in more structured generative models (both neural networks and 3D scene models) that explicitly represent the number of objects in a scene (Eslami et al., 2016). Beyond the compositionality inherent in parts, objects, and scenes, compositionality can also be important at the level of goals and sub-goals. Recent work on hierarchical DQNs shows that by providing explicit object representations to a DQN, and then defining sub-goals based on reaching those objects, DQNs can learn to play games with sparse rewards (such as Montezuma's Revenge) by combining these sub-goals together to achieve larger goals (Kulkarni, Narasimhan, Saeedi, & Tenenbaum, 2016). We look forward to seeing these new ideas continue to develop, potentially providing even richer notions of compositionality in deep neural networks that lead to faster and more flexible learning. To capture the full extent of the mind's compositionality, a model must include explicit representations of objects, identity, and relations, all while maintaining a notion of "coherence" when understanding novel configurations.
Coherence is related to our next principle, causality, which is discussed in the section that follows.

# 4.2.2 Causality

In concept learning and scene understanding, causal models represent hypothetical real-world processes that produce the perceptual observations. In control and reinforcement learning, causal models represent the structure of the environment, such as modeling state-to-state transitions or action/state-to-state transitions. Concept learning and vision models that utilize causality are usually generative (as opposed to discriminative; see Glossary in Table 1), but not every generative model is also causal. While a generative model describes a process for generating data, or at least assigns a probability distribution over possible data points, this generative process may not resemble how the data are produced in the real world. Causality refers to the subclass of generative models that resemble, at an abstract level, how the data are actually generated. While generative neural networks such as Deep Belief Networks (Hinton, Osindero, & Teh, 2006) or variational auto-encoders (Gregor, Besse, Rezende, Danihelka, & Wierstra, 2016; Kingma, Rezende, Mohamed, & Welling, 2014) may generate compelling handwritten digits, they mark one end of the "causality spectrum," since the steps of the generative process bear little resemblance to steps in the actual process of writing. In contrast, the generative model for characters using Bayesian Program Learning (BPL) does resemble the steps of writing, although even more causally faithful models are possible. Causality has been influential in theories of perception. "Analysis-by-synthesis" theories of perception maintain that sensory data can be more richly represented by modeling the process that generated it (Bever & Poeppel, 2010; Eden, 1962; Halle & Stevens, 1962; Neisser, 1966). Relating data to its causal source provides strong priors for perception and learning, as well as a richer basis for generalizing in new ways and to new tasks. The canonical examples of this approach are speech and visual perception.
For instance, Liberman, Cooper, Shankweiler, and Studdert-Kennedy (1967) argued that the richness of speech perception is best explained by inverting the production plan, at the level of vocal tract movements, in order to explain the large amounts of acoustic variability and the blending of cues across adjacent phonemes. As discussed, causality does not have to be a literal inversion of the actual generative mechanisms, as proposed in the motor theory of speech. For the BPL model of handwritten characters, causality is operationalized by treating concepts as motor programs, or abstract causal descriptions of how to produce examples of the concept, rather than concrete
configurations of specific muscles (Figure 5A). Causality is an important factor in the model's success in classifying and generating new examples after seeing just a single example of a new concept (Lake, Salakhutdinov, & Tenenbaum, 2015) (Figure 5B). Causal knowledge has also been shown to influence how people learn new concepts; providing a learner with different types of causal knowledge changes how they learn and generalize. For example, the structure of the causal network underlying the features of a category influences how people categorize new examples (Rehder, 2003; Rehder & Hastie, 2001). Similarly, as related to the Characters Challenge, the way people learn to write a novel handwritten character influences later perception and categorization (Freyd, 1983, 1987). To explain the role of causality in learning, conceptual representations have been likened to intuitive theories or explanations, providing the glue that lets core features stick while other equally applicable features wash away (Murphy & Medin, 1985). Borrowing examples from Murphy and Medin (1985), the feature "flammable" is more closely attached to wood than money due to the
underlying causal roles of the concepts, even though the feature is equally applicable to both; these causal roles derive from the functions of objects. Causality can also glue some features together by relating them to a deeper underlying cause, explaining why some features such as "can fly," "has wings," and "has feathers" co-occur across objects while others do not. Beyond concept learning, people also understand scenes by building causal models. Human-level scene understanding involves composing a story that explains the perceptual observations, drawing upon and integrating the ingredients of intuitive physics, intuitive psychology, and compositionality. Perception without these ingredients, and absent the causal glue that binds them together, can lead to revealing errors. Consider image captions generated by a deep neural network (Figure 6; Karpathy & Fei-Fei, 2015). In many cases, the network gets the key objects in a scene correct but fails to understand the physical forces at work, the mental states of the people, or the causal relationships between the objects; in other words, it does not build the right causal model of the data. There have been steps towards deep neural networks and related approaches that learn causal models. Lopez-Paz, Muandet, Schölkopf, and Tolstikhin (2015) introduced a discriminative, data-driven framework for distinguishing the direction of causality from examples. While it outperforms existing methods on various causal prediction tasks, it is unclear how to apply the approach to inferring rich hierarchies of latent causal variables, as needed for the Frostbite Challenge and (especially) the Characters Challenge. Graves (2014) learned a generative model of cursive handwriting using a recurrent neural network trained on handwriting data. While it synthesizes impressive examples of handwriting in various styles, it requires a large training corpus and has not been applied to other tasks. The DRAW network performs both recognition and generation of handwritten digits using recurrent neural networks with a window of attention, producing a limited circular area of the image at each time step (Gregor et al., 2015). A more recent variant of DRAW was applied to generating examples of a novel character from just a single training example (Rezende et al., 2016).
While the model demonstrates an impressive ability to make plausible generalizations that go beyond the training examples, it generalizes too broadly in other cases, in ways that are not especially human-like. It is not clear that it could yet pass any of the "visual Turing tests" in Lake, Salakhutdinov, and Tenenbaum (2015) (Figure 5B), although we hope DRAW-style networks will continue to be extended and enriched, and could be made to pass these tests. Incorporating causality may greatly improve these deep learning models; they were trained without access to causal data about how characters are actually produced, and without any incentive to learn the true causal process. An attentional window is only a crude approximation to the true causal process of drawing with a pen, and in Rezende et al. (2016) the attentional window is not pen-like at all, although a more accurate pen model could be incorporated. We anticipate that these sequential generative neural networks could make sharper one-shot inferences
(with the goal of tackling the full Characters Challenge) by incorporating additional causal, compositional, and hierarchical structure (and by continuing to utilize learning-to-learn, described next), potentially leading to a more computationally efficient and neurally grounded variant of the BPL model of handwritten characters (Figure 5). A causal model of Frostbite would have to be more complex, gluing together object representations and explaining their interactions with intuitive physics and intuitive psychology, much like the game engine that generates the game dynamics and ultimately the frames of pixel images. Inference is
the process of inverting this causal generative model, explaining the raw pixels as objects and their interactions, such as the agent stepping on an ice floe to deactivate it or a crab pushing the agent into the water (Figure 2). Deep neural networks could play a role in two ways: serving as a bottom-up proposer to make probabilistic inference more tractable in a structured generative model (Section 4.3.1) or by serving as the causal generative model if imbued with the right set of ingredients.

# 4.2.3 Learning-to-learn

When humans or machines make inferences that go far beyond the data, strong prior knowledge (or inductive biases or constraints) must be making up the difference (Geman et al., 1992; Griffiths, Chater, Kemp, Perfors, & Tenenbaum, 2010; Tenenbaum, Kemp, Griffiths, & Goodman, 2011). One way people acquire this prior knowledge is through "learning-to-learn," a term introduced by Harlow (1949) and closely related to the machine learning notions of "transfer learning," "multi-task learning," or "representation learning." These terms refer to ways that learning a new task (or a new concept) can be accelerated through previous or parallel learning of other related tasks (or other related concepts). The strong priors, constraints, or inductive bias needed to learn a particular task quickly are often shared to some extent with other related tasks. A range of mechanisms have been developed to adapt the learner's inductive bias as they learn specific tasks, and then apply these inductive biases to new tasks. In hierarchical Bayesian modeling (Gelman, Carlin, Stern, & Rubin, 2004), a general prior on concepts is shared by multiple specific concepts, and the prior itself is learned over the course of learning the specific concepts (Salakhutdinov, Tenenbaum, & Torralba, 2012, 2013). These models have been used to explain the dynamics of human learning-to-learn in many areas of cognition, including word learning, causal learning, and learning intuitive theories of physical and social domains (Tenenbaum et al., 2011).
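A minimal numerical sketch of the hierarchical Bayesian idea follows (our illustration, not the cited models): background concepts are used to estimate a shared prior over concept means together with a typical within-concept variance, and that learned prior then licenses a sensible inference about a brand-new concept from a single example. The one-dimensional Gaussian setting and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Background concepts: each concept has a true mean drawn from a shared population,
# and contributes 20 observed examples. All numbers are illustrative.
true_pop_mean, true_pop_sd, within_sd = 5.0, 2.0, 0.5
background = [rng.normal(rng.normal(true_pop_mean, true_pop_sd), within_sd, size=20)
              for _ in range(30)]

# Learning-to-learn: estimate the shared prior from the background concepts.
concept_means = np.array([c.mean() for c in background])
obs_var = float(np.mean([c.var(ddof=1) for c in background]))   # typical within-concept variance
prior_mean = float(concept_means.mean())
prior_var = max(float(concept_means.var(ddof=1)) - obs_var / 20, 1e-6)  # remove sampling noise

def one_shot_posterior(x_new):
    """Posterior over a new concept's mean after a single example, using the
    learned prior (standard Gaussian-Gaussian conjugate update)."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + x_new / obs_var)
    return post_mean, post_var

mean, var = one_shot_posterior(9.0)   # one example from a brand-new concept
print(f"learned prior: mean={prior_mean:.2f}, variance={prior_var:.2f}")
print(f"typical within-concept variance: {obs_var:.2f}")
print(f"one-shot posterior for the new concept: mean={mean:.2f}, sd={var ** 0.5:.2f}")
```

Note how the single new example is shrunk toward the learned prior mean, with the amount of shrinkage set by the typical within-concept variability estimated from the background concepts: the prior, not the lone example, is what tells the learner how far to generalize.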
In machine vision, for deep convolutional networks or other discriminative methods that form the core of recent recognition systems, learning-to-learn can occur through the sharing of features between the models learned for old objects (or old tasks) and the models learned for new objects (or new tasks) (Anselmi et al., 2016; Baxter, 2000; Bottou, 2014; Lopez-Paz, Bottou, Schölkopf, & Vapnik, 2016; Rusu et al., 2016; Salakhutdinov, Torralba, & Tenenbaum, 2011; Srivastava & Salakhutdinov, 2013; Torralba, Murphy, & Freeman, 2007; Zeiler & Fergus, 2014). Neural networks can also learn-to-learn by optimizing hyperparameters, including the form of their weight update rule (Andrychowicz et al., 2016), over a set of related tasks. While transfer learning and multi-task learning are already important themes across AI, and in deep learning in particular, they have not yet led to systems that learn new tasks as rapidly and flexibly
as humans do. Capturing more human-like learning-to-learn dynamics in deep networks and other machine learning approaches could facilitate much stronger transfer to new tasks and new problems. To gain the full benefit that humans get from learning-to-learn, however, AI systems might first need to adopt the more compositional (or more language-like, see Section 5) and causal forms of representations that we have argued for above. We can see this potential in both of our Challenge problems. In the Characters Challenge as presented in Lake, Salakhutdinov, and Tenenbaum (2015), all viable models use "pre-training"
on many character concepts in a background set of alphabets to tune the representations they use to learn new character concepts in a test set of alphabets. But to perform well, current neural network approaches require much more pre-training than do people or our Bayesian program learning approach, and they are still far from solving the Characters Challenge.8 We cannot be sure how people get to the knowledge they have in this domain, but we do understand how this works in BPL, and we think people might be similar. BPL transfers readily to new concepts because it learns about object parts, sub-parts, and relations, capturing learning about what each concept is like and what concepts are like in general. It is crucial that learning-to-learn occurs at multiple levels of the hierarchical generative process. Previously learned primitive actions and larger generative pieces can be re-used and re-combined to define new generative models for new characters (Figure 5A). Further transfer occurs by learning about the typical levels of variability within a typical generative model; this provides knowledge about how far and in what ways to generalize when we have seen only one example of a new character, which on its own could not possibly carry any information about variance.
BPL could also benefit from deeper forms of learning-to-learn than it currently does: some of the important structure it exploits to generalize well is built into the prior and not learned from the background pre-training, whereas people might learn this knowledge, and ultimately a human-like machine learning system should as well. Analogous learning-to-learn occurs for humans in learning many new object models, in vision and cognition: consider the novel two-wheeled vehicle in Figure 1B, where learning-to-learn can operate through the transfer of previously learned parts and relations (sub-concepts such as wheels, motors, handlebars, attached, powered by, etc.) that reconfigure compositionally to create a model of the new concept. If deep neural networks could adopt similarly compositional, hierarchical, and causal representations, we expect they might benefit more from learning-to-learn. In the Frostbite Challenge, and in video games more generally, there is a similar interdependence between the form of the representation and the effectiveness of learning-to-learn. People seem to transfer knowledge at multiple levels, from low-level perception to high-level strategy, exploiting compositionality at all levels. Most basically, they immediately parse the game environment into objects, types of objects, and causal relations between them. People also understand that video games like this have goals, which often involve approaching or avoiding objects based on their type. Whether the person is a child or a seasoned gamer, it seems obvious that interacting with the birds and fish will change the game state in some way, either good or bad, because video games typically yield costs or rewards for these types of interactions (e.g., dying or points). These types of hypotheses can be quite specific and rely on prior knowledge: when the polar bear first appears and tracks the agent's location during advanced levels (Figure 2D), an attentive learner is sure to avoid it. Depending on the level, ice floes can be spaced far apart (Figure 2A-C) or close together (Figure 2D), suggesting the agent may be able to cross some gaps but not others.
8 Humans typically have direct experience with only one or a few alphabets, and even with related drawing experience, this likely amounts to the equivalent of a few hundred character-like visual concepts at most. For BPL, pre-training with characters in only five alphabets (for around 150 character types in total) is sufficient to perform human-level one-shot classification and generation of new examples. The best neural network classifiers (deep convolutional networks) have error rates approximately five times higher than humans when pre-trained with five alphabets (23% versus 4% error), and two to three times higher when pre-training on six times as much data (30 alphabets) (Lake, Salakhutdinov, & Tenenbaum, 2015). The current need for extensive pre-training is illustrated for deep generative models by Rezende et al. (2016), who present extensions of the DRAW architecture capable of one-shot learning.
In this way, general world knowledge and previous video games may help inform exploration and generalization in new scenarios, helping people learn maximally from a single mistake or avoid mistakes altogether. Deep reinforcement learning systems for playing Atari games have had some impressive successes in transfer learning, but they still have not come close to learning to play new games as quickly as humans can. For example, Parisotto et al. (2016) presents the "Actor-mimic" algorithm that first learns 13 Atari games by watching an expert network play and trying to mimic the expert network action selection and/or internal states (for about four million frames of experience each, or 18.5 hours per game). This algorithm can then learn new games faster than a randomly initialized DQN:
Scores that might have taken four or five million frames of learning to reach might now be reached after one or two million frames of practice. But anecdotally we find that humans can still reach these scores with a few minutes of practice, requiring far less experience than the DQNs. In sum, the interaction between representation and previous experience may be key to building machines that learn as fast as people do. A deep learning system trained on many video games may not, by itself, be enough to learn new games as quickly as people do. Yet if such a system aims to learn compositionally structured causal models of each game – built on a foundation of intuitive physics and psychology – it could transfer knowledge more efficiently and thereby learn new games much more quickly.

# 4.3 Thinking Fast

The previous section focused on learning rich models from sparse data and proposed ingredients for achieving these human-like learning abilities. These cognitive abilities are even more striking when considering the speed of perception and thought – the amount of time required to understand a scene, think a thought, or choose an action. In general, richer and more structured models require more complex (and slower) inference algorithms – similar to how complex models require more data – making the speed of perception and thought all the more remarkable.

The combination of rich models with efficient inference suggests another way psychology and neuroscience may usefully inform AI. It also suggests an additional way to build on the successes of deep learning, where efficient inference and scalable learning are important strengths of the approach. This section discusses possible paths towards resolving the conflict between fast inference and structured representations, including Helmholtz-machine-style approximate inference in generative models (Dayan, Hinton, Neal, & Zemel, 1995; Hinton et al., 1995) and cooperation between model-free and model-based reinforcement learning systems.

# 4.3.1 Approximate inference in structured models

Hierarchical Bayesian models operating over probabilistic programs (Goodman et al., 2008; Lake, Salakhutdinov, & Tenenbaum, 2015; Tenenbaum et al., 2011) are equipped to deal with theory-like structures and rich causal representations of the world, yet there are formidable algorithmic challenges for efficient inference.
Computing a probability distribution over an entire space of programs is usually intractable, and often even finding a single high-probability program poses an intractable search problem. In contrast, while representing intuitive theories and structured causal models is less natural in deep neural networks, recent progress has demonstrated the remarkable effectiveness of gradient-based learning in high-dimensional parameter spaces. A complete account of learning and inference must explain how the brain does so much with limited computational resources (Gershman, Horvitz, & Tenenbaum, 2015; Vul, Goodman, Griffiths, & Tenenbaum, 2014).

Popular algorithms for approximate inference in probabilistic machine learning have been proposed as psychological models (see Griffiths, Vul, & Sanborn, 2012, for a review). Most prominently, it has been proposed that humans can approximate Bayesian inference using Monte Carlo methods, which stochastically sample the space of possible hypotheses and evaluate these samples according to their consistency with the data and prior knowledge (Bonawitz, Denison, Griffiths, & Gopnik, 2014; Gershman, Vul, & Tenenbaum, 2012; T. D. Ullman, Goodman, & Tenenbaum, 2012; Vul et al., 2014). Monte Carlo sampling has been invoked to explain behavioral phenomena ranging from children's response variability (Bonawitz et al., 2014) to garden-path effects in sentence processing (Levy, Reali, & Griffiths, 2009) and perceptual multistability (Gershman et al., 2012; Moreno-Bote, Knill, & Pouget, 2011). Moreover, we are beginning to understand how such methods could be implemented in neural circuits (Buesing, Bill, Nessler, & Maass, 2011; Huang & Rao, 2014; Pecevski, Buesing, & Maass, 2011).9

While Monte Carlo methods are powerful and come with asymptotic guarantees, it is challenging to make them work on complex problems like program induction and theory learning. When the hypothesis space is vast and only a few hypotheses are consistent with the data, how can good models be discovered without exhaustive search?
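To make the challenge concrete, the sketch below shows the kind of Monte Carlo hypothesis search described above, reduced to a toy problem. The hypotheses, prior, and likelihood are invented placeholders rather than anything from the studies cited here; the point is only to illustrate how stochastic proposals are accepted or rejected according to their consistency with the data and prior knowledge, and why this can be slow when good hypotheses are rare.

```python
# Toy Metropolis-Hastings search over a small discrete hypothesis space.
# Hypotheses, prior, and likelihood are illustrative placeholders only.
import math
import random

hypotheses = ["contact_causes_loss", "contact_causes_gain", "contact_is_neutral"]
log_prior = {h: math.log(1.0 / len(hypotheses)) for h in hypotheses}

def log_likelihood(h, data):
    """Score a hypothesis by how many observed outcomes it predicts."""
    predicted = {"contact_causes_loss": "lose",
                 "contact_causes_gain": "gain",
                 "contact_is_neutral": "nothing"}[h]
    hits = sum(1 for outcome in data if outcome == predicted)
    return hits * math.log(0.9) + (len(data) - hits) * math.log(0.1)

def posterior_samples(data, n_steps=5000):
    """Random walk over hypotheses; accept moves that explain the data better."""
    current = random.choice(hypotheses)
    samples = []
    for _ in range(n_steps):
        proposal = random.choice(hypotheses)               # symmetric proposal
        log_p_cur = log_prior[current] + log_likelihood(current, data)
        log_p_new = log_prior[proposal] + log_likelihood(proposal, data)
        if math.log(random.random()) < log_p_new - log_p_cur:
            current = proposal                             # accept
        samples.append(current)
    return samples

data = ["lose", "lose", "gain", "lose"]                    # a few observations
samples = posterior_samples(data)
print(max(set(samples), key=samples.count))                # most-visited hypothesis
```

With only three hypotheses the random walk succeeds almost immediately; the difficulty discussed in the text arises when the space of candidate programs or theories is astronomically large and proposals are rarely consistent with the data.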
In at least some domains, people may not have an especially clever solution to this problem, instead grappling with the full combinatorial complexity of theory learning (T. D. Ullman et al., 2012). Discovering new theories can be slow and arduous, as testiï¬ ed by the long timescale of cognitive development, and learning in a saltatory fashion (rather than through gradual adaptation) is characteristic of aspects of human intelligence, including discovery and insight during development (L. Schulz, 2012), problem-solving (Sternberg & Davidson, 1995), and epoch-making discoveries in scientiï¬ c research (Langley, Bradshaw, Simon, & Zytkow, 1987). Discovering new theories can also happen much more quickly â A person learning the rules of Frostbite will probably undergo a loosely ordered sequence of â Aha!â moments: they will learn that jumping on ice ï¬ oes causes them to change color, changing the color of ice ï¬ oes causes an igloo to be constructed piece-by-piece, that birds make you lose points, that ï¬ sh make you gain points, that you can change the direction of ice ï¬ oe at the cost of one igloo piece, and so on. These little fragments of a â Frostbite theoryâ are assembled to form a causal understanding of the game relatively quickly, in what seems more like a guided process than arbitrary proposals in a Monte Carlo inference scheme. Similarly, as described in the Characters Challenge, people can quickly infer motor programs to draw a new character in a similarly guided processes. For domains where program or theory learning happens quickly, it is possible that people employ inductive biases not only to evaluate hypotheses, but also to guide hypothesis selection.
9In the interest of brevity, we do not discuss here another important vein of work linking neural circuits to variational approximations (Bastos et al., 2012), which have received less attention in the psychological literature.

L. Schulz (2012) has suggested that abstract structural properties of problems contain information about the abstract forms of their solutions. Even without knowing the answer to the question "Where is the deepest point in the Pacific Ocean?" one still knows that the answer must be a location on a
map. The answer "20 inches" to the question "What year was Lincoln born?" can be invalidated a priori, even without knowing the correct answer. In recent experiments, Tsividis, Tenenbaum, and Schulz (2015) found that children can use high-level abstract features of a domain to guide hypothesis selection, by reasoning about distributional properties like the ratio of seeds to flowers, and dynamical properties like periodic or monotonic relationships between causes and effects (see also Magid, Sheskin, & Schulz, 2015).

How might efficient mappings from questions to a plausible subset of answers be learned? Recent work in AI spanning both deep learning and graphical models has attempted to tackle this challenge by
"amortizing" probabilistic inference computations into an efficient feed-forward mapping (Eslami, Tarlow, Kohli, & Winn, 2014; Heess, Tarlow, & Winn, 2013; A. Mnih & Gregor, 2014; Stuhlmüller, Taylor, & Goodman, 2013). We can also think of this as "learning to do inference," which is independent from the ideas of learning as model building discussed in the previous section. These feed-forward mappings can be learned in various ways, for example, using paired generative/recognition networks (Dayan et al., 1995; Hinton et al., 1995) and variational optimization (Gregor et al., 2015; A. Mnih & Gregor, 2014; Rezende, Mohamed, & Wierstra, 2014) or nearest-neighbor density estimation (Kulkarni, Kohli, Tenenbaum, & Mansinghka, 2015; Stuhlmüller et al., 2013). One implication of amortization is that solutions to different problems will become correlated due to the sharing of amortized computations; some evidence for inferential correlations in humans was reported by Gershman and Goodman (2014). This trend is an avenue of potential integration of deep learning models with probabilistic models and probabilistic programming: training neural networks to help perform probabilistic inference in a generative model or a probabilistic program (Eslami et al., 2016; Kulkarni, Whitney, Kohli, & Tenenbaum, 2015; Yildirim, Kulkarni, Freiwald, & Tenenbaum, 2015). Another avenue for potential integration is through differentiable programming (Dalrymple, 2016) – by ensuring that the program-like hypotheses are differentiable and thus learnable via gradient descent – a possibility discussed in the concluding section (Section 6.1).
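As an illustration of what "learning to do inference" can mean in practice, here is a minimal sketch under our own simplifying assumptions (it is not the implementation used in the papers cited above): a toy generative model produces (latent, observation) pairs, and a feed-forward recognition model is trained on those simulated pairs so that, afterwards, inference is a single cheap forward pass rather than a per-example search.

```python
# Amortized inference sketch: train a recognizer on samples from a generative model.
import numpy as np

rng = np.random.default_rng(0)

def generative_model(n):
    """Latent class z in {0, 1}; observation x is a noisy 2-D feature of z."""
    z = rng.integers(0, 2, size=n)
    means = np.array([[-1.0, -1.0], [1.0, 1.0]])
    x = means[z] + 0.5 * rng.standard_normal((n, 2))
    return x, z

# "Sleep" phase: simulate data from the generative model.
x_train, z_train = generative_model(2000)

# Recognition model: logistic regression trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(x_train @ w + b)))           # predicted P(z=1 | x)
    w -= 0.5 * x_train.T @ (p - z_train) / len(z_train)
    b -= 0.5 * np.mean(p - z_train)

# "Wake" phase: inference is now a single feed-forward pass.
x_test, z_test = generative_model(200)
p_test = 1.0 / (1.0 + np.exp(-(x_test @ w + b)))
print("recognition accuracy:", np.mean((p_test > 0.5) == z_test))
```

The same pattern scales up in the cited work, where the recognition model is a deep network and the generative model is a probabilistic program or a graphics engine.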
# 4.3.2 Model-based and model-free reinforcement learning

The DQN introduced by V. Mnih et al. (2015) used a simple form of model-free reinforcement learning in a deep neural network that allows for fast selection of actions. There is indeed substantial evidence that the brain uses similar model-free learning algorithms in simple associative learning or discrimination learning tasks (see Niv, 2009, for a review). In particular, the phasic firing of midbrain dopaminergic neurons is qualitatively (Schultz, Dayan, & Montague, 1997) and quantitatively (Bayer & Glimcher, 2005) consistent with the reward prediction error that drives updating of model-free value estimates.

Model-free learning is not, however, the whole story. Considerable evidence suggests that the brain also has a model-based learning system, responsible for building a "cognitive map" of the environment and using it to plan action sequences for more complex tasks (Daw, Niv, & Dayan, 2005; Dolan & Dayan, 2013). Model-based planning is an essential ingredient of human intelligence, enabling flexible adaptation to new tasks and goals; it is where all of the rich model-building abilities discussed in the previous sections earn their value as guides to action. As we argued in our discussion of Frostbite, one can design numerous variants of this simple video game that are
identical except for the reward function – that is, governed by an identical environment model of state-action-dependent transitions. We conjecture that a competent Frostbite player can easily shift behavior appropriately, with little or no additional learning, and it is hard to imagine a way of doing that other than having a model-based planning approach in which the environment model can be modularly combined with arbitrary new reward functions and then deployed immediately for planning. One boundary condition on this flexibility is the fact that the skills become "habitized" with routine application, possibly
reflecting a shift from model-based to model-free control. This shift may arise from a rational arbitration between learning systems to balance the trade-off between flexibility and speed (Daw et al., 2005; Keramati, Dezfouli, & Piray, 2011). Similarly to how probabilistic computations can be amortized for efficiency (see previous section), plans can be amortized into cached values by allowing the model-based system to simulate training data for the model-free system (Sutton, 1990). This process might occur offline (e.g., in dreaming or quiet wakefulness), suggesting a form of consolidation in reinforcement learning (Gershman, Markman, & Otto, 2014). Consistent with the idea of cooperation between learning systems, a recent experiment demonstrated that model-based behavior becomes automatic over the course of training (Economides, Kurth-Nelson, Lübbert, Guitart-Masip, & Dolan, 2015).
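A minimal Dyna-style sketch (our own toy example, in the spirit of Sutton, 1990) shows what this cooperation can look like: a small environment model generates simulated transitions, and a model-free learner caches their value using a reward-prediction-error update, so that acting later requires no planning at all.

```python
# Dyna-style sketch: a world model simulates experience for a model-free Q-learner.
import random

n_states, n_actions, goal = 5, 2, 4

def model(state, action):
    """Tiny deterministic world model: action 1 moves right, action 0 moves left."""
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    return next_state, (1.0 if next_state == goal else 0.0)

Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma = 0.1, 0.95

for _ in range(5000):                               # offline replay of simulated experience
    s, a = random.randrange(n_states), random.randrange(n_actions)
    s2, r = model(s, a)
    td_error = r + gamma * max(Q[s2]) - Q[s][a]     # reward-prediction error
    Q[s][a] += alpha * td_error                     # model-free update on simulated data

policy = [max(range(n_actions), key=lambda a, s=s: Q[s][a]) for s in range(n_states)]
print(policy)                                       # cached policy now acts without planning
```

In this toy the "model" is given; in the brain and in Dyna-like agents it would itself be learned from experience, and replay might be interleaved with real interaction.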
Thus, a marriage of flexibility and efficiency might be achievable if we use the human reinforcement learning systems as guidance.

Intrinsic motivation also plays an important role in human learning and behavior (Berlyne, 1966; Deci & Ryan, 1975; Harlow, 1950). While much of the previous discussion assumes the standard view of behavior as seeking to maximize reward and minimize punishment, all externally provided rewards are reinterpreted according to the "internal value" of the agent, which may depend on the current goal and mental state. There may also be an intrinsic drive to reduce uncertainty and construct models of the environment (Edelman, 2015; Schmidhuber, 2015), closely related to learning-to-learn and multi-task learning. Deep reinforcement learning is only just starting to address intrinsically motivated learning (Kulkarni et al., 2016; Mohamed & Rezende, 2015).
# 5 Responses to common questions

In discussing the arguments in this paper with colleagues, three lines of questioning or critiques have come up frequently. We think it is helpful to address these points directly, to maximize the potential for moving forward together.

1. Comparing the learning speeds of humans and neural networks on specific tasks is not meaningful, because humans have extensive prior experience.

It may seem unfair to compare neural networks and humans on the amount of training experience required to perform a task, such as learning to play new Atari games or learning new handwritten characters, when humans have had extensive prior experience that these networks have not benefited
from. People have had many hours playing other games, and experience reading or writing many other handwritten characters, not to mention experience in a variety of more loosely related tasks. If neural networks were "pre-trained" on the same experience, the argument goes, then they might generalize similarly to humans when exposed to novel tasks.

This has been the rationale behind multi-task learning or transfer learning, a strategy with a long history that has shown some promising results recently with deep networks (e.g., Donahue et al., 2013; Luong, Le, Sutskever, Vinyals, & Kaiser, 2015; Parisotto et al., 2016). Furthermore, some deep learning advocates argue, the human brain effectively benefits from even more experience through evolution. If deep learning researchers see themselves as trying to capture the equivalent of humans' collective evolutionary experience, this would be equivalent to a truly immense "pre-training" phase.
We agree that humans have a much richer starting point than neural networks when learning most new tasks, including learning a new concept or to play a new video game. That is the point of the â developmental start-up softwareâ and other building blocks that we argued are key to creating this richer starting point. We are less committed to a particular story regarding the origins of the ingredients, including the relative roles of genetically programmed and experience- driven developmental mechanisms in building these components in early infancy. Either way, we see them as fundamental building blocks for facilitating rapid learning from sparse data. Learning-to-learn across multiple tasks is conceivably one route to acquiring these ingredients, but simply training conventional neural networks on many related tasks may not be suï¬ cient to generalize in human-like ways for novel tasks. As we argued in Section 4.2.3, successful learning- to-learn â or at least, human-level transfer learning â is enabled by having models with the right representational structure, including the other building blocks discussed in this paper. Learning- to-learn is a powerful ingredient, but it can be more powerful when operating over compositional representations that capture the underlying causal structure of the environment, while also building on the intuitive physics and psychology. Finally, we recognize that some researchers still hold out hope that if only they can just get big enough training datasets, suï¬ ciently rich tasks, and enough computing power â far beyond what has been tried out so far â then deep learning methods might be suï¬ cient to learn representations equivalent to what evolution and learning provides humans with. We can sympathize with that hope and believe it deserves further exploration, although we are not sure it is a realistic one. We understand in principle how evolution could build a brain with the cognitive ingredients we discuss here.
Stochastic hill-climbing is slow â it may require massively parallel exploration, over millions of years with innumerable dead-ends â but it can build complex structures with complex functions if we are willing to wait long enough. In contrast, trying to build these representations from scratch using backpropagation, deep Q-learning or any stochastic gradient-descent weight update rule in a ï¬ xed network architecture may be unfeasible regardless of how much training data are available. To build these representations from scratch might require exploring fundamental structural variations in the networkâ s architecture, which gradient-based learning in weight space is not prepared to do. Although deep learning researchers do explore many such architectural variations, and have been devising increasingly clever and powerful ones recently, it is the researchers who are driving and directing this process. Exploration and creative innovation in the space of network architectures have not yet been made algorithmic. Perhaps they could, using genetic programming methods (Koza, 1992) or other structure-search algorithms (Yamins et al., 2014). We think this would be a fascinating and promising direction to explore, but we may have to acquire more patience than machine learning researchers typically express with their algorithms: the dynamics of structure-search may look much more like the slow random hill-climbing of evolution than the smooth, methodical progress of stochastic gradient-descent. An alternative strategy is to
build in appropriate infant-like knowledge representations and core ingredients as the starting point for our learning-based AI systems, or to build learning systems with strong inductive biases that guide them in this direction. Regardless of which way an AI developer chooses to go, our main points are orthogonal to this objection. There is a set of core cognitive ingredients for human-like learning and thought. Deep learning models could incorporate these ingredients through some combination of additional structure and perhaps additional learning mechanisms, but for the most part have yet to do so. Any approach to human-like AI, whether based on deep learning or not, is likely to gain from incorporating these ingredients.
2. Biological plausibility suggests theories of intelligence should start with neural networks.

We have focused on how cognitive science can motivate and guide efforts to engineer human-like AI, in contrast to some advocates of deep neural networks who cite neuroscience for inspiration. Our approach is guided by a pragmatic view that the clearest path to a computational formalization of human intelligence comes from understanding the "software" before the "hardware." In the case of this article, we proposed key ingredients of this software in previous sections.
Nonetheless, a cognitive approach to intelligence should not ignore what we know about the brain. Neuroscience can provide valuable inspirations for both cognitive models and AI researchers: the centrality of neural networks and model-free reinforcement learning in our proposals for "Thinking fast" (Section 4.3) are prime exemplars. Neuroscience can also in principle impose constraints on cognitive accounts, both at the cellular and systems level. If deep learning embodies brain-like computational mechanisms and those mechanisms are incompatible with some cognitive theory, then this is an argument against that cognitive theory and in favor of deep learning.
Unfortunately, what we "know" about the brain is not all that clear-cut. Many seemingly well-accepted ideas regarding neural computation are in fact biologically dubious, or uncertain at best – and thus should not disqualify cognitive ingredients that pose challenges for implementation within that approach. For example, most neural networks use some form of gradient-based (e.g., backpropagation) or Hebbian learning. It has long been argued, however, that backpropagation is not biologically plausible; as Crick (1989) famously pointed out, backpropagation seems to require that information be transmitted backwards along the axon, which does not fit with realistic models of neuronal function (although recent models circumvent this problem in various ways; see Liao, Leibo, & Poggio, 2015; Lillicrap, Cownden, Tweed, & Akerman, 2014; Scellier & Bengio, 2016). This has not prevented backpropagation from being put to good use in connectionist models of cognition or in building deep neural networks for AI. Neural network researchers must regard it as a very good thing, in this case, that concerns of biological plausibility did not hold back research on this particular algorithmic approach to learning.10 We strongly agree: Although neuroscientists have not found any mechanisms for implementing backpropagation in the brain, neither have they produced definitive evidence against it. The existing data simply offer little constraint either way, and backpropagation has been of obviously great value in engineering today's best pattern recognition systems.

10Michael Jordan made this point forcefully in his 2015 speech accepting the Rumelhart Prize.
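To give a sense of how these recent models circumvent the problem, the sketch below implements a toy version of feedback alignment in the spirit of Lillicrap et al. (2014): the backward pass carries the error through a fixed random matrix B instead of the transpose of the forward weights, so no information needs to travel back along the same synapses. The network, data, and hyperparameters here are our own invented toy setup, not the published experiments.

```python
# Feedback-alignment sketch: random fixed feedback weights replace W2's transpose.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden, n_out = 4, 16, 1

X = rng.standard_normal((256, n_in))
Y = np.tanh(X @ rng.standard_normal((n_in, n_out)))    # toy regression targets

W1 = 0.1 * rng.standard_normal((n_in, n_hidden))
W2 = 0.1 * rng.standard_normal((n_hidden, n_out))
B = rng.standard_normal((n_out, n_hidden))              # fixed random feedback matrix

lr = 0.01
for _ in range(2000):
    pre = X @ W1
    h = np.maximum(pre, 0.0)                            # ReLU hidden layer
    err = h @ W2 - Y                                    # output error
    grad_W2 = h.T @ err / len(X)
    delta_h = (err @ B) * (pre > 0)                     # backprop would use err @ W2.T here
    grad_W1 = X.T @ delta_h / len(X)
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

print("final mse:", float(np.mean((np.maximum(X @ W1, 0) @ W2 - Y) ** 2)))
```

Despite the "wrong" backward weights, the forward weights tend to align with B over training, which is why learning can still succeed on simple problems.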
Hebbian learning is another case in point. In the form of long-term potentiation (LTP) and spike-timing dependent plasticity (STDP), Hebbian learning mechanisms are often cited as biologically supported (Bi & Poo, 2001). However, the cognitive significance of any biologically grounded form of Hebbian learning is unclear. Gallistel and Matzel (2013) have persuasively argued that the critical interstimulus interval for LTP is orders of magnitude smaller than the intervals that are behaviorally relevant in most forms of learning. In fact, experiments that simultaneously manipulate the interstimulus and intertrial intervals demonstrate that no critical interval exists. Behavior can persist for weeks or months, whereas LTP decays to baseline over the course of days (Power, Thompson, Moyer, & Disterhoft, 1997). Learned behavior is rapidly reacquired after extinction (Bouton, 2004), whereas no such facilitation is observed for LTP (de Jonge & Racine, 1985). Most relevantly for our focus, it would be especially challenging to try to implement the ingredients described in this article using purely Hebbian mechanisms.

Claims of biological plausibility or implausibility usually rest on rather stylized assumptions about the brain that are wrong in many of their details. Moreover, these claims usually pertain to the cellular and synaptic level, with few connections made to systems-level neuroscience and subcortical brain organization (Edelman, 2015). Understanding which details matter and which do not requires a computational theory (Marr, 1982). Moreover, in the absence of strong constraints from neuroscience, we can turn the biological argument around: Perhaps a hypothetical biological mechanism should be viewed with skepticism if it is cognitively implausible. In the long run, we are optimistic that neuroscience will eventually place more constraints on theories of intelligence. For now, we believe cognitive plausibility offers a surer foundation.
3. Language is essential for human intelligence. Why is it not more prominent here?

We have said little in this article about people's ability to communicate and think in natural language, a distinctively human cognitive capacity where machine capabilities lag strikingly. Certainly one could argue that language should be included on any short list of key ingredients in human intelligence: for instance, Mikolov et al. (2016) featured language prominently in their recent paper sketching challenge problems and a road map for AI. Moreover, while natural language processing is an active area of research in deep learning (e.g., Bahdanau, Cho, & Bengio, 2015; Mikolov, Sutskever, & Chen, 2013; K. Xu et al., 2015), it is widely recognized that neural networks are far from implementing human language abilities. The question is, how do we develop machines with a richer capacity for language?

We ourselves believe that understanding language and its role in intelligence goes hand-in-hand with understanding the building blocks discussed in this article. It is also true that language builds on the core abilities for intuitive physics, intuitive psychology, and rapid learning with compositional, causal models that we do focus on. These capacities are in place before children master language, and they provide the building blocks for linguistic meaning and language acquisition (Carey, 2009; Jackendoff
, 2003; Kemp, 2007; Oâ Donnell, 2015; Pinker, 2007; F. Xu & Tenenbaum, 2007). We hope that by better understanding these earlier ingredients and how to implement and integrate them computationally, we will be better positioned to understand linguistic meaning and acquisition in computational terms, and to explore other ingredients that make human language possible. What else might we need to add to these core ingredients to get language? Many researchers have speculated about key features of human cognition that gives rise to language and other uniquely
human modes of thought: Is it recursion, or some new kind of recursive structure-building ability (Berwick & Chomsky, 2016; Hauser, Chomsky, & Fitch, 2002)? Is it the ability to reuse symbols by name (Deacon, 1998)? Is it the ability to understand others intentionally and build shared intentionality (Bloom, 2000; Frank, Goodman, & Tenenbaum, 2009; Tomasello, 2010)? Is it some new version of these things, or is it just more of the aspects of these capacities that are already present in infants? These are important questions for future work with the potential to expand the list of key ingredients; we did not intend our list to be complete.

Finally, we should keep in mind all the ways that acquiring language extends and enriches the ingredients of cognition we focus on in this article. The intuitive physics and psychology of infants is likely limited to reasoning about objects and agents in their immediate spatial and temporal vicinity, and to their simplest properties and states. But with language, older children become able to reason about a much wider range of physical and psychological situations (Carey, 2009). Language also facilitates more powerful learning-to-learn and compositionality (Mikolov et al., 2016), allowing people to learn more quickly and flexibly by representing new concepts and thoughts in relation to existing concepts (Lupyan & Bergen, 2016; Lupyan & Clark, 2015). Ultimately, the full project of building machines that learn and think like humans must have language at its core.
# 6 Looking forward In the last few decades, AI and machine learning have made remarkable progress: Computer programs beat chess masters; AI systems beat Jeopardy champions; apps recognize photos of your friends; machines rival humans on large-scale object recognition; smart phones recognize (and, to a limited extent, understand) speech. The coming years promise still more exciting AI applications, in areas as varied as self-driving cars, medicine, genetics, drug design and robotics. As a ï¬ eld, AI should be proud of these accomplishments, which have helped move research from academic journals into systems that improve our daily lives. We should also be mindful of what AI has achieved and what it has not. While the pace of progress has been impressive, natural intelligence is still by far the best example of intelligence. Machine performance may rival or exceed human performance on particular tasks, and algorithms may take inspiration from neuroscience or aspects of psychology, but it does not follow that the algorithm learns or thinks like a person. This is a higher bar worth reaching for, potentially leading to more powerful algorithms while also helping unlock the mysteries of the human mind. When comparing people and the current best algorithms in AI and machine learning, people learn from less data and generalize in richer and more ï¬
exible ways. Even for relatively simple concepts such as handwritten characters, people need to see just one or a few examples of a new concept before being able to recognize new examples, generate new examples, and generate new concepts based on related ones (Figure 1A). So far, these abilities elude even the best deep neural networks for character recognition (Ciresan et al., 2012), which are trained on many examples of each concept and do not ï¬ exibly generalize to new tasks. We suggest that the comparative power and ï¬ exibility of peopleâ s inferences come from the causal and compositional nature of their representations. We believe that deep learning and other learning paradigms can move closer to human-like learning
and thought if they incorporate psychological ingredients including those outlined in this paper. Before closing, we discuss some recent trends that we see as some of the most promising developments in deep learning – trends we hope will continue and lead to more important advances.

# 6.1 Promising directions in deep learning

There has been recent interest in integrating psychological ingredients with deep neural networks, especially selective attention (Bahdanau et al., 2015; V. Mnih, Heess, Graves, & Kavukcuoglu, 2014; K. Xu et al., 2015), augmented working memory (Graves et al., 2014, 2016; Grefenstette et al., 2015; Sukhbaatar et al., 2015; Weston et al., 2015), and experience replay (McClelland, McNaughton, & O'Reilly, 1995; V. Mnih et al., 2015). These ingredients are lower-level than the key cognitive ingredients discussed in this paper, yet they suggest a promising trend of using insights from cognitive psychology to improve deep learning, one that may be even furthered by incorporating higher-level cognitive ingredients.

Paralleling the human perceptual apparatus, selective attention forces deep learning models to process raw perceptual data as a series of high-resolution "foveal glimpses" rather than all at once. Somewhat surprisingly, the incorporation of attention has led to substantial performance gains in a variety of domains, including in machine translation (Bahdanau et al., 2015), object recognition (V. Mnih et al., 2014), and image caption generation (K. Xu et al., 2015). Attention may help these models in several ways. It helps to coordinate complex (often sequential) outputs by attending to only specific aspects of the input, allowing the model to focus on smaller sub-tasks rather than solving an entire problem in one shot. For instance, during caption generation, the attentional window has been shown to track the objects as they are mentioned in the caption, where the network may focus on a boy and then a Frisbee when producing a caption like,
"A boy throws a Frisbee" (K. Xu et al., 2015). Attention also allows larger models to be trained without requiring every model parameter to affect every output or action. In generative neural network models, attention has been used to concentrate on generating particular regions of the image rather than the whole image at once (Gregor et al., 2015). This could be a stepping stone towards building more causal generative models in neural networks, such as a neural version of the Bayesian Program Learning model that could be applied to tackling the Characters Challenge (Section 3.1).
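A minimal sketch of the soft attention mechanism at work in these models, under our own toy assumptions (it is not the Xu et al. implementation): feature vectors for image regions are scored against the decoder's current state, the scores are normalized with a softmax, and the resulting weighted average becomes the context for predicting the next word.

```python
# Soft attention sketch: score regions, softmax the scores, take a weighted average.
import numpy as np

rng = np.random.default_rng(2)
n_regions, feat_dim, state_dim = 6, 8, 8

regions = rng.standard_normal((n_regions, feat_dim))    # one feature vector per region
decoder_state = rng.standard_normal(state_dim)          # state while emitting a word
W_score = rng.standard_normal((feat_dim, state_dim))    # learned jointly in a real model

scores = regions @ W_score @ decoder_state              # relevance of each region
weights = np.exp(scores - scores.max())
weights /= weights.sum()                                 # softmax attention weights

context = weights @ regions                              # focused summary of the image
print("attention weights:", np.round(weights, 3))
print("context vector shape:", context.shape)
```

Because the weighted average is differentiable, the scoring parameters can be trained end-to-end with the rest of the captioning network.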
Researchers are also developing neural networks with "working memories" that augment the shorter-term memory provided by unit activation and the longer-term memory provided by the connection weights (Graves et al., 2014, 2016; Grefenstette et al., 2015; Reed & de Freitas, 2016; Sukhbaatar et al., 2015; Weston et al., 2015). These developments are also part of a broader trend towards "differentiable programming," the incorporation of classic data structures such as random access memory, stacks, and queues into gradient-based learning systems (Dalrymple, 2016). For example, the Neural Turing Machine (NTM; Graves et al., 2014) and its successor the Differentiable Neural Computer (DNC; Graves et al., 2016) are neural networks augmented with a random access external memory with read and write operations that maintains end-to-end differentiability. The NTM has been trained to perform sequence-to-sequence prediction tasks such as sequence copying and sorting, and the DNC has been applied to solving block puzzles and finding paths between nodes in a graph (after memorizing the graph). Additionally, Neural Programmer-Interpreters learn to represent and execute algorithms such as addition and sorting from fewer examples by observing
input-output pairs (like the NTM and DNC) as well as execution traces (Reed & de Freitas, 2016). Each model seems to learn genuine programs from examples, albeit in a representation more like assembly language than a high-level programming language. While this new generation of neural networks has yet to tackle the types of challenge problems introduced in this paper, differentiable programming suggests the intriguing possibility of combining the best of program induction and deep learning.

The types of structured representations and model-building ingredients discussed in this paper – objects, forces, agents, causality, and compositionality – help to explain important facets of human learning and thinking, yet they also bring challenges for performing efficient inference (Section 4.3.1). Deep learning systems have not yet shown they can work with these representations, but they have demonstrated the surprising effectiveness of gradient descent in large models with high-dimensional parameter spaces. A synthesis of these approaches, able to perform efficient inference over programs that richly model the causal structure an infant sees in the world, would be a major step forward for building human-like AI.

Another example of combining pattern recognition and model-based search comes from recent AI research into the game Go.
Go is considerably more diï¬ cult for AI than chess, and it was only recently that a computer program â AlphaGo â ï¬ rst beat a world-class player (Chouard, 2016) by using a combination of deep convolutional neural networks (convnets) and Monte Carlo Tree search (Silver et al., 2016). Each of these components has made gains against artiï¬ cial and real Go players (Gelly & Silver, 2008, 2011; Silver et al., 2016; Tian & Zhu, 2016), and the notion of combining pattern recognition and model-based search goes back decades in Go and other games. Showing that these approaches can be integrated to beat a human Go champion is an important AI accomplishment (see Figure 7). Just as important, however, are the new questions and directions it opens up for the long-term project of building genuinely human-like AI. One worthy goal would be to build an AI system that beats a world-class player with the amount and kind of training human champions receive â rather than overpowering them with Google-scale computational resources. AlphaGo is initially trained on 28.4 million positions and moves from 160,000 unique games played by human experts; it then improves through reinforcement learning, playing 30 million more games against itself. Between the publication of Silver et al. (2016) and before facing world champion Lee Sedol, AlphaGo was iteratively retrained several times in this way; the basic system always learned from 30 million games, but it played against successively stronger versions of itself, eï¬ ectively learning from 100 million or more games altogether (Silver, 2016). In contrast, Lee has probably played around 50,000 games in his entire life. Looking at numbers like these, it is impressive that Lee can even compete with AlphaGo at all. What would it take to build a professional-level Go AI that learns from only 50,000 games? Perhaps a system that combines the advances of AlphaGo with some of the complementary ingredients for intelligence we argue for here would be a route to that end. AI could also gain much by trying to match the learning speed and ï¬ exibility of normal human Go players.
People take a long time to master the game of Go, but as with the Frostbite and Characters challenges (Sections 3.1 and 3.2), humans can learn the basics of the game quickly through a combination of explicit instruction, watching others, and experience. Playing just a few games teaches a human enough to beat someone who has just learned the rules but never played before. Could AlphaGo model these earliest stages of real human learning curves? Human Go players can also adapt what they have learned to innumerable game variants.
The Wikipedia page 40 (a) Conv layer Conv layers x 10 Conv layer k parallel softmax Current board 25 feature planes 92 channels 384 channels k maps P 5 x5 kernel 3 x 3 kernel 3 x 3 kernel a | Ae Ly ' â Our next move (next-1) a »~â S +": > 47> mus LY t â Opponent move (next-2) SF ., Ao Gossssseaasaas Our counter move (next-3) (b) â =p Tree policy 22/40 exe Default policy 2/10 Synced Panna DCNN server ~~~ Figure 7: An AI system for playing Go combining a deep convolutional network (convnet) and model-based search through Monte-Carlo Tree Search (MCTS). (A) The convnet on its own can be used to predict the next k moves given the current board. (B) A search tree with the current board state as its root and the current â
"win/total" statistics at each node. A new MCTS rollout selects moves along the tree according to the MCTS policy (red arrows) until it reaches a new leaf (red circle), where the next move is chosen by the convnet. From there, play proceeds until the game's end according to a pre-defined default policy based on the Pachi program (Baudiš & Gailly, 2012), itself based on MCTS. (C) The end-game result of the new leaf is used to update the search tree. Adapted from Tian and Zhu (2016) with permission.
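The sketch below illustrates the selection, rollout, and backup cycle that Figure 7 depicts, using a tiny Nim-like take-away game of our own choosing rather than Go; a real system would replace the uniform treatment of moves with a convnet's move prior and would use a stronger default policy such as Pachi for rollouts.

```python
# Minimal MCTS (UCB1 selection, random rollouts) on a toy take-away game.
import math
import random

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def rollout(stones, player):
    """Default policy: random legal moves; whoever takes the last stone wins."""
    while stones > 0:
        stones -= random.choice(legal_moves(stones))
        player = 1 - player
    return 1 - player                                   # the player who just moved won

def mcts(root_stones, n_simulations=3000):
    wins, visits = {}, {}
    for _ in range(n_simulations):
        path, stones, player = [], root_stones, 0
        while legal_moves(stones):                      # selection down the tree
            moves = legal_moves(stones)
            total = sum(visits.get((stones, m), 0) for m in moves) + 1
            def ucb(m):
                n = visits.get((stones, m), 0)
                if n == 0:
                    return float("inf")                 # expand unseen moves first
                return wins[(stones, m)] / n + 1.4 * math.sqrt(math.log(total) / n)
            move = max(moves, key=ucb)
            path.append((stones, move, player))
            stones -= move
            player = 1 - player
            if visits.get(path[-1][:2], 0) == 0:
                break                                   # reached a new leaf
        winner = rollout(stones, player)                # default policy to the end
        for s, m, p in path:                            # back up the result
            visits[(s, m)] = visits.get((s, m), 0) + 1
            wins[(s, m)] = wins.get((s, m), 0) + (winner == p)
    return max(legal_moves(root_stones), key=lambda m: visits.get((root_stones, m), 0))

print("chosen first move from 10 stones:", mcts(10))
```

Even this stripped-down version shows why the approach scales: the tree statistics concentrate simulations on promising moves, while the rollouts (or, in AlphaGo, a learned value network) estimate how a position is likely to end.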
The Wikipedia page "Go variants" describes versions such as playing on bigger or smaller boards (ranging from 9×9 up to 38×38, rather than the standard 19×19 board), or playing on boards of different shapes and connectivity structures (rectangles, triangles, hexagons, even a map of the English city Milton Keynes). The board can be a torus, a Möbius strip, a cube or a diamond lattice in three dimensions. Holes can be cut in the board, in regular or irregular ways. The rules can be adapted to what is known as First Capture Go (the first player to capture a stone wins), NoGo (the player who avoids capturing any enemy stones for longer wins) or Time Is Money Go (players begin with a fixed amount of time and at the end of the game, the number of seconds remaining on each player's clock is added to their score). Players may receive bonuses for creating certain stone patterns or capturing territory near certain landmarks. There could be four or more players, competing individually or in teams.
In each of these variants, eï¬ ective play needs to change from the basic game, but a skilled player can adapt and does not simply have to relearn the game from scratch. Could AlphaGo? While techniques for handling variable sized inputs in convnets may help for playing on diï¬ erent board sizes (Sermanet et al., 2014), the value functions and policies that AlphaGo learns seem unlikely to generalize as ï¬ exibly and automatically as people do. Many of the variants described above would require signiï¬ cant reprogramming and retraining, directed by the smart humans who programmed AlphaGo, not the system itself. As impressive as AlphaGo is in beating the worldâ s best players at the standard game â and it is extremely impressive â the fact that it cannot even conceive of these variants, let alone adapt to them autonomously, is a sign that it does not understand the game as humans do. Human players can understand these variants and adapt to them because they explicitly represent Go as a game, with a goal to beat an adversary who is playing to achieve the same goal they are, governed by rules about how stones can be placed on a board and how board positions are scored. Humans represent their strategies as a response to these constraints, such that if the game changes, they can begin to adjust their strategies accordingly. In sum, Go presents compelling challenges for AI beyond matching world-class human performance, in trying to match human levels of understanding and generalization, based on the same kinds and amounts of data, explicit instructions, and opportunities for social learning aï¬
orded to people. In learning to play Go as quickly and as ï¬ exibly as they do, people are drawing on most of the cognitive ingredients this paper has laid out. They are learning-to-learn with compositional knowledge. They are using their core intuitive psychology, and aspects of their intuitive physics (spatial and object representations). And like AlphaGo, they are also integrating model-free pattern recognition with model-based search. We believe that Go AI systems could be built to do all of these things, potentially capturing better how humans learn and understand the game. We believe it would be richly rewarding for AI and cognitive science to pursue this challenge together, and that such systems could be a compelling testbed for the principles this paper argues for â as well as building on all of the progress to date that AlphaGo represents. # 6.2 Future applications to practical AI problems In this paper, we suggested some ingredients for building computational models with more human- like learning and thought. These principles were explained in the context of the Characters and Frostbite Challenges, with special emphasis on reducing the amount of training data required and facilitating transfer to novel yet related tasks. We also see ways these ingredients can spur progress on core AI problems with practical applications.
Here we oï¬ er some speculative thoughts on these 42 applications. 1. Scene understanding. Deep learning is moving beyond object recognition and towards scene understanding, as evidenced by a ï¬ urry of recent work focused on generating natural language captions for images (Karpathy & Fei-Fei, 2015; Vinyals et al., 2014; K. Xu et al., 2015). Yet current algorithms are still better at recognizing objects than understanding scenes, often getting the key objects right but their causal relationships wrong (Figure 6). We see com- positionality, causality, intuitive physics and intuitive psychology as playing an increasingly important role in reaching true scene understanding. For example, picture a cluttered garage workshop with screw drivers and hammers hanging from the wall, wood pieces and tools stacked precariously on a work desk, and shelving and boxes framing the scene.
In order for an autonomous agent to eï¬ ectively navigate and perform tasks in this environment, the agent would need intuitive physics to properly reason about stability and support. A holistic model of the scene would require the composition of individual object models, glued together by relations. Finally, causality helps infuse the recognition of existing tools (or the learning of new ones) with an understanding of their use, helping to connect diï¬ erent object models in the proper way (e.g., hammering a nail into a wall, or using a saw horse to support a beam being cut by a saw). If the scene includes people acting or interacting, it will be nearly impossible to understand their actions without thinking about their thoughts, and especially their goals and intentions towards the other objects and agents they believe are present. 2. Autonomous agents and intelligent devices. Robots and personal assistants (such as cell- phones) cannot be pre-trained on all possible concepts they may encounter. Like a child learning the meaning of new words, an intelligent and adaptive system should be able to learn new concepts from a small number of examples, as they are encountered naturally in the environment. Common concept types include new spoken words (names like â
Ban Ki-Moonâ or â Koï¬ Annanâ ), new gestures (a secret handshake or a â ï¬ st bumpâ ), and new activities, and a human-like system would be able to learn to both recognize and produce new instances from a small number of examples. Like with handwritten characters, a system may be able to quickly learn new concepts by constructing them from pre-existing primitive actions, informed by knowledge of the underlying causal process and learning-to-learn. 3. Autonomous driving. Perfect autonomous driving requires intuitive psychology. Beyond de- tecting and avoiding pedestrians, autonomous cars could more accurately predict pedestrian behavior by inferring mental states, including their beliefs (e.g., Do they think it is safe to cross the street? Are they paying attention?) and desires (e.g., Where do they want to go? Do they want to cross? Are they retrieving a ball lost in the street?). Similarly, other drivers on the road have similarly complex mental states underlying their behavior (e.g., Do they want to change lanes? Pass another car? Are they swerving to avoid a hidden hazard? Are they distracted?). This type of psychological reasoning, along with other types of model-based causal and physical reasoning, are likely to be especially valuable in challenging and novel driving circumstances for which there is little relevant training data (e.g. navigating unusual construction zones, natural disasters, etc.) 4. Creative design. Creativity is often thought to be a pinnacle of human intelligence: chefs de- sign new dishes, musicians write new songs, architects design new buildings, and entrepreneurs
43 start new businesses. While we are still far from developing AI systems that can tackle these types of tasks, we see compositionality and causality as central to this goal. Many com- monplace acts of creativity are combinatorial, meaning they are unexpected combinations of familiar concepts or ideas (Boden, 1998; Ward, 1994). As illustrated in Figure 1-iv, novel vehicles can be created as a combination of parts from existing vehicles, and similarly novel characters can be constructed from the parts of stylistically similar characters, or familiar characters can be re-conceptualized in novel styles (Rehling, 2001). In each case, the free combination of parts is not enough on its own: While compositionality and learning-to-learn can provide the parts for new ideas, causality provides the glue that gives them coherence and purpose. # 6.3 Towards more human-like learning and thinking machines Since the birth of AI in the 1950s, people have wanted to build machines that learn and think like people. We hope researchers in AI, machine learning, and cognitive science will accept our challenge problems as a testbed for progress. Rather than just building systems that recognize handwritten characters and play Frostbite or Go as the end result of an asymptotic process, we suggest that deep learning and other computational paradigms should aim to tackle these tasks using as little training data as people need, and also to evaluate models on a range of human-like generalizations beyond the one task the model was trained on. We hope that the ingredients outlined in this article will prove useful for working towards this goal: seeing objects and agents rather than features, building causal models and not just recognizing patterns, recombining representations without needing to retrain, and learning-to-learn rather than starting from scratch. # Acknowledgments We are grateful to Peter Battaglia, Matt Botvinick, Y-Lan Boureau, Shimon Edelman, Nando de Freitas, Anatole Gershman, George Kachergis, Leslie Kaelbling, Andrej Karpathy, George Konidaris, Tejas Kulkarni, Tammy Kwan, Michael Littman, Gary Marcus, Kevin Murphy, Steven Pinker, Pat Shafto, David Sontag, Pedro Tsividis, and four anonymous reviewers for helpful com- ments on early versions of this manuscript.
Tom Schaul was very helpful in answering questions regarding the DQN learning curves and Frostbite scoring. This work was supported by the Center for Minds, Brains and Machines (CBMM), under NSF STC award CCF-1231216, and the Moore- Sloan Data Science Environment at NYU. # References Andrychowicz, M., Denil, M., Gomez, S., Hoï¬ man, M. W., Pfau, D., Schaul, T., & de Freitas, N. (2016).
Learning to learn by gradient descent by gradient descent. arXiv preprint. Anselmi, F., Leibo, J. Z., Rosasco, L., Mutch, J., Tacchetti, A., & Poggio, T. (2016). Unsupervised learning of invariant representations. Theoretical Computer Science. 44 Bahdanau, D., Cho, K., & Bengio, Y. (2015). Neural Machine Translation by Jointly Learning to Align and Translate. In International Conference on Learning Representations (ICLR). Retrieved from http://arxiv.org/abs/1409.0473v3 Baillargeon, R. (2004).
Infantsâ physical world. Current Directions in Psychological Science, 13 , 89â 94. doi: 10.1111/j.0963-7214.2004.00281.x Baillargeon, R., Li, J., Ng, W., & Yuan, S. (2009). An account of infants physical reasoning. Learning and the infant mind , 66â 116. Baker, C. L., Saxe, R., & Tenenbaum, J. B. (2009).
Action understanding as inverse planning. Cognition, 113 (3), 329â 349. Barsalou, L. W. (1983). Ad hoc categories. Memory & Cognition, 11 (3), 211â 227. Bastos, A. M., Usrey, W. M., Adams, R. A., Mangun, G. R., Fries, P., & Friston, K. J. (2012). Canonical microcircuits for predictive coding. Neuron, 76 , 695â 711. Bates, C. J., Yildirim, I., Tenenbaum, J. B., & Battaglia, P. W. (2015).
Humans predict liquid dynamics using probabilistic simulation. In Proceedings of the 37th Annual Conference of the Cognitive Science Society. Battaglia, P. W., Hamrick, J. B., & Tenenbaum, J. B. (2013). Simulation as an engine of physical scene understanding. Proceedings of the National Academy of Sciences, 110 (45), 18327â 18332. BaudiË s, P., & Gailly, J.-l. (2012). Pachi:
State of the art open source go program. In Advances in computer games (pp. 24â 38). Springer. Baxter, J. (2000). A model of inductive bias learning. Journal of Artiï¬ cial Intelligence Research, 12 , 149â 198. Bayer, H. M., & Glimcher, P. W. (2005). Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron, 47 , 129â 141. Bellemare, M. G., Naddaf, Y., Veness, J., & Bowling, M. (2013). The arcade learning environment: An evaluation platform for general agents.
Journal of Artiï¬ cial Intelligence Research, 47 , 253â 279. Berlyne, D. E. (1966). Curiosity and exploration. Science, 153 , 25â 33. Berthiaume, V. G., Shultz, T. R., & Onishi, K. H. (2013). A constructivist connectionist model of transitions on false-belief tasks. Cognition, 126 (3), 441â 458. Berwick, R. C., & Chomsky, N. (2016).
Why only us: Language and evolution. Cambridge, MA: MIT Press. Bever, T. G., & Poeppel, D. (2010). Analysis by synthesis: a (re-) emerging program of research for language and vision. Biolinguistics, 4 , 174â 200. Bi, G.-q., & Poo, M.-m. (2001). Synaptic modiï¬ cation by correlated activity: Hebbâ s postulate revisited. Annual Review of Neuroscience, 24 , 139â 166. Biederman, I. (1987). Recognition-by-components: A theory of human image understanding. Psychological Review , 94 (2), 115â 147. Bienenstock, E., Cooper, L. N., & Munro, P. W. (1982). Theory for the development of neuron selectivity: orientation speciï¬ city and binocular interaction in visual cortex. The Journal of Neuroscience, 2 (1), 32â
Bienenstock, E., Geman, S., & Potter, D. (1997). Compositionality, MDL Priors, and Object Recognition. In Advances in Neural Information Processing Systems.
Bloom, P. (2000). How Children Learn the Meanings of Words. Cambridge, MA: MIT Press.
Blundell, C., Uria, B., Pritzel, A., Li, Y., Ruderman, A., Leibo, J. Z., ... Hassabis, D. (2016). Model-Free Episodic Control. arXiv preprint.
Bobrow, D. G., & Winograd, T. (1977). An overview of KRL, a knowledge representation language. Cognitive Science, 1, 3–46.
Boden, M. A. (1998). Creativity and artificial intelligence. Artificial Intelligence, 103, 347–356.
Boden, M. A. (2006). Mind as machine: A history of cognitive science. Oxford University Press.
Bonawitz, E., Denison, S., Griffiths, T. L., & Gopnik, A. (2014). Probabilistic models, learning algorithms, and response variability: sampling in cognitive development. Trends in Cognitive Sciences, 18, 497–500.
Bottou, L. (2014). From machine learning to machine reasoning. Machine Learning, 94(2), 133–149.
Bouton, M. E. (2004). Context and behavioral processes in extinction. Learning & Memory, 11, 485–494.
Buckingham, D., & Shultz, T. R. (2000). The developmental course of distance, time, and velocity concepts: A generative connectionist model. Journal of Cognition and Development, 1(3), 305–345.
Buesing, L., Bill, J., Nessler, B., & Maass, W. (2011). Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons. PLoS Computational Biology, 7, e1002211.
Carey, S. (1978). The Child as Word Learner. In J. Bresnan, G. Miller, & M. Halle (Eds.), Linguistic theory and psychological reality (pp. 264–293).
Carey, S. (2004). Bootstrapping and the origin of concepts. Daedalus, 133(1), 59–68.
Carey, S. (2009). The Origin of Concepts. New York: Oxford University Press.
Carey, S., & Bartlett, E. (1978). Acquiring a single new word. Papers and Reports on Child Language Development, 15, 17–29.
Chouard, T. (2016, March). The Go files: AI computer wraps up 4-1 victory against human champion. ([Online; posted 15-March-2016])
Ciresan, D., Meier, U., & Schmidhuber, J. (2012). Multi-column Deep Neural Networks for Image Classification. In Computer Vision and Pattern Recognition (CVPR) (pp. 3642–3649).
Collins, A. G. E., & Frank, M. J. (2013). Cognitive control over learning: Creating, clustering, and generalizing task-set structure. Psychological Review, 120(1), 190–229.
Cook, C., Goodman, N. D., & Schulz, L. E. (2011). Where science starts: spontaneous experiments in preschoolers' exploratory play. Cognition, 120(3), 341–9.
Crick, F. (1989). The recent excitement about neural networks. Nature, 337, 129–132.
Csibra, G. (2008). Goal attribution to inanimate agents by 6.5-month-old infants. Cognition, 107, 705–717.
Csibra, G., Biro, S., Koos, O., & Gergely, G. (2003). One-year-old infants use teleological representations of actions productively. Cognitive Science, 27, 111–133.
Dalrymple, D. (2016). Differentiable Programming. Retrieved from https://www.edge.org/response-detail/26794
Davis, E., & Marcus, G. (2015). Commonsense Reasoning and Commonsense Knowledge in Artificial Intelligence. Communications of the ACM, 58(9), 92–103.
Daw, N. D., Niv, Y., & Dayan, P. (2005). Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nature Neuroscience, 8, 1704–1711.
Dayan, P., Hinton, G. E., Neal, R. M., & Zemel, R. S. (1995). The Helmholtz machine. Neural Computation, 7(5), 889–904.
Deacon, T. W. (1998). The symbolic species: The co-evolution of language and the brain. WW Norton & Company.
Deci, E. L., & Ryan, R. M. (1975). Intrinsic motivation. Wiley Online Library.
de Jonge, M., & Racine, R. J. (1985). The effects of repeated induction of long-term potentiation in the dentate gyrus. Brain Research, 328, 181–185.
Denton, E., Chintala, S., Szlam, A., & Fergus, R. (2015). Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks. In Advances in Neural Information Processing Systems 29. Retrieved from http://arxiv.org/abs/1506.05751
Diuk, C., Cohen, A., & Littman, M. L. (2008). An object-oriented representation for efficient reinforcement learning. In Proceedings of the 25th International Conference on Machine Learning (ICML) (pp. 240–247).
Dolan, R. J., & Dayan, P. (2013). Goals and habits in the brain. Neuron, 80, 312–325.
Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., Tzeng, E., & Darrell, T. (2013). DeCAF: A deep convolutional activation feature for generic visual recognition. arXiv preprint arXiv:1310.1531.
Economides, M., Kurth-Nelson, Z., Lübbert, A., Guitart-Masip, M., & Dolan, R. J. (2015). Model-based reasoning in humans becomes automatic with training. PLoS Computational Biology, 11, e1004463.
Edelman, S. (2015). The minority report: some common assumptions to reconsider in the modelling of the brain and behaviour. Journal of Experimental & Theoretical Artificial Intelligence, 28(4), 751–776.
Eden, M. (1962). Handwriting and Pattern Recognition. IRE Transactions on Information Theory, 160–166.
Eliasmith, C., Stewart, T. C., Choo, X., Bekolay, T., DeWolf, T., Tang, Y., & Rasmussen, D. (2012). A large-scale model of the functioning brain. Science, 338(6111), 1202–1205.
Elman, J. L. (2005). Connectionist models of cognitive development: