# 2 Related Work

Several lines of research have recently focused on learning narrative/event representations. Chambers and Jurafsky first proposed narrative chains (Chambers and Jurafsky, 2008) as a partially ordered set of narrative events that share a common actor called the ‘protagonist’. A narrative event is a tuple of an event (a verb) and its participants represented as typed dependencies. Several expansions have since been proposed, including narrative schemas (Chambers and Jurafsky, 2009), script sequences (Regneri et al., 2010), and relgrams (Balasubramanian et al., 2013). Formal probabilistic models have also been proposed to learn event schemas and frames (Cheung et al., 2013; Bamman et al., 2013; Chambers, 2013; Nguyen et al., 2015). These are trained on smaller corpora and focus less on large-scale learning. A major shortcoming so far is that these models are mainly trained on news articles; little knowledge about everyday life events is learned.
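The narrative-event and chain representations described above can be sketched as simple data structures. This is a hedged illustration only: the field names, the `narrative_chain` helper, and the toy tuples are our own, not the paper's.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NarrativeEvent:
    """An event (verb) plus the typed dependency linking the protagonist to it."""
    verb: str
    dep: str  # e.g. "nsubj" or "dobj": the protagonist's grammatical role

def narrative_chain(protagonist, tuples):
    """Collect the (verb, dep) events sharing the given protagonist, in text
    order, mirroring the narrative-chain idea of Chambers and Jurafsky."""
    return [NarrativeEvent(v, d) for actor, v, d in tuples if actor == protagonist]

# Toy (actor, verb, dependency) tuples as might be extracted from a short text.
tuples = [
    ("sam", "challenge", "nsubj"),
    ("bill", "agree", "nsubj"),
    ("sam", "practice", "nsubj"),
    ("sam", "beat", "nsubj"),
]
print([e.verb for e in narrative_chain("sam", tuples)])  # -> ['challenge', 'practice', 'beat']
```

A real system would obtain the tuples from a dependency parser and coreference resolver rather than by hand.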
1604.01696#7
A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories
Representation and learning of commonsense knowledge is one of the foundational problems in the quest to enable deep language understanding. This issue is particularly challenging for understanding casual and correlational relationships between events. While this topic has received a lot of interest in the NLP community, research has been hindered by the lack of a proper evaluation framework. This paper attempts to address this problem with a new framework for evaluating story understanding and script learning: the 'Story Cloze Test'. This test requires a system to choose the correct ending to a four-sentence story. We created a new corpus of ~50k five-sentence commonsense stories, ROCStories, to enable this evaluation. This corpus is unique in two ways: (1) it captures a rich set of causal and temporal commonsense relations between daily events, and (2) it is a high quality collection of everyday life stories that can also be used for story generation. Experimental evaluation shows that a host of baselines and state-of-the-art models based on shallow language understanding struggle to achieve a high score on the Story Cloze Test. We discuss these implications for script and story learning, and offer suggestions for deeper language understanding.
http://arxiv.org/pdf/1604.01696
Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, James Allen
cs.CL, cs.AI
In Proceedings of the 2016 North American Chapter of the ACL (NAACL HLT), 2016
cs.CL
20160406
20160406
Several groups have directly addressed script learning by focusing exclusively on the narrative cloze test. Jans et al. (2012) redefined the test to be a text-ordered sequence of events, whereas the original did not rely on text order (Chambers and Jurafsky, 2008). Since then, others have shown that language-modeling techniques perform well (Pichotta and Mooney, 2014a; Rudinger et al., 2015). This paper shows that these approaches struggle on the richer Story Cloze evaluation.
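The Story Cloze evaluation is a two-alternative forced choice: pick the correct fifth sentence given a four-sentence context. A minimal scoring harness might look like the sketch below; the word-overlap scorer is a stand-in shallow baseline of our own, not a model from the paper.

```python
def story_cloze_accuracy(examples, score):
    """Fraction of examples where the scorer prefers the correct ending.
    Each example: (context_sentences, ending_a, ending_b, correct),
    with correct in {0, 1}."""
    hits = 0
    for context, a, b, correct in examples:
        scores = [score(context, a), score(context, b)]
        if scores.index(max(scores)) == correct:
            hits += 1
    return hits / len(examples)

def overlap_score(context, ending):
    """Shallow baseline: count tokens the ending shares with the context."""
    ctx = set(" ".join(context).lower().split())
    return len(ctx & set(ending.lower().split()))

examples = [
    (["Sam practiced basketball every day.",
      "He challenged Bill to a game.",
      "Bill accepted.",
      "They played for an hour."],
     "Sam won the basketball game.",
     "Sam bought a new car.",
     0),
]
print(story_cloze_accuracy(examples, overlap_score))  # -> 1.0
```

As the paper reports, such shallow scorers do not get far beyond chance on the real test set; the toy example above merely shows the evaluation interface.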
There has also been renewed attention toward natural language comprehension and commonsense reasoning (Levesque, 2011; Roemmele et al., 2011; Bowman et al., 2015). There are a few recent frameworks for evaluating language comprehension (Hermann et al., 2015; Weston et al., 2015), with MCTest (Richardson et al., 2013) as a notable one. That framework also involves story comprehension; however, its stories are mostly fictional, are on average 212 words long, and are geared toward children in grades 1-4. Some progress has been made in story understanding by limiting the task to specific domains and question types. This includes research on understanding newswire involving terrorism scripts (Mueller, 2002), stories about people in a restaurant where a reasonable number of questions about time and space can be answered (Mueller, 2007), and generating stories from fairy tales (McIntyre and Lapata, 2009). Finally, there is a rich body of work on story plot generation and creative or artistic storytelling (Méndez et al., 2014; Riedl and León, 2008). This paper differs from these in its corpus of short, simple stories covering a wide variety of commonsense events. We show these stories to be useful for learning, and also for enabling a rich evaluation framework for narrative understanding.

# 3 A Corpus of Short Commonsense Stories
We aimed to build a corpus with two goals in mind:

1. The corpus contains a variety of commonsense causal and temporal relations between everyday events. This enables learning narrative structure across a range of events, as opposed to a single domain or genre.

2. The corpus is a high-quality collection of non-fictional short daily-life stories, which can be used for training rich, coherent story-telling models.

In order to narrow down our focus, we carefully define a narrative or story as follows: ‘A narrative or story is anything which is told in the form of a causally (logically) linked set of events involving some shared characters’. The classic definition of a story requires having a plot (e.g., a character following a goal and facing obstacles); however, here we are not concerned with how entertaining or dramatic the stories are. Instead, we are concerned with the essence of actually being a logically meaningful story. We follow the notion of ‘storiness’ (Forster, 1927; Bailey, 1999), described as ‘the expectations and questions that a reader may have as the story develops’, where expectations are ‘common-sense logical inferences’ made by the imagined reader of the story.
We propose to satisfy our two goals by asking hundreds of workers on Amazon Mechanical Turk (AMT) to write novel five-sentence stories. The five-sentence length gives enough context to the story without allowing room for sidetracks about less important or irrelevant information. In this section we describe how we collected this corpus and provide statistical analysis.

# 3.1. Data Collection Methodology

Crowdsourcing this corpus makes the data collection scalable and adds to the diversity of stories. We tested numerous pilots with varying prompts and instructions. We manually checked the submitted stories in each pilot and counted the number of submissions which did not have our desired level of coherency or were specifically fictional or offensive. Three people participated in this task, iterating over the ratings until everyone agreed on the next pilot’s prompt design. We achieved the best results when we let the workers write about anything they had in mind, as opposed to mandating a pre-specified topic. The final crowdsourcing prompt can be found in the supplementary material.
The key property that we enforced in our final prompt was the following: the story should read like a coherent story, with a specific beginning and ending, where something happens in between. This constraint resulted in many causal and temporal links between events. Table 1 shows the examples we provided to the workers to instruct them about the constraints. We set a limit of 70 characters on the length of each sentence, which prevented multi-part sentences that include unnecessary details. The workers were also asked to provide a title that best describes their story. Last but not least, we instructed the workers not to use quotations in their sentences and to avoid slang or informal language.
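These prompt constraints can be expressed as an automatic check. The sketch below is our own illustration (the paper's actual screening was manual); the function name and the exact rule set are assumptions.

```python
def check_story(title, sentences):
    """Return a list of prompt-constraint violations for a submission:
    exactly five sentences, each at most 70 characters, no quotation marks."""
    problems = []
    if not title.strip():
        problems.append("missing title")
    if len(sentences) != 5:
        problems.append(f"expected 5 sentences, got {len(sentences)}")
    for i, s in enumerate(sentences, 1):
        if len(s) > 70:
            problems.append(f"sentence {i} is over 70 characters")
        if '"' in s:  # the no-quotations rule; apostrophes are left alone
            problems.append(f"sentence {i} contains a quotation")
    return problems

# The good story from Table 1 passes all checks.
story = [
    "Bill thought he was a great basketball player.",
    "He challenged Sam to a friendly game.",
    "Sam agreed.",
    "Sam started to practice really hard.",
    "Eventually Sam beat Bill by 40 points.",
]
print(check_story("A Friendly Game", story))  # -> []
```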
Collecting high quality stories with these constraints gives us a rich collection of commonsense stories which are full of stereotypical inter-event relations. The examples from Table 1 (✗ = bad, ✓ = good) begin as follows:

✗ The little puppy thought he was a great basketball player. He challenged the kitten to a friendly game. The kitten agreed. Kitten started to practice really hard. Eventually the kitten beat the puppy by 40 points.

✓ Bill thought he was a great basketball player. He challenged Sam to a friendly game. Sam agreed. Sam started to practice really hard. Eventually Sam beat Bill by 40 points.

✗ I am happy with my life. I have been kind. I have been successful. I work out. Why not be happy when you can?

✗ The city is full of people and offers a lot of things to do. One of my favorite things is going to the outdoor concerts. I also like visiting the different restaurants and museums. There is always something exciting to do in the city.

✓ The Smith family went to the family beach house every summer. They loved the beach house a lot. Unfortunately there was a bad hurricane once. Their beach house was washed away. Now they lament the loss of their beach house every summer.
✗ Miley was in middle school. [She lived in an apartment. — struck out in the original as irrelevant] Once Miley made a mistake and cheated in one of her exams. She tried to hide the truth from her parents. After her parents found out, they grounded her for a month.

✓ Miley was in middle school. She usually got good grades in school. Once Miley made a mistake and cheated in one of her exams. She tried to hide the truth from her parents. After her parents found out, they grounded her for a month.

Table 1: Examples of good and bad stories provided to the crowd-sourced workers. Each row emphasizes one of the three properties that each story should satisfy: (1) being realistic, (2) having a clear beginning and ending, and (3) not stating anything irrelevant to the story.

X challenge Y → Y agree → play → Y practice → Y beat X

Figure 1: An example narrative chain with characters X and Y.

For example, from the good story in the first row of Table 1, one can extract the narrative chain represented in Figure 1. Developing a better semantic representation for narrative chains which can capture rich inter-event relations in these stories is a topic of future work.
Quality Control: One issue with crowdsourcing is how to instruct non-expert workers. This task is a type of creative writing, and is trickier than classification and tagging tasks. In order to ensure we get qualified workers, we designed a qualification test on AMT in which the workers had to judge whether or not a given story is an acceptable one. We used five carefully selected stories as part of the qualification test. This not only eliminates any potential spammers on AMT, but also provides us with a pool of creative story writers. Furthermore, we qualitatively browsed through the submissions and gave the workers detailed feedback before approving their submissions. We often bonused our top workers, encouraging them to write new stories on a daily basis.

Statistics: Figure 2 shows the distribution of the number of tokens at each sentence position. The first sentence tends to be shorter, as it usually introduces characters or sets the scene, and the fifth sentence is longer, providing more detailed conclusions to the story.
Table 2 summarizes the statistics of our crowdsourcing effort. Figure 3 shows the distribution of the most frequent 50 events in the corpus. Here we count as an event any hyponym of ‘event’ or ‘process’ in WordNet (Miller, 1995). The top two events, ‘go’ and ‘get’, each comprise less than 2% of all the events, which illustrates the rich diversity of the corpus.

Figure 2: Number of tokens in each sentence position.

# submitted stories: 49,895
# approved stories: 49,255
# workers participated: 932
Average # stories by one worker: 52.84
Max # stories written by one worker: 3,057
Average work time among workers (minutes): 4.80
Median work time among workers (minutes): 2.16
Average payment per story (cents): 26

Table 2: Crowdsourcing worker statistics.
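The event-counting criterion (any hyponym of ‘event’ or ‘process’) amounts to a hypernym-closure check. Below is a hedged sketch over a tiny hand-coded taxonomy fragment of our own invention; the paper's actual analysis would walk real WordNet hypernym edges instead.

```python
from collections import Counter

# Tiny hand-coded hypernym fragment (child -> parent). WordNet itself would
# supply these edges in the real analysis; this mapping is illustrative only.
HYPERNYM = {
    "go": "travel", "travel": "event",
    "get": "acquire", "acquire": "event",
    "practice": "process",
    "house": "building", "building": "artifact",
}

def is_event(word):
    """True if 'event' or 'process' appears in the word's hypernym closure."""
    while word in HYPERNYM:
        word = HYPERNYM[word]
        if word in {"event", "process"}:
            return True
    return False

# Count only event tokens, as done for Figure 3.
tokens = ["go", "house", "get", "go", "practice"]
counts = Counter(t for t in tokens if is_event(t))
print(counts.most_common(1))  # -> [('go', 2)]
```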
Figure 4 visualizes the n-gram distribution of our story titles, where each radial path indicates an n-gram sequence.
Figure 3: Distribution of top 50 events in our corpus.

For this analysis we set n=5, where the mean number of tokens in titles is 9.8 and the median is 10. The ‘end’ token distinguishes the actual ending of a title from a five-gram cut-off. This figure demonstrates the range of topics our workers have written about. The full circle reflects 100% of the title n-grams, and the n-gram paths in the faded 3/4 of the circle each comprise less than 0.1% of the n-grams. This further demonstrates that the range of topics covered by our corpus is quite diverse. A full dynamic visualization of these n-grams can be found here: http://goo.gl/Qhg60B.

Figure 4: N-gram distribution of story titles.

# 3.2. Corpus Release
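The title n-gram paths behind Figure 4 can be approximated with a short script. This is a sketch under assumptions: the toy titles are ours, and the ‘end’ padding token follows the description above.

```python
from collections import Counter

def title_ngram_paths(titles, n=5):
    """Count n-token prefix paths of titles, appending an explicit 'end'
    token to short titles so true endings differ from n-gram cut-offs."""
    paths = Counter()
    for title in titles:
        tokens = title.lower().split()[:n]
        if len(tokens) < n:
            tokens.append("end")
        paths[tuple(tokens)] += 1
    return paths

titles = ["The Beach House", "The Big Game", "The Beach House"]
paths = title_ngram_paths(titles)
print(paths[("the", "beach", "house", "end")])  # -> 2
```

A radial visualization like Figure 4 would then draw each counted path outward from the circle's center, with arc width proportional to its count.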
Figure 4: N-gram distribution of story titles.

# 3.2 Corpus Release

The corpus is publicly available to the community and can be accessed through http://cs.rochester.edu/nlp/rocstories, which will be grown even further over the coming years. Given the quality control pipeline and the creativity required from workers, data collection goes slowly. We are also making available semantic parses of these stories. Since these stories are not newswire, off-the-shelf syntactic and shallow semantic parsers for event extraction often fail on the language. To address this issue, we customized search parameters and added a few lexical entries to the TRIPS broad-coverage semantic parser, optimizing its performance on our corpus. The TRIPS parser (Allen et al., 2008) produces state-of-the-art logical forms for input stories, providing sense-disambiguated and ontology-typed rich deep structures, which enables event extraction together with semantic roles and coreference chains throughout the five sentences.

# 3.3 Temporal Analysis
Being able to temporally order events in the stories is a prerequisite for complete narrative understanding. Temporal analysis of the events in our short commonsensical stories is an important topic of further research on its own. In this section, we summarize two of our analyses regarding the nature of temporal ordering of events in our corpus.

Shuffling Experiment: An open question in any text genre is how text order is related to temporal order. Do the sentences follow the real-world temporal order of events? This experiment shuffles the stories and asks AMT workers to arrange them back into a coherent story. This can shed light on the correlation between the original position of the sentences and their position when another human rearranges them in a commonsensically meaningful way. We set up this experiment as follows: we sampled two sets of 50 stories from our corpus: Good-Stories50 and Random-Stories50. Good-Stories50 is sampled from a set of stories written by top workers

[Footnote: For example, new informal verbs such as 'vape' or 'vlog' have been added to the lexicon of this semantic parser.]
[Footnote: TRIPS parser: http://trips.ihmc.us/parser/cgi/step]
who have shown consistent quality throughout their submissions. Random-Stories50 is a random sampling from all the stories in the corpus. Then we randomly shuffled the sentences in each story and asked five crowd workers on AMT to rearrange the sentences.

| | Good-Stories50 | Random-Stories50 |
|---|---|---|
| % perfectly ordered, taking majority ordering for each of the 50 stories | 100 | 86 |
| % all sentences perfectly ordered, out of 250 orderings | 95.2 | 82.4 |
| % ≤ 1 sentence misplaced, rest flow correctly, out of 250 orderings | 98.0 | 96.0 |
| % correct placements of each position, 1 to 5 | 98.8, 97.6, 96, 96, 98.8 | 95.6, 86, 86.8, 91.2, 96.8 |

Table 3: Results from the human temporal shuffling experiment.
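The shuffling setup can be sketched in a few lines. This is an illustration only, not the authors' pipeline; `make_shuffle_task` and the gold-key bookkeeping are hypothetical names:

```python
import random

def make_shuffle_task(sentences, seed=0):
    """Turn a story into a reordering task: return the shuffled sentences
    plus, for each original position, where that sentence landed in the
    shuffle (the gold key against which worker orderings are scored)."""
    rng = random.Random(seed)
    order = list(range(len(sentences)))
    rng.shuffle(order)
    shuffled = [sentences[i] for i in order]
    # gold[k] = slot in `shuffled` holding the k-th original sentence
    gold = [order.index(k) for k in range(len(sentences))]
    return shuffled, gold

story = ["S1", "S2", "S3", "S4", "S5"]
shuffled, gold = make_shuffle_task(story)
# Reading the shuffled sentences back through the gold key restores the story.
restored = [shuffled[gold[k]] for k in range(len(story))]
```

A worker's submitted ordering can then be compared slot by slot against this gold key.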
Table 3 summarizes the results of this experiment. The first row shows the result of ordering if we take the absolute majority ordering of the five crowd workers as the final ordering. The second row shows the result of ordering if we consider each of the 250 (50 stories × 5 workers ordering each one) ordering cases independently. As shown, the good stories are perfectly ordered with very high accuracy. It is important to note that this specific set rarely had any linguistic adverbials such as 'first', 'then', etc. to help humans infer the ordering, so the main factors at play are the following: (1) the commonsensical temporal and causal relation between events (narrative schemas), e.g., humans know that first someone loses a phone and then starts searching; (2) the natural way of narrating a story, which starts with introducing the characters and concludes the story at the end. The role of the latter factor is quantified in the misplacement rate of each position reported in Table 3, where the first and last sentences are more often correctly placed than the others. The high precision of ordering for sentences 2 through 4 further verifies the richness of our corpus in terms of logical relations between events.
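The per-position placement rate in the last row of Table 3 can be computed as sketched below; the function name and the toy orderings are hypothetical:

```python
def position_accuracy(orderings, n_positions=5):
    """Per-position correct-placement rate: orderings[w][i] is the slot
    worker w assigned to original sentence i (0-indexed), so a sentence is
    correctly placed when it is assigned back to its own slot."""
    correct = [0] * n_positions
    for ordering in orderings:
        for sent_idx, placed_at in enumerate(ordering):
            if sent_idx == placed_at:
                correct[sent_idx] += 1
    return [c / len(orderings) for c in correct]

# One perfect ordering and one that swaps the two middle sentences.
rates = position_accuracy([[0, 1, 2, 3, 4], [0, 1, 3, 2, 4]])
```

On real data, higher rates at positions 1 and 5 reflect the introduce-characters/conclude-story narrative convention discussed above.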
found that sentence order matches TimeML order 55% of the time. A more comprehensive study of temporal and causal aspects of these stories requires defining a specific semantic annotation framework which covers not only temporal but also causal relations between commonsense events. This is captured in a recent work on semantic annotation of ROCStories (Mostafazadeh et al., 2016).

# 4 A New Evaluation Framework

As described earlier in the introduction, the common evaluation framework for script learning is the 'Narrative Cloze Test' (Chambers and Jurafsky, 2008), where a system generates a ranked list of guesses for a missing event, given some observed events. The original goal of this test was to provide a comparative measure to evaluate narrative knowledge. However, gradually, the community started optimizing towards performance on the test itself, achieving higher scores without demonstrating narrative knowledge learning. For instance, generating the ranked list according to the event's corpus frequency (e.g., always predicting 'X said') was shown to be an extremely strong baseline (Pichotta and Mooney, 2014b). Originally, narrative cloze test chains were extracted by hand and verified as gold chains. However, the cloze test chains used in all of the most recent works are not human-verified as gold.
TimeML Annotation: TimeML-driven analysis of these stories can give us finer-grained insight into the temporal aspect of the events in this corpus. We performed a simplified TimeML-driven (Pustejovsky et al., 2003) expert annotation of a sample of 20 stories. Among all the temporal links (TLINKs) annotated, 62% were 'before' and 10% were 'simultaneous'. We were interested to know if the actual text order mirrors the real-world order of events. We

[Footnote: This set can be found here: https://goo.gl/pgm2KR]
[Footnote: The annotation is available: http://goo.gl/7qdNsb]

It is evident that there is a need for a more systematic automatic evaluation framework which is more in line with the original deeper script/story understanding goals. It is important to note that reordering of temporally shuffled stories (Section 3.3) can serve as a framework to evaluate a system's story understanding. However, reordering can be achieved to a degree by using various surface features such as adverbials, so this cannot be a foolproof story understanding evaluation framework. Our ROCStories corpus enables a brand new framework for evaluating story understanding, called the 'Story Cloze Test'.
# 4.1 Story Cloze Test

The cloze task (Taylor, 1953) is used to evaluate a human (or a system) for language understanding by deleting a random word from a sentence and having a human fill in the blank. We introduce the 'Story Cloze Test', in which a system is given a four-sentence 'context' and two alternative endings to the story, called the 'right ending' and the 'wrong ending'. Hence, in this test the fifth sentence is blank. The system's task is then to choose the right ending. The 'right ending' can be viewed as the 'entailing' hypothesis in a classic Recognizing Textual Entailment (RTE) framework (Giampiccolo et al., 2007), and the 'wrong ending' can be seen as the 'contradicting' hypothesis. Table 4 shows three example Story Cloze Test cases.
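The test format can be sketched as an evaluation loop: any system reduces to a scorer that rates how well an ending fits a context, and accuracy is the fraction of cases where the right ending scores higher. This is an illustrative sketch, not the paper's harness; `evaluate` and `overlap_scorer` are hypothetical names:

```python
def evaluate(scorer, cases):
    """Story Cloze accuracy: each case is (context_sentences, right_ending,
    wrong_ending); the system picks whichever ending the scorer rates higher."""
    correct = 0
    for context, right, wrong in cases:
        if scorer(context, right) > scorer(context, wrong):
            correct += 1
    return correct / len(cases)

def overlap_scorer(context, ending):
    # Toy scorer: how many of the ending's words appear in the context.
    ctx_words = set(" ".join(context).lower().split())
    return len(ctx_words & set(ending.lower().split()))

cases = [(
    ["Tom and Sheryl went to a carnival together.",
     "He won her several stuffed bears.",
     "He bought her funnel cakes.",
     "When they reached the Ferris wheel, he got down on one knee."],
    "Tom asked Sheryl to marry him.",
    "He wiped mud off of his boot.",
)]
accuracy = evaluate(overlap_scorer, cases)
```

The baselines in Section 5 all fit this interface; only the scorer changes.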
The Story Cloze Test will serve as a generic story understanding evaluation framework, also applicable to the evaluation of story generation models (for instance, by computing the log-likelihoods assigned to the two ending alternatives by the story generation model), which does not necessarily imply a requirement for explicit narrative knowledge learning. However, it is safe to say that any model that performs well on the Story Cloze Test is demonstrating some level of deeper story understanding.

# 4.2 Data Collection Methodology

We randomly sampled 13,500 stories from the ROCStories corpus and presented only the first four sentences of each to AMT workers. For each story, a worker was asked to write a 'right ending' and a 'wrong ending'. The workers were prompted to satisfy two conditions: (1) the sentence should follow up the story by sharing at least one of the characters of the story, and (2) the sentence should be entirely realistic and sensible when read in isolation. These conditions make sure that the Story Cloze Test cases are not trivial. More details on this setup are described in the supplementary material.
Quality Control: The accuracy of the Story Cloze Test can play a crucial role in directing the research community in the right trajectory. We implemented the following two-step quality control:

1. Qualification Test: We designed a qualification test for this task, where the workers had to choose whether or not a given 'right ending' and 'wrong ending' satisfy our constraints. At this stage we collected 13,500 cloze test cases.

2. Human Verification: In order to further validate the cloze test cases, we compiled the 13,500 Story Cloze Test cases into 2 × 13,500 = 27,000 full five-sentence stories. Then for each story we asked three crowd workers to verify whether or not the given sequence of five sentences makes sense as a meaningful and coherent story, rating within {-1, 0, 1}. We then kept only the cloze test cases whose 'right ending' received all ratings of 1 and whose 'wrong ending' received all ratings of 0. This process ensures that there are no boundary cases of 'right ending' and 'wrong ending'. This resulted in a final 3,742 test cases, which were randomly divided into validation and test Story Cloze Test sets. We also made sure to remove the original stories used in the validation and test sets from our ROCStories corpus.
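The verification filter in step 2 amounts to a unanimity check over the three ratings per story version; a minimal sketch, with hypothetical names and toy ratings:

```python
def keep_case(right_ratings, wrong_ratings):
    """Keep a cloze case only when every rater scored the 'right ending'
    story 1 (coherent) and every rater scored the 'wrong ending' story 0,
    discarding all boundary cases."""
    return all(r == 1 for r in right_ratings) and all(r == 0 for r in wrong_ratings)

# Toy ratings from three workers per version, on the {-1, 0, 1} scale.
cases = [
    ((1, 1, 1), (0, 0, 0)),  # unanimous on both versions -> kept
    ((1, 1, 0), (0, 0, 0)),  # right ending not unanimously coherent -> dropped
    ((1, 1, 1), (0, 1, 0)),  # wrong ending judged coherent once -> dropped
]
kept = [c for c in cases if keep_case(*c)]
```

This strictness is why only 3,742 of the 13,500 collected cases survive into the final sets.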
Statistics: Table 5 summarizes the statistics of our crowdsourcing effort. The Story Cloze Test sets can also be accessed through our website.

# 5 Story Cloze Test Models

In this section we demonstrate that the Story Cloze Test cannot be easily tackled by using shallow techniques, without actually understanding the underlying narrative. Following other natural language inference frameworks such as RTE, we evaluate system performance according to the basic accuracy measure, defined as the fraction of test cases answered correctly (# correct / # all). We present the following baselines and models for tackling the Story Cloze Test. All of the models are tested on the validation and test Story Cloze sets, where only the validation set could be used for any tuning purposes.

1. Frequency: Ideally, the Story Cloze Test cases should not be answerable without the context. For example, if for some context the two alternatives are 'He was mad after he won' and 'He was cheerful after he won', the first alternative is simply less probable in the real world than the other one. This baseline chooses the alternative with higher search-engine hits of the main event (verb) together

[Footnote: Given our prompt that the 'wrong ending' sentences should make sense in isolation, such cases should be rare in our dataset.]
[Footnote: https://developers.google.com/custom-search/]
| Context | Right Ending | Wrong Ending |
|---|---|---|
| Tom and Sheryl have been together for two years. One day, they went to a carnival together. He won her several stuffed bears, and bought her funnel cakes. When they reached the Ferris wheel, he got down on one knee. | Tom asked Sheryl to marry him. | He wiped mud off of his boot. |
| Karen was assigned a roommate her first year of college. Her roommate asked her to go to a nearby city for a concert. Karen agreed happily. The show was absolutely exhilarating. | Karen became good friends with her roommate. | Karen hated her roommate. |
| Jim got his first credit card in college. He didn't have a job so he bought everything on his card. After he graduated he amounted a $10,000 debt. Jim realized that he was foolish to spend so much money. | Jim decided to devise a plan for repayment. | Jim decided to open another credit card. |

Table 4: Three example Story Cloze Test cases, completed by our crowd workers.

| # cases collected | 13,500 |
| # workers participated | 282 |
| Average # cases written by one worker | 47.8 |
| Max # cases written by one worker | 1461 |
| Average payment per test case (cents) | 10 |
| Size of the final set (verified by human) | 3,744 |

Table 5: Statistics for crowdsourcing Story Cloze Test instances.
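The Frequency baseline (model 1) can be sketched as a context-free comparison of event frequencies. The hit counts below are invented stand-ins for the search-engine hits the paper queries; `EVENT_HITS` and `frequency_choice` are hypothetical names:

```python
# Invented hit counts for (agent, verb/state, modifier) event tuples,
# standing in for search-engine hit counts of the main event with its roles.
EVENT_HITS = {
    ("he", "be-mad", "after-win"): 15,
    ("he", "be-cheerful", "after-win"): 120,
}

def frequency_choice(event_a, event_b, hits=EVENT_HITS):
    """Pick the ending whose main event is more frequent in the world,
    ignoring the story context entirely."""
    return event_a if hits.get(event_a, 0) >= hits.get(event_b, 0) else event_b

pick = frequency_choice(("he", "be-mad", "after-win"),
                        ("he", "be-cheerful", "after-win"))
```

Because it never looks at the context, this baseline can only exploit test cases that are answerable in isolation, which the collection prompts were designed to rule out.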
This model is trained on the 'BookCorpus' (Zhu et al., 2015) (containing 16 different genres) of over 11,000 books. We use the skip-thoughts embedding of the alternatives and contexts for making the decision in the same way as with the GenSim model.

with its semantic roles (e.g., 'I*poison*flowers' vs. 'I*nourish*flowers'). We extract the main verb and its corresponding roles using the TRIPS semantic parser.

2. N-gram Overlap: Simply chooses the alternative which shares more n-grams with the context. We compute the Smoothed-BLEU (Lin and Och, 2004) score for measuring up to four-gram overlap of an alternative and the context.

3. GenSim: Average Word2Vec: Choose the hypothesis with closer average word2vec (Mikolov et al., 2013) embedding to the average word2vec embedding of the context. This is basically an enhanced word overlap baseline, which accounts for semantic similarity.
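The n-gram overlap idea (model 2) can be sketched with a crude stand-in for Smoothed-BLEU that simply counts matched n-grams up to order four; all names here are hypothetical, and the real baseline uses the Smoothed-BLEU metric of Lin and Och (2004):

```python
def tokens(text):
    # Minimal tokenization: lowercase and strip sentence punctuation.
    return text.lower().replace(".", "").replace(",", "").split()

def ngrams(toks, n):
    return [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]

def overlap_score(context, ending, max_n=4):
    """Total count of the ending's n-grams (n = 1..4) that also appear
    in the context -- a rough proxy for Smoothed-BLEU."""
    ctx_toks, end_toks = tokens(context), tokens(ending)
    score = 0
    for n in range(1, max_n + 1):
        ctx_set = set(ngrams(ctx_toks, n))
        score += sum(1 for g in ngrams(end_toks, n) if g in ctx_set)
    return score

def choose_by_overlap(context, ending_a, ending_b):
    a, b = overlap_score(context, ending_a), overlap_score(context, ending_b)
    return ending_a if a >= b else ending_b

ctx = "Jim got his first credit card in college. He bought everything on his card."
pick = choose_by_overlap(ctx, "Jim decided to open another credit card.",
                         "He wiped mud off of his boot.")
```

Note that on this toy case the surface-overlap heuristic picks the wrong ending, which is exactly why such shallow baselines struggle on the test.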
A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories
4. Sentiment-Full: Chooses the hypothesis that matches the average sentiment of the context. We use the state-of-the-art sentiment analysis model (Manning et al., 2014), which assigns a numerical value from 1 to 5 to a sentence.

5. Sentiment-Last: Chooses the hypothesis that matches the sentiment of the last context sentence.

6. Skip-thoughts Model: This model uses the Skip-thoughts Sentence2Vec embedding (Kiros et al., 2015), which models the semantic space of novels.

7. Narrative Chains-AP: Implements the standard approach to learning chains of narrative events, based on Chambers and Jurafsky (2008). An event is represented as a verb and a typed dependency (e.g., the subject of runs). We computed the PMI between all event pairs in the Associated Press (AP) portion of the English Gigaword Corpus that occur at least 2 times. We run coreference over the given story and choose the hypothesis whose coreferring entity has the highest average PMI score with the entity's chain in the story. If no entity corefers in both hypotheses, the model randomly chooses one of the hypotheses.

8. Narrative Chains-Stories: The same model as above, but trained on ROCStories.
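The chain-scoring step of baselines 7 and 8 can be sketched as follows. The tiny corpus of protagonist chains below is hypothetical; the real model extracts (verb, typed-dependency) events from Gigaword AP or ROCStories via parsing and coreference.

```python
import math
from collections import Counter
from itertools import combinations

# Hypothetical protagonist chains, each a sequence of (verb, dependency) events
# as in Chambers and Jurafsky (2008).
chains = [
    [("arrest", "obj"), ("charge", "obj"), ("convict", "obj")],
    [("arrest", "obj"), ("charge", "obj"), ("sentence", "obj")],
    [("eat", "subj"), ("pay", "subj"), ("leave", "subj")],
]

pair_counts = Counter()
event_counts = Counter()
for chain in chains:
    event_counts.update(chain)
    for e1, e2 in combinations(chain, 2):
        pair_counts[frozenset([e1, e2])] += 1

total_pairs = sum(pair_counts.values())
total_events = sum(event_counts.values())


def pmi(e1, e2):
    """Pointwise mutual information between two events; -inf if never paired."""
    joint = pair_counts[frozenset([e1, e2])]
    if joint == 0:
        return float("-inf")
    p_joint = joint / total_pairs
    p1 = event_counts[e1] / total_events
    p2 = event_counts[e2] / total_events
    return math.log(p_joint / (p1 * p2))


def score_ending(story_chain, ending_event):
    """Average PMI between a candidate ending's event and the story's chain."""
    return sum(pmi(e, ending_event) for e in story_chain) / len(story_chain)


story = [("arrest", "obj"), ("charge", "obj")]
# ("convict", "obj") co-occurs with the arrest/charge chain, so it outranks ("eat", "subj").
```

The hypothesis whose event yields the higher `score_ending` value against the protagonist's chain is chosen as the ending.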
9. Deep Structured Semantic Model (DSSM): This model (Huang et al., 2013) is trained to project the four-sentence context and the fifth sentence into the same vector space. It consists of two separate deep neural networks that jointly learn the embeddings of the four-sentence context and of the fifth sentence, respectively. As suggested in Huang et al. (2013), the input of the DSSM is based on context-dependent characters, e.g., the distribution count of letter-trigrams in the context and in the fifth sentence, respectively. The hyperparameters of the DSSM are determined on the validation set, while the model's parameters are trained on the ROCStories corpus. In our experiment, each of the two neural networks in the DSSM has two layers; the dimen…

[Table 6: Accuracy of the baselines on the Story Cloze Test validation and test sets. Validation Set: 0.514, 0.506, 0.477, 0.545, 0.489, 0.514, 0.536, 0.472, 0.510, 0.604, 1.0; Test Set: 0.513, 0.520, …]
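The letter-trigram ("word hashing") featurization that feeds the DSSM can be sketched as follows; this only illustrates the input representation from Huang et al. (2013), not the two-tower network or its training.

```python
from collections import Counter


def letter_trigrams(text):
    """Word-hashing features: each word is padded with '#' boundary markers
    and decomposed into overlapping letter-trigrams, whose counts form the
    sparse input vector fed to each tower of the DSSM."""
    counts = Counter()
    for word in text.lower().split():
        padded = f"#{word}#"
        for i in range(len(padded) - 2):
            counts[padded[i:i + 3]] += 1
    return counts


# "good" decomposes into #go, goo, ood, od#
features = letter_trigrams("A good ending")
```

Word hashing keeps the input dimensionality bounded by the number of distinct letter-trigrams rather than by the vocabulary size, which is the main motivation given in the original DSSM paper.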
The results of evaluating these models on the Story Cloze validation and test sets are shown in Table 6. The constant-choose-first (51%) and human (100%) performances are also provided for comparison. Note that these sets were doubly verified by humans, hence they do not contain any boundary cases, resulting in 100% human performance. The DSSM model achieves the highest accuracy, but only 7.2 points higher than constant-choose-first. Error analysis on the narrative chains model shows why this and other event-based language models are not sufficient for the task: often, the final sentences of our stories contain complex events beyond the main verb, such as 'Bill was highly unprepared' or 'He had to go to a homeless shelter'. Event language models only look at the verb and a syntactic relation like 'was-object' and 'go-to'. In that sense, going to a homeless shelter is the same as going to the beach. This suggests the need for richer semantic representations of events in narratives. Our proposed Story Cloze Test offers a new challenge to the community.
# 6 Discussion

There are three core contributions in this paper: (1) a new corpus of commonsense stories, called ROCStories; (2) a new evaluation framework for evaluating script/story learners, called the Story Cloze Test; and (3) a host of first approaches to tackle this new test framework. The ROCStories corpus is the first crowd-sourced corpus of its kind for the community. We have released about 50k stories, as well as validation and test sets for the Story Cloze Test. This dataset will eventually grow to 100k stories, which will be released through our website. Although it is possible to keep increasing the size of the training data, in order to continue making meaningful progress on this task we expect the community to develop models that learn to generalize to unseen commonsense concepts and situations.

The Story Cloze Test proved to be a challenge to all of the models we tested. We believe it will serve as an effective evaluation for both story understanding and script knowledge learners, and we encourage the community to benchmark their progress by reporting results on the Story Cloze test set. Compared to the previous Narrative Cloze Test, we found that one of the early models for that task actually performs worse than random guessing. We conclude that while the Narrative Cloze Test spurred interest in script learning, it ultimately does not evaluate deeper knowledge and language understanding.
# Acknowledgments

We would like to thank the amazing crowd workers whose endless hours of daily story writing made this research possible. We thank William de Beaumont and Choh Man Teng for their work on the TRIPS parser. We thank Alyson Grealish for her great help in the quality control of our corpus. This work was supported in part by Grant W911NF-15-1-0542 with the US Defense Advanced Research Projects Agency (DARPA), the Army Research Office (ARO) and the Office of Naval Research (ONR). Our data collection effort was sponsored by the Nuance Foundation.

# References

James F. Allen, Mary Swift, and Will de Beaumont. 2008. Deep semantic analysis of text. In Proceedings of the 2008 Conference on Semantics in Text Processing, STEP '08, pages 343-354, Stroudsburg, PA, USA. Association for Computational Linguistics.

Paul Bailey. 1999. Searching for storiness: Story-generation from a reader's perspective. In AAAI Fall Symposium on Narrative Intelligence.

Niranjan Balasubramanian, Stephen Soderland, Mausam, and Oren Etzioni. 2013. Generating coherent event schemas at scale. In EMNLP, pages 1721-1731.
David Bamman, Brendan O'Connor, and Noah Smith. 2013. Learning latent personas of film characters. In ACL.

Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642, Stroudsburg, PA. Association for Computational Linguistics.

K. Burton, A. Java, and I. Soboroff. 2009. The ICWSM 2009 Spinn3r dataset. In Proceedings of the Third Annual Conference on Weblogs and Social Media (ICWSM 2009), San Jose, CA.

Nathanael Chambers and Daniel Jurafsky. 2008. Unsupervised learning of narrative event chains. In Kathleen McKeown, Johanna D. Moore, Simone Teufel, James Allan, and Sadaoki Furui, editors, ACL, pages 789-797. The Association for Computer Linguistics.
Nathanael Chambers and Dan Jurafsky. 2009. Unsupervised learning of narrative schemas and their participants. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2, ACL '09, pages 602-610, Stroudsburg, PA, USA. Association for Computational Linguistics.

Nathanael Chambers. 2013. Event schema induction with a probabilistic entity-driven model. In EMNLP, volume 13, pages 1797-1807.

Eugene Charniak. 1972. Toward a model of children's story comprehension. December.

Jackie Cheung, Hoifung Poon, and Lucy Vanderwende. 2013. Probabilistic frame induction. In ACL.

E. M. Forster. 1927. Aspects of the Novel. Arnold, London.

Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recognizing textual entailment challenge. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, RTE '07, pages 1-9, Stroudsburg, PA, USA. ACL.
Andrew S. Gordon and Reid Swanson. 2009. Identifying Personal Stories in Millions of Weblog Entries. In Third International Conference on Weblogs and Social Media, Data Challenge Workshop, San Jose, CA, May.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 1693-1701. Curran Associates, Inc.

Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM International Conference on Information & Knowledge Management, CIKM '13, pages 2333-2338, New York, NY, USA. ACM.
Bram Jans, Steven Bethard, Ivan Vulić, and Marie-Francine Moens. 2012. Skip n-grams and ranking functions for predicting script events. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 336-344. Association for Computational Linguistics.

Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2015. Skip-thought vectors. In NIPS.

Hector J. Levesque. 2011. The Winograd Schema Challenge. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning. AAAI.

Chin-Yew Lin and Franz Josef Och. 2004. Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, ACL '04, Stroudsburg, PA, USA. Association for Computational Linguistics.
Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55-60.

Mehdi Manshadi, Reid Swanson, and Andrew S. Gordon. 2008. Learning a Probabilistic Model of Event Sequences From Internet Weblog Stories. In 21st Conference of the Florida AI Society, Applied Natural Language Processing Track, Coconut Grove, FL, May.

Neil McIntyre and Mirella Lapata. 2009. Learning to tell tales: A data-driven approach to story generation. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 217-225, Singapore.

Gonzalo Méndez, Pablo Gervás, and Carlos León. 2014. A model of character affinity for agent-based story generation. In 9th International Conference on Knowledge, Information and Creativity Support Systems, Limassol, Cyprus, 11/2014. Springer-Verlag.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013, December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 3111-3119.

G. Miller. 1995. WordNet: A lexical database for English. In Communications of the ACM.

Nasrin Mostafazadeh, Alyson Grealish, Nathanael Chambers, James F. Allen, and Lucy Vanderwende. 2016. Semantic annotation of event structures in commonsense stories. In Proceedings of the 4th Workshop on EVENTS: Definition, Detection, Coreference, and Representation, San Diego, California, June. Association for Computational Linguistics.

Erik T. Mueller. 2002. Understanding script-based stories using commonsense reasoning. Cognitive Systems Research, 5:2004.

Erik T. Mueller. 2007. Modeling space and time in narratives about restaurants. LLC, 22(1):67-84.
Kiem-Hieu Nguyen, Xavier Tannier, Olivier Ferret, and Romaric Besançon. 2015. Generative event schema induction with entity disambiguation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL-15).

Karl Pichotta and Raymond J. Mooney. 2014. Statistical script learning with multi-argument events. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2014), page 220, Gothenburg, Sweden, April.

James Pustejovsky, José Castaño, Robert Ingria, Roser Saurí, Robert Gaizauskas, Andrea Setzer, and Graham Katz. 2003. TimeML: Robust specification of event and temporal expressions in text. In Fifth International Workshop on Computational Semantics (IWCS-5).
1604.01696#45
A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories
Representation and learning of commonsense knowledge is one of the foundational problems in the quest to enable deep language understanding. This issue is particularly challenging for understanding causal and correlational relationships between events. While this topic has received a lot of interest in the NLP community, research has been hindered by the lack of a proper evaluation framework. This paper attempts to address this problem with a new framework for evaluating story understanding and script learning: the 'Story Cloze Test'. This test requires a system to choose the correct ending to a four-sentence story. We created a new corpus of ~50k five-sentence commonsense stories, ROCStories, to enable this evaluation. This corpus is unique in two ways: (1) it captures a rich set of causal and temporal commonsense relations between daily events, and (2) it is a high quality collection of everyday life stories that can also be used for story generation. Experimental evaluation shows that a host of baselines and state-of-the-art models based on shallow language understanding struggle to achieve a high score on the Story Cloze Test. We discuss these implications for script and story learning, and offer suggestions for deeper language understanding.
http://arxiv.org/pdf/1604.01696
Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, James Allen
cs.CL, cs.AI
In Proceedings of the 2016 North American Chapter of the ACL (NAACL HLT), 2016
null
cs.CL
20160406
20160406
[]
1604.01696
46
Michaela Regneri, Alexander Koller, and Manfred Pinkal. 2010. Learning script knowledge with web experiments. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 979-988. Association for Computational Linguistics. Matthew Richardson, Christopher J. C. Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP, pages 193-203. ACL. M. Riedl and Carlos León. 2008. Toward vignette-based story generation for drama management systems. In Workshop on Integrating Technologies for Interactive Stories - 2nd International Conference on INtelligent TEchnologies for interactive enterTAINment, 8-10/1. Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S. Gordon. 2011. Choice of Plausible Alternatives: An Evaluation of Commonsense Causal Reasoning. In AAAI Spring Symposium on Logical Formalizations of Commonsense Reasoning, Stanford University, March. Rachel Rudinger, Pushpendre Rastogi, Francis Ferraro, and Benjamin Van Durme. 2015. Script induction as language modeling. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP-15).
1604.01696#46
A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories
Representation and learning of commonsense knowledge is one of the foundational problems in the quest to enable deep language understanding. This issue is particularly challenging for understanding causal and correlational relationships between events. While this topic has received a lot of interest in the NLP community, research has been hindered by the lack of a proper evaluation framework. This paper attempts to address this problem with a new framework for evaluating story understanding and script learning: the 'Story Cloze Test'. This test requires a system to choose the correct ending to a four-sentence story. We created a new corpus of ~50k five-sentence commonsense stories, ROCStories, to enable this evaluation. This corpus is unique in two ways: (1) it captures a rich set of causal and temporal commonsense relations between daily events, and (2) it is a high quality collection of everyday life stories that can also be used for story generation. Experimental evaluation shows that a host of baselines and state-of-the-art models based on shallow language understanding struggle to achieve a high score on the Story Cloze Test. We discuss these implications for script and story learning, and offer suggestions for deeper language understanding.
http://arxiv.org/pdf/1604.01696
Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, James Allen
cs.CL, cs.AI
In Proceedings of the 2016 North American Chapter of the ACL (NAACL HLT), 2016
null
cs.CL
20160406
20160406
[]
1604.01696
47
Roger C. Schank and Robert P. Abelson. 1977. Scripts, Plans, Goals and Understanding: an Inquiry into Human Knowledge Structures. L. Erlbaum, Hillsdale, NJ. Lenhart K. Schubert and Chung Hee Hwang. 2000. Episodic logic meets little red riding hood: A comprehensive, natural representation for language understanding. In Natural Language Processing and Knowledge Representation: Language for Knowledge and Knowledge for Language. MIT/AAAI Press. Reid Swanson and Andrew S. Gordon. 2008. Say Anything: A Massively collaborative Open Domain Story Writing Companion. In First International Conference on Interactive Digital Storytelling, Erfurt, Germany, November. Wilson L Taylor. 1953. Cloze procedure: a new tool for measuring readability. Journalism quarterly. Scott R. Turner. 1994. The creative process: A computer model of storytelling. Hillsdale: Lawrence Erlbaum. Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. 2015. Towards ai-complete question answering: A set of prerequisite toy tasks. CoRR, abs/1502.05698. Terry Winograd. 1972. Understanding Natural Language. Academic Press, Inc., Orlando, FL, USA.
1604.01696#47
A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories
Representation and learning of commonsense knowledge is one of the foundational problems in the quest to enable deep language understanding. This issue is particularly challenging for understanding causal and correlational relationships between events. While this topic has received a lot of interest in the NLP community, research has been hindered by the lack of a proper evaluation framework. This paper attempts to address this problem with a new framework for evaluating story understanding and script learning: the 'Story Cloze Test'. This test requires a system to choose the correct ending to a four-sentence story. We created a new corpus of ~50k five-sentence commonsense stories, ROCStories, to enable this evaluation. This corpus is unique in two ways: (1) it captures a rich set of causal and temporal commonsense relations between daily events, and (2) it is a high quality collection of everyday life stories that can also be used for story generation. Experimental evaluation shows that a host of baselines and state-of-the-art models based on shallow language understanding struggle to achieve a high score on the Story Cloze Test. We discuss these implications for script and story learning, and offer suggestions for deeper language understanding.
http://arxiv.org/pdf/1604.01696
Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, James Allen
cs.CL, cs.AI
In Proceedings of the 2016 North American Chapter of the ACL (NAACL HLT), 2016
null
cs.CL
20160406
20160406
[]
1604.00289
0
In press at Behavioral and Brain Sciences. # Building Machines That Learn and Think Like People Brenden M. Lake,1 Tomer D. Ullman,2,4 Joshua B. Tenenbaum,2,4 and Samuel J. Gershman3,4 1Center for Data Science, New York University 2Department of Brain and Cognitive Sciences, MIT 3Department of Psychology and Center for Brain Science, Harvard University 4Center for Brains Minds and Machines # Abstract
1604.00289#0
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
http://arxiv.org/pdf/1604.00289
Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman
cs.AI, cs.CV, cs.LG, cs.NE, stat.ML
In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentary
null
cs.AI
20160401
20161102
[ { "id": "1511.06114" }, { "id": "1510.05067" }, { "id": "1602.05179" }, { "id": "1603.08575" } ]
1604.00289
1
# Abstract Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models. # 1 Introduction
1604.00289#1
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
http://arxiv.org/pdf/1604.00289
Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman
cs.AI, cs.CV, cs.LG, cs.NE, stat.ML
In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentary
null
cs.AI
20160401
20161102
[ { "id": "1511.06114" }, { "id": "1510.05067" }, { "id": "1602.05179" }, { "id": "1603.08575" } ]
1604.00289
2
# 1 Introduction Artificial intelligence (AI) has been a story of booms and busts, yet by any traditional measure of success, the last few years have been marked by exceptional progress. Much of this progress has come from recent advances in “deep learning,” characterized by learning large neural-network-style models with multiple layers of representation. These models have achieved remarkable gains in many domains spanning object recognition, speech recognition, and control (LeCun, Bengio, & Hinton, 2015; Schmidhuber, 2015). In object recognition, Krizhevsky, Sutskever, and Hinton (2012) trained a deep convolutional neural network (convnets; LeCun et al., 1989) that nearly halved the error rate of the previous state-of-the-art on the most challenging benchmark to date. In the years since, convnets continue to dominate, recently approaching human-level performance on some object recognition benchmarks (He, Zhang, Ren, & Sun, 2015; Russakovsky et al., 2015; Szegedy et al., 2014). In automatic speech recognition, Hidden Markov Models (HMMs) have been the leading approach since the late 1980s (Juang & Rabiner, 1990), yet this framework has been chipped away piece by piece and replaced with deep learning components (Hinton et al.,
1604.00289#2
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
http://arxiv.org/pdf/1604.00289
Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman
cs.AI, cs.CV, cs.LG, cs.NE, stat.ML
In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentary
null
cs.AI
20160401
20161102
[ { "id": "1511.06114" }, { "id": "1510.05067" }, { "id": "1602.05179" }, { "id": "1603.08575" } ]
1604.00289
3
2012). Now, the leading approaches to speech recognition are fully neural network systems (Graves, Mohamed, & Hinton, 2013; Weng, Yu, Watanabe, & Juang, 2014). Ideas from deep learning have also been applied to learning complex control problems. V. Mnih et al. (2015) combined ideas from deep learning and reinforcement learning to make a “deep reinforcement learning” algorithm that learns to play large classes of simple video games from just frames of pixels and the game score, achieving human or superhuman level performance on many of these games (see also Guo, Singh, Lee, Lewis, & Wang, 2014; Schaul, Quan, Antonoglou, & Silver, 2016; Stadie, Levine, & Abbeel, 2016). These accomplishments have helped neural networks regain their status as a leading paradigm in machine learning, much as they were in the late 1980s and early 1990s. The recent success of neural networks has captured attention beyond academia. In industry, companies such as Google and Facebook have active research divisions exploring these technologies, and object and speech recognition systems based on deep learning have been deployed in core products on smart phones and the web. The media has also covered many of the recent achievements of neural networks, often expressing the view that neural networks have achieved this recent success by virtue of their brain-like computation and thus their ability to emulate human learning and human cognition.
1604.00289#3
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
http://arxiv.org/pdf/1604.00289
Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman
cs.AI, cs.CV, cs.LG, cs.NE, stat.ML
In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentary
null
cs.AI
20160401
20161102
[ { "id": "1511.06114" }, { "id": "1510.05067" }, { "id": "1602.05179" }, { "id": "1603.08575" } ]
1604.00289
4
In this article, we view this excitement as an opportunity to examine what it means for a machine to learn or think like a person. We first review some of the criteria previously offered by cognitive scientists, developmental psychologists, and AI researchers. Second, we articulate what we view as the essential ingredients for building such a machine that learns or thinks like a person, synthesizing theoretical ideas and experimental data from research in cognitive science. Third, we consider contemporary AI (and deep learning in particular) in light of these ingredients, finding that deep learning models have yet to incorporate many of them and so may be solving some problems in different ways than people do. We end by discussing what we view as the most plausible paths towards building machines that learn and think like people. This includes prospects for integrating deep learning with the core cognitive ingredients we identify, inspired in part by recent work fusing neural networks with lower-level building blocks from classic psychology and computer science (attention, working memory, stacks, queues) that have traditionally been seen as incompatible.
1604.00289#4
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
http://arxiv.org/pdf/1604.00289
Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman
cs.AI, cs.CV, cs.LG, cs.NE, stat.ML
In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentary
null
cs.AI
20160401
20161102
[ { "id": "1511.06114" }, { "id": "1510.05067" }, { "id": "1602.05179" }, { "id": "1603.08575" } ]
1604.00289
5
Beyond the specific ingredients in our proposal, we draw a broader distinction between two different computational approaches to intelligence. The statistical pattern recognition approach treats prediction as primary, usually in the context of a specific classification, regression, or control task. In this view, learning is about discovering features that have high value states in common – a shared label in a classification setting or a shared value in a reinforcement learning setting – across a large, diverse set of training data. The alternative approach treats models of the world as primary, where learning is the process of model-building. Cognition is about using these models to understand the world, to explain what we see, to imagine what could have happened that didn’t, or what could be true that isn’t, and then planning actions to make it so. The difference between pattern recognition and model-building, between prediction and explanation, is central to our view of human intelligence. Just as scientists seek to explain nature, not simply predict it, we see human thought as fundamentally a model-building activity. We elaborate this key point with numerous examples
1604.00289#5
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
http://arxiv.org/pdf/1604.00289
Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman
cs.AI, cs.CV, cs.LG, cs.NE, stat.ML
In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentary
null
cs.AI
20160401
20161102
[ { "id": "1511.06114" }, { "id": "1510.05067" }, { "id": "1602.05179" }, { "id": "1603.08575" } ]
1604.00289
6
seek to explain nature, not simply predict it, we see human thought as fundamentally a model-building activity. We elaborate this key point with numerous examples below. We also discuss how pattern recognition, even if it is not the core of intelligence, can nonetheless support model-building, through “model-free” algorithms that learn through experience how to make essential inferences more computationally efficient.
1604.00289#6
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
http://arxiv.org/pdf/1604.00289
Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman
cs.AI, cs.CV, cs.LG, cs.NE, stat.ML
In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentary
null
cs.AI
20160401
20161102
[ { "id": "1511.06114" }, { "id": "1510.05067" }, { "id": "1602.05179" }, { "id": "1603.08575" } ]
1604.00289
7
2 Before proceeding, we provide a few caveats about the goals of this article and a brief overview of the key ideas. # 1.1 What this article is not For nearly as long as there have been neural networks, there have been critiques of neural networks (Crick, 1989; Fodor & Pylyshyn, 1988; Marcus, 1998, 2001; Minsky & Papert, 1969; Pinker & Prince, 1988). While we are critical of neural networks in this article, our goal is to build on their successes rather than dwell on their shortcomings. We see a role for neural networks in developing more human-like learning machines: They have been applied in compelling ways to many types of machine learning problems, demonstrating the power of gradient-based learning and deep hierarchies of latent variables. Neural networks also have a rich history as computational models of cognition (McClelland, Rumelhart, & the PDP Research Group, 1986; Rumelhart, McClelland, & the PDP Research Group, 1986) – a history we describe in more detail in the next section. At a more fundamental level, any computational model of learning must ultimately be grounded in the brain’s biological neural networks.
1604.00289#7
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
http://arxiv.org/pdf/1604.00289
Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman
cs.AI, cs.CV, cs.LG, cs.NE, stat.ML
In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentary
null
cs.AI
20160401
20161102
[ { "id": "1511.06114" }, { "id": "1510.05067" }, { "id": "1602.05179" }, { "id": "1603.08575" } ]
1604.00289
8
We also believe that future generations of neural networks will look very different from the current state-of-the-art. They may be endowed with intuitive physics, theory of mind, causal reasoning, and other capacities we describe in the sections that follow. More structure and inductive biases could be built into the networks or learned from previous experience with related tasks, leading to more human-like patterns of learning and development. Networks may learn to effectively search for and discover new mental models or intuitive theories, and these improved models will, in turn, enable subsequent learning, allowing systems that learn-to-learn – using previous knowledge to make richer inferences from very small amounts of training data.
1604.00289#8
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
http://arxiv.org/pdf/1604.00289
Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman
cs.AI, cs.CV, cs.LG, cs.NE, stat.ML
In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentary
cs.AI
20160401
20161102
It is also important to draw a distinction between AI that purports to emulate or draw inspiration from aspects of human cognition, and AI that does not. This article focuses on the former. The latter is a perfectly reasonable and useful approach to developing AI algorithms – avoiding cognitive or neural inspiration as well as claims of cognitive or neural plausibility. Indeed, this is how many researchers have proceeded, and this article has little pertinence to work conducted under this research strategy.1 On the other hand, we believe that reverse engineering human intelligence can usefully inform AI and machine learning (and has already done so), especially for the types of domains and tasks that people excel at. Despite recent computational achievements, people are better than machines at solving a range of difficult computational problems, including concept learning, scene understanding, language acquisition, language understanding, speech recognition, etc. Other human cognitive abilities remain difficult to understand computationally, including creativity, common sense, and general purpose reasoning. As long as natural intelligence remains the best example of intelligence, we believe that the project of reverse engineering the human solutions to difficult computational problems will continue to inform and advance AI.

Finally, while we focus on neural network approaches to AI, we do not wish to give the impression that these are the only contributors to recent advances in AI. On the contrary, some of the
Neural network: A network of simple neuron-like processing units that collectively perform complex computations. Neural networks are often organized into layers, including an input layer that presents the data (e.g., an image), hidden layers that transform the data into intermediate representations, and an output layer that produces a response (e.g., a label or an action). Recurrent connections are also popular when processing sequential data.

Deep learning: A neural network with at least one hidden layer (some networks have dozens). Most state-of-the-art deep networks are trained using the backpropagation algorithm to gradually adjust their connection strengths.

Backpropagation: Gradient descent applied to training a deep neural network. The gradient of the objective function (e.g., classification error or log-likelihood) with respect to the model parameters (e.g., connection weights) is used to make a series of small adjustments to the parameters in a direction that improves the objective function.

Convolutional network (convnet): A neural network that uses trainable filters instead of (or in addition to) fully-connected layers with independent weights. The same filter is applied at many locations across an image
(or across a time series), leading to neural networks that are effectively larger but with local connectivity and fewer free parameters.

Model-free and model-based reinforcement learning: Model-free algorithms directly learn a control policy without explicitly building a model of the environment (reward and state transition distributions). Model-based algorithms learn a model of the environment and use it to select actions by planning.

Deep Q-learning: A model-free reinforcement learning algorithm used to train deep neural networks on control tasks such as playing Atari games. A network is trained to approximate the optimal action-value function Q(s, a), which is the expected long-term cumulative reward of taking action a in state s and then optimally selecting future actions.

Generative model: A model that specifies a probability distribution over the data. For instance, in a classification task with examples X and class labels y, a generative model specifies the distribution of data given labels P(X|y), as well as a prior on
labels P(y), which can be used for sampling new examples or for classification by using Bayes' rule to compute P(y|X). A discriminative model instead specifies P(y|X) directly, possibly by using a neural network to predict the label for a given data point, and cannot directly be used to sample new examples or to compute other queries regarding the data. We will generally be concerned with directed generative models (such as Bayesian networks or probabilistic programs) which can be given a causal interpretation, although undirected (non-causal) generative models (such as Boltzmann machines) are also possible.

Program induction: Constructing a program that computes some desired function, where that function is typically specified by training data consisting of example input-output pairs. In the case of probabilistic programs, which specify candidate generative models for data, an abstract description language is used to define a set of allowable programs and learning is a search for the programs likely to have generated the data.
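To make the generative/discriminative contrast concrete, here is a minimal toy sketch (our own hypothetical example, not code from the paper): a generative classifier fits 1-D Gaussian class-conditionals P(X|y) and a prior P(y), classifies via Bayes' rule, and, unlike a discriminative model, can also sample new examples.

```python
import numpy as np

# Toy generative classifier: model P(X|y) as 1-D Gaussians and P(y) as class
# frequencies, then classify via Bayes' rule: P(y|X) proportional to P(X|y) P(y).
rng = np.random.default_rng(0)
X0 = rng.normal(loc=-2.0, scale=1.0, size=200)   # class 0 samples
X1 = rng.normal(loc=+2.0, scale=1.0, size=100)   # class 1 samples

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Fit the generative model: per-class mean/std plus the prior P(y).
params = {0: (X0.mean(), X0.std()), 1: (X1.mean(), X1.std())}
prior = {0: len(X0) / (len(X0) + len(X1)), 1: len(X1) / (len(X0) + len(X1))}

def posterior(x):
    """P(y|x) for each class, via Bayes' rule."""
    joint = {y: gaussian_pdf(x, *params[y]) * prior[y] for y in (0, 1)}
    z = sum(joint.values())
    return {y: joint[y] / z for y in (0, 1)}

# The same model can also *sample* new examples -- something a purely
# discriminative model of P(y|x) cannot do.
def sample():
    y = int(rng.random() < prior[1])
    mu, sigma = params[y]
    return rng.normal(mu, sigma), y

print(posterior(-2.0))  # heavily favors class 0
print(posterior(+2.0))  # heavily favors class 1
```

A discriminative model would skip `params` and `prior` entirely and map x to a label directly, which is why it supports classification but not sampling.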
most exciting recent progress has been in new forms of probabilistic machine learning (Ghahramani, 2015). For example, researchers have developed automated statistical reasoning techniques (Lloyd, Duvenaud, Grosse, Tenenbaum, & Ghahramani, 2014), automated techniques for model building and selection (Grosse, Salakhutdinov, Freeman, & Tenenbaum, 2012), and probabilistic programming languages (e.g., Gelman, Lee, & Guo, 2015; Goodman, Mansinghka, Roy, Bonawitz, & Tenenbaum, 2008; Mansinghka, Selsam, & Perov, 2014). We believe that these approaches will play important roles in future AI systems, and they are at least as compatible with the ideas from cognitive science we discuss here, but a full discussion of those connections is beyond the scope of the current article.

# 1.2 Overview of the key ideas

The central goal of this paper is to propose a set of core ingredients for building more human-like learning and thinking machines. We will elaborate on each of these ingredients and topics in Section 4, but here we briefly overview the key ideas.
The first set of ingredients focuses on developmental “start-up software,” or cognitive capabilities present early in development. There are several reasons for this focus on development. If an ingredient is present early in development, it is certainly active and available well before a child or adult would attempt to learn the types of tasks discussed in this paper. This is true regardless of whether the early-present ingredient is itself learned from experience or innately present. Also, the earlier an ingredient is present, the more likely it is to be foundational to later development and learning.
We focus on two pieces of developmental start-up software (see Wellman & Gelman, 1992, for a review of both). First is intuitive physics (Section 4.1.1): Infants have primitive object concepts that allow them to track objects over time and allow them to discount physically implausible trajectories. For example, infants know that objects will persist over time and that they are solid and coherent. Equipped with these general principles, people can learn more quickly and make more accurate predictions. While a task may be new, physics still works the same way. A second type of software present in early development is intuitive psychology (Section 4.1.2): Infants understand that other people have mental states like goals and beliefs, and this understanding strongly constrains their learning and predictions. A child watching an expert play a new video game can infer that the avatar has agency and is trying to seek reward while avoiding punishment. This inference immediately constrains other inferences, allowing the child to infer what objects are good and what objects are bad. These types of inferences further accelerate the learning of new tasks.
Our second set of ingredients focuses on learning. While there are many perspectives on learning, we see model building as the hallmark of human-level learning, or explaining observed data through the construction of causal models of the world (Section 4.2.2). Under this perspective, the early-present capacities for intuitive physics and psychology are also causal models of the world. A primary job of learning is to extend and enrich these models, and to build analogous causally structured theories of other domains.

Compared to state-of-the-art algorithms in machine learning, human learning is distinguished by its richness and its efficiency. Children come with the ability and the desire to uncover the underlying causes of sparsely observed events and to use that knowledge to go far beyond the paucity of the data. It might seem paradoxical that people are capable of learning these richly structured models from very limited amounts of experience. We suggest that compositionality and learning-to-learn are ingredients that make this type of rapid model learning possible (Sections 4.2.1 and 4.2.3, respectively).
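The combinatorial leverage of compositionality can be made concrete with a toy count (a hypothetical illustration with made-up part names, not an example from the paper): a handful of reusable primitives composes into a far larger space of candidate concepts, which is one way a learner can entertain rich hypotheses despite sparse data.

```python
from itertools import product

# A few reusable primitives compose into a combinatorially large hypothesis
# space: the learner searches over compositions of known parts rather than
# over raw, unstructured parameter space.
strokes = ["line", "arc", "dot", "hook"]      # hypothetical part inventory
relations = ["above", "beside", "attached"]   # hypothetical spatial relations

# Concepts built from exactly three parts joined by two relations:
characters = [
    (p1, r1, p2, r2, p3)
    for p1, p2, p3 in product(strokes, repeat=3)
    for r1, r2 in product(relations, repeat=2)
]
print(len(characters))  # 4^3 * 3^2 = 576 distinct three-part concepts
```

Seven primitives thus yield hundreds of structured hypotheses; adding one more stroke or relation multiplies, rather than adds to, the space.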
A final set of ingredients concerns how the rich models our minds build are put into action, in real time (Section 4.3). It is remarkable how fast we are to perceive and to act. People can comprehend a novel scene in a fraction of a second, or a novel utterance in little more than the time it takes to say it and hear it. An important motivation for using neural networks in machine vision and speech systems is to respond as quickly as the brain does. Although neural networks are usually aiming at pattern recognition rather than model-building, we will discuss ways in which these “model-free” methods can accelerate slow model-based inferences in perception and cognition (Section 4.3.1). By learning to recognize patterns in these inferences, the outputs of inference can be predicted without having to go through costly intermediate steps. Integrating neural networks that “learn to do inference” with rich model-building learning mechanisms offers a promising way to explain how human minds can understand the world so well, so quickly.
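The idea of “learning to do inference” can be sketched in a few lines (a toy illustration under our own assumptions, not code from the paper): a slow, exact model-based inference is run offline to generate training pairs, and a fast recognition model is fit to predict its outputs directly, skipping the costly intermediate computation at run time.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Slow" model-based inference: posterior mean of a latent mu given one noisy
# observation x, computed by brute-force grid enumeration over the prior.
GRID = np.linspace(-5, 5, 1001)
PRIOR = np.exp(-0.5 * GRID**2)          # standard-normal prior (unnormalized)

def slow_posterior_mean(x, noise=1.0):
    like = np.exp(-0.5 * ((x - GRID) / noise) ** 2)
    post = PRIOR * like
    return float(np.sum(GRID * post) / np.sum(post))

# "Fast" recognition model: learn to predict the slow answer directly.
# A least-squares linear fit stands in here for a trained neural network.
xs = rng.uniform(-4, 4, size=500)
targets = np.array([slow_posterior_mean(x) for x in xs])
A = np.stack([xs, np.ones_like(xs)], axis=1)
w, b = np.linalg.lstsq(A, targets, rcond=None)[0]

def fast_posterior_mean(x):
    return w * x + b

# The amortized predictor reproduces the slow inference almost exactly
# (for this conjugate Gaussian case the true mapping is linear: mu = x / 2).
for x in (-2.0, 0.0, 3.0):
    print(x, slow_posterior_mean(x), fast_posterior_mean(x))
```

Once fit, `fast_posterior_mean` answers in one multiply-add instead of a 1001-point enumeration, which is the speed-accuracy bargain the paragraph above describes.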
We will also discuss the integration of model-based and model-free methods in reinforcement learning (Section 4.3.2), an area that has seen rapid recent progress. Once a causal model of a task has been learned, humans can use the model to plan action sequences that maximize future reward; when rewards are used as the metric for success in model-building, this is known as model-based reinforcement learning. However, planning in complex models is cumbersome and slow, making the speed-accuracy trade-off unfavorable for real-time control. By contrast, model-free reinforcement learning algorithms, such as current instantiations of deep reinforcement learning, support fast control but at the cost of inflexibility and possibly accuracy. We will review evidence that humans combine model-based and model-free learning algorithms both competitively and cooperatively, and that these interactions are supervised by metacognitive processes. The sophistication of human-like reinforcement learning has yet to be realized in AI systems, but this is an area where crosstalk between cognitive and engineering approaches is especially promising.
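The model-based/model-free contrast can be illustrated on a toy task (a hypothetical chain world of our own devising, not from the paper): value-iteration planning with a known model and Q-learning from raw experience recover the same policy, but planning exploits the model while Q-learning needs only sampled transitions.

```python
import numpy as np

rng = np.random.default_rng(2)

# A tiny deterministic chain world: states 0..4, actions 0 (left) / 1 (right);
# entering state 4 gives reward 1 and ends the episode.
N, GAMMA = 5, 0.9

def step(s, a):
    s2 = min(max(s + (1 if a == 1 else -1), 0), N - 1)
    done = (s2 == N - 1)
    return s2, float(done), done

# Model-based control: with the model known, plan by value iteration.
V = np.zeros(N)  # V[4] stays 0 (terminal)
for _ in range(50):
    for s in range(N - 1):
        backups = []
        for a in (0, 1):
            s2, r, done = step(s, a)
            backups.append(r + (0.0 if done else GAMMA * V[s2]))
        V[s] = max(backups)

# Model-free control: Q-learning from sampled experience, no model required.
Q = np.zeros((N, 2))
for _ in range(1000):
    s = int(rng.integers(0, N - 1))
    for _ in range(200):  # step cap keeps early random-walk episodes bounded
        greedy = int(np.argmax(Q[s]))
        a = int(rng.integers(0, 2)) if rng.random() < 0.5 else greedy
        s2, r, done = step(s, a)
        target = r + (0.0 if done else GAMMA * Q[s2].max())
        Q[s, a] += 0.5 * (target - Q[s, a])
        s = s2
        if done:
            break

# Both routes converge on the same greedy policy: always move right.
print("planned values:", np.round(V, 3))
print("learned policy:", [int(np.argmax(Q[s])) for s in range(N - 1)])
```

The planner gets the answer in a few sweeps because it can query the model; the model-free learner pays instead with many sampled episodes, mirroring the speed/flexibility trade-off described above.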
# 2 Cognitive and neural inspiration in artificial intelligence

The questions of whether and how AI should relate to human cognitive psychology are older than the terms ‘artificial intelligence’ and ‘cognitive psychology.’ Alan Turing suspected that it is easier to build and educate a child-machine than try to fully capture adult human cognition (Turing, 1950). Turing pictured the child’s mind as a notebook with “rather little mechanism and lots of blank sheets,” and the mind of a child-machine as filling in the notebook by responding to rewards and punishments, similar to reinforcement learning. This view on representation and learning echoes behaviorism, a dominant psychological tradition in Turing’s time. It also echoes the strong empiricism of modern connectionist models, the idea that we can learn almost everything we know from the statistical patterns of sensory inputs.
Cognitive science repudiated the over-simplified behaviorist view and came to play a central role in early AI research (Boden, 2006). Newell and Simon (1961) developed their “General Problem Solver” as both an AI algorithm and a model of human problem solving, which they subsequently tested experimentally (Newell & Simon, 1972). AI pioneers in other areas of research explicitly referenced human cognition, and even published papers in cognitive psychology journals (e.g., Bobrow & Winograd, 1977; Hayes-Roth & Hayes-Roth, 1979; Winograd, 1972). For example, Schank (1972), writing in the journal Cognitive Psychology, declared:

“We hope to be able to build a program that can learn, as a child does, how to do what we have described in this paper instead of being spoon-fed the tremendous information necessary.”

A similar sentiment was expressed by Minsky (1974):

“I draw no boundary between a theory of human thinking and a scheme for making an intelligent machine; no purpose would be served by separating these today since neither domain has theories good enough to explain—or to produce—enough mental capacity.”
Much of this research assumed that human knowledge representation is symbolic and that reasoning, language, planning and vision could be understood in terms of symbolic operations. Parallel to these developments, a radically different approach was being explored, based on neuron-like “sub-symbolic” computations (e.g., Fukushima, 1980; Grossberg, 1976; Rosenblatt, 1958). The representations and algorithms used by this approach were more directly inspired by neuroscience than by cognitive psychology, although ultimately it would flower into an influential school of thought about the nature of cognition—parallel distributed processing (PDP) (McClelland et al., 1986; Rumelhart, McClelland, & the PDP Research Group, 1986). As its name suggests, PDP emphasizes parallel computation by combining simple units to collectively implement sophisticated computations. The knowledge learned by these neural networks is thus distributed across the collection of units rather than localized as in most symbolic data structures.
The resurgence of recent interest in neural networks, more commonly referred to as “deep learning,” shares the same representational commitments and often even the same learning algorithms as the earlier PDP models. “Deep” refers to the fact that more powerful models can be built by composing many layers of representation (see LeCun et al., 2015; Schmidhuber, 2015, for recent reviews), still very much in the PDP style, while utilizing recent advances in hardware and computing capabilities, as well as massive datasets, to learn deeper models.
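The idea of layered composition can be sketched in a few lines of NumPy. This is an illustrative toy, not anything from the paper: the layer sizes, random weights, and `tanh` nonlinearity are all arbitrary assumptions. It shows how "deep" models compose many simple parallel units, and how the knowledge ends up distributed across weight matrices rather than stored in any single symbolic structure.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One PDP-style layer: a parallel, distributed linear map
    over many simple units, followed by a simple nonlinearity."""
    return np.tanh(x @ w + b)

# Illustrative sizes only: 4 input features, two hidden layers of 8 units, 1 output.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
w3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

x = rng.normal(size=(5, 4))   # a batch of 5 inputs
h1 = layer(x, w1, b1)         # knowledge is spread across all of w1, not one unit
h2 = layer(h1, w2, b2)        # "deep" = composing many such layers
y = layer(h2, w3, b3)
print(y.shape)                # one output per input in the batch
```

Training (adjusting `w1`..`w3` by gradient descent on a loss) is what the earlier PDP models and today's deep networks share; only the depth, data, and hardware scale have changed.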
It is also important to clarify that the PDP perspective is compatible with “model building” in addition to “pattern recognition.” Some of the original work done under the banner of PDP (Rumelhart, McClelland, & the PDP Research Group, 1986) is closer to model building than pattern recognition, whereas the recent large-scale discriminative deep learning systems more purely exemplify pattern recognition (see Bottou, 2014, for a related discussion). But, as discussed, there is also a question of the nature of the learned representations within the model – their form, compositionality, and transferability – and the developmental start-up software that was used to get there. We focus on these issues in this paper. Neural network models and the PDP approach offer a view of the mind (and intelligence more broadly) that is sub-symbolic and often populated with minimal constraints and inductive biases
to guide learning. Proponents of this approach maintain that many classic types of structured knowledge, such as graphs, grammars, rules, objects, structural descriptions, programs, etc., can be useful yet misleading metaphors for characterizing thought. These structures are more epiphenomenal than real, emergent properties of more fundamental sub-symbolic cognitive processes (McClelland et al., 2010). Compared to other paradigms for studying cognition, this position on the nature of representation is often accompanied by a relatively “blank slate” vision of initial knowledge and representation, much like Turing’s blank notebook.
When attempting to understand a particular cognitive ability or phenomenon within this paradigm, a common scientific strategy is to train a relatively generic neural network to perform the task, adding additional ingredients only when necessary. This approach has shown that neural networks can behave as if they learned explicitly structured knowledge, such as a rule for producing the past tense of words (Rumelhart & McClelland, 1986), rules for solving simple balance-beam physics problems (McClelland, 1988), or a tree to represent types of living things (plants and animals) and their distribution of properties (Rogers & McClelland, 2004). Training large-scale relatively generic networks is also the best current approach for object recognition (He et al., 2015; Krizhevsky et al., 2012; Russakovsky et al., 2015; Szegedy et al., 2014), where the high-level feature representations of these convolutional nets have also been used to predict patterns of neural response in human and macaque IT cortex (Khaligh-Razavi & Kriegeskorte, 2014; Kriegeskorte, 2015; Yamins et al., 2014) as well as human typicality ratings (Lake,
Zaremba, Fergus, & Gureckis, 2015) and similarity ratings (Peterson, Abbott, & Griffiths, 2016) for images of common objects. Moreover, researchers have trained generic networks to perform structured and even strategic tasks, such as the recent work on using a Deep Q-learning Network (DQN) to play simple video games (V. Mnih et al., 2015). If neural networks have such broad application in machine vision, language, and control, and if they can be trained to emulate the rule-like and structured behaviors that characterize cognition, do we need more to develop truly human-like learning and thinking machines? How far can relatively generic neural networks bring us towards this goal?
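Because the DQN's core is the classic Q-learning update, a minimal tabular sketch may help make the idea concrete. Everything here is an invented toy, a 5-state chain with a reward only at the right end, and the hyperparameters are arbitrary assumptions; the DQN of V. Mnih et al. (2015) replaces the lookup table with a deep network over raw pixels, but the temporal-difference update is the same.

```python
import random

# Toy 5-state chain world (an invented example, not an Atari game):
# the agent starts at state 0 and is rewarded only on reaching state 4.
N_STATES = 5
ACTIONS = [0, 1]                      # 0 = move left, 1 = move right
ALPHA, GAMMA = 0.5, 0.9               # learning rate and discount factor
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    """Deterministic transition; reward 1 only for reaching the last state."""
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(200):                  # episodes under a random behavior policy
    s = 0
    for _ in range(50):               # cap episode length
        a = random.choice(ACTIONS)    # Q-learning is off-policy: explore at random
        s2, r = step(s, a)
        # TD update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2
        if s == N_STATES - 1:
            break

# After training, the greedy policy prefers "right" in every non-terminal state.
print(all(Q[(s, 1)] > Q[(s, 0)] for s in range(N_STATES - 1)))
```

The table works only because the state space is tiny; Atari screens have far too many states to enumerate, which is why the DQN learns a function approximation of Q instead.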
# 3 Challenges for building more human-like machines

While cognitive science has not yet converged on a single account of the mind or intelligence, the claim that a mind is a collection of general purpose neural networks with few initial constraints is rather extreme in contemporary cognitive science. A different picture has emerged that highlights the importance of early inductive biases, including core concepts such as number, space, agency and objects, as well as powerful learning algorithms that rely on prior knowledge to extract knowledge from small amounts of training data. This knowledge is often richly organized and theory-like in structure, capable of the graded inferences and productive capacities characteristic of human thought. Here we present two challenge problems for machine learning and AI: learning simple visual concepts (Lake, Salakhutdinov, & Tenenbaum, 2015) and learning to play the Atari game Frostbite (V. Mnih et al., 2015). We also use the problems as running examples to illustrate the importance of core cognitive ingredients in the sections that follow.
# 3.1 The Characters Challenge

The first challenge concerns handwritten character recognition, a classic problem for comparing different types of machine learning algorithms. Hofstadter (1985) argued that the problem of recognizing characters in all the ways people do – both handwritten and printed – contains most if not all of the fundamental challenges of AI. Whether or not this statement is right, it highlights the surprising complexity that underlies even “simple” human-level concepts like letters. More practically, handwritten character recognition is a real problem that children and adults must learn to solve, with practical applications ranging from reading addresses on envelopes to reading checks at an ATM. Handwritten character recognition is also simpler than more general forms of object recognition – the object of interest is two-dimensional, separated from the background, and usually unoccluded. Compared to how people learn and see other types of objects, it seems possible, in the near term, to build algorithms that can see most of the structure in characters that people can see.
The standard benchmark is the MNIST data set for digit recognition, which involves classifying images of digits into the categories ‘0’-‘9’ (LeCun, Bottou, Bengio, & Haffner, 1998). The training set provides 6,000 images per class for a total of 60,000 training images. With a large amount of training data available, many algorithms achieve respectable performance, including K-nearest neighbors (5% test error), support vector machines (about 1% test error), and convolutional neural networks (below 1% test error; LeCun et al., 1998). The best results achieved using deep convolutional nets are very close to human-level performance at an error rate of 0.2% (Ciresan, Meier, & Schmidhuber, 2012). Similarly, recent results applying convolutional nets to the far more challenging ImageNet object recognition benchmark have shown that human-level performance is within reach on that data set as well (Russakovsky et al., 2015).
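To make the simplest baseline above concrete, here is a k-nearest-neighbors classifier in plain NumPy. The data are a synthetic stand-in, two noisy 8x8 "digit" prototypes invented for self-containment, not MNIST itself, but the algorithm is the same one that reaches roughly 5% test error on the real benchmark.

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=5):
    """Classify each query image by majority vote among its k nearest
    training images (Euclidean distance in raw pixel space)."""
    dists = np.linalg.norm(train_x[None, :, :] - query[:, None, :], axis=2)
    nearest = np.argsort(dists, axis=1)[:, :k]        # indices of k closest
    votes = train_y[nearest]                          # their labels
    return np.array([np.bincount(v).argmax() for v in votes])

# Toy stand-in for MNIST: two synthetic "digit" prototypes plus pixel noise.
rng = np.random.default_rng(0)
proto = np.stack([np.zeros(64), np.ones(64)])         # 8x8 images, flattened
labels = rng.integers(0, 2, size=200)
train = proto[labels] + rng.normal(0, 0.3, (200, 64))
test_labels = rng.integers(0, 2, size=50)
test = proto[test_labels] + rng.normal(0, 0.3, (50, 64))

pred = knn_predict(train, labels, test)
print((pred == test_labels).mean())   # near-perfect on this easy toy task
```

Note how little the method "understands": it memorizes raw pixels and votes, with no notion of strokes, parts, or the process that generated the character, which is precisely the contrast the next paragraphs draw with human learners.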
While humans and neural networks may perform equally well on the MNIST digit recognition task and other large-scale image classification tasks, it does not mean that they learn and think in the same way. There are at least two important differences: people learn from fewer examples and they learn richer representations, a comparison that holds both for learning handwritten characters and for learning more general classes of objects (Figure 1). People can learn to recognize a new handwritten character from a single example (Figure 1A-i), allowing them to discriminate between novel instances drawn by other people and similar looking non-instances (Lake, Salakhutdinov, & Tenenbaum, 2015; E. G. Miller, Matsakis, & Viola, 2000). Moreover, people learn more than how to do pattern recognition: they learn a concept – that is, a model of the class that allows their acquired knowledge to be flexibly applied in new ways. In addition to recognizing new examples, people can also generate new examples (Figure 1A-ii), parse a character into its most important parts and relations (Figure 1A-iii; Lake, Salakhutdinov, & Tenenbaum, 2012), and generate new characters given a small set of related characters (Figure 1A-iv). These additional abilities come for free along with the acquisition of the underlying concept.
Even for these simple visual concepts, people are still better and more sophisticated learners than the best algorithms for character recognition. People learn a lot more from a lot less, and capturing these human-level learning abilities in machines is the Characters Challenge. We recently reported progress on this challenge using probabilistic program induction (Lake, Salakhutdinov, & Tenenbaum, 2015), yet aspects of the full human cognitive ability remain out of reach.

Figure 1: The characters challenge: human-level learning of novel handwritten characters (A), with the same abilities also illustrated for a novel two-wheeled vehicle (B). A single example of a new visual concept (red box) can be enough information to support the (i) classification of new examples, (ii) generation of new examples, (iii) parsing an object into parts and relations, and (iv) generation of new concepts from related concepts. Adapted from Lake, Salakhutdinov, and Tenenbaum (2015).

While both people and the model represent characters as a sequence of pen strokes and relations, people have
a far richer repertoire of structural relations between strokes. Furthermore, people can efficiently integrate across multiple examples of a character to infer which have optional elements, such as the horizontal cross-bar in ‘7’s, combining different variants of the same character into a single coherent representation. Additional progress may come by combining deep learning and probabilistic program induction to tackle even richer versions of the Characters Challenge. # 3.2 The Frostbite Challenge The second challenge concerns the Atari game Frostbite (Figure 2), which was one of the control problems tackled by the DQN of V. Mnih et al. (2015). The DQN was a significant advance in reinforcement learning, showing that a single algorithm can learn to play a wide variety of complex tasks. The network was trained to play 49 classic Atari games, proposed as a test domain for reinforcement learning (Bellemare, Naddaf, Veness, & Bowling, 2013), impressively achieving human-level performance or above on 29 of the games. It did, however, have particular trouble with Frostbite and other games that required temporally extended planning strategies.
In Frostbite, players control an agent (Frostbite Bailey) tasked with constructing an igloo within a time limit. The igloo is built piece-by-piece as the agent jumps on ice floes in water (Figure 2A-C). The challenge is that the ice floes are in constant motion (moving either left or right), and ice floes only contribute to the construction of the igloo if they are visited in an active state (white rather than blue). The agent may also earn extra points by gathering fish while avoiding a number of fatal hazards (falling in the water, snow geese, polar bears, etc.). Success in this game requires a
Figure 2: Screenshots of Frostbite, a 1983 video game designed for the Atari game console. A) The start of a level in Frostbite. The agent must construct an igloo by hopping between ice floes and avoiding obstacles such as birds. The floes are in constant motion (either left or right), making multi-step planning essential to success. B) The agent receives pieces of the igloo (top right) by jumping on the active ice floes (white), which then deactivates them (blue). C) At the end of a level, the agent must safely reach the completed igloo. D) Later levels include additional rewards (fish) and deadly obstacles (crabs, clams, and bears). temporally extended plan to ensure the agent can accomplish a sub-goal (such as reaching an ice floe) and then safely proceed to the next sub-goal. Ultimately, once all of the pieces of the igloo are in place, the agent must proceed to the igloo and thus complete the level before time expires (Figure 2C).
The DQN learns to play Frostbite and other Atari games by combining a powerful pattern recognizer (a deep convolutional neural network) and a simple model-free reinforcement learning algorithm (Q-learning; Watkins & Dayan, 1992). These components allow the network to map sensory inputs (frames of pixels) onto a policy over a small set of actions, and both the mapping and the policy are trained to optimize long-term cumulative reward (the game score). The network embodies the strongly empiricist approach characteristic of most connectionist models: very little is built into the network apart from the assumptions about image structure inherent in convolutional networks, so the network has to essentially learn a visual and conceptual system from scratch for each new game. In V. Mnih et al. (2015), the network architecture and hyper-parameters were fixed, but the network was trained anew for each game, meaning the visual system and the policy are highly specialized for the games it was trained on. More recent work has shown how these game-specific networks can share visual features (Rusu et al., 2016) or be used to train a multi-task network (Parisotto, Ba, & Salakhutdinov, 2016), achieving modest benefits of transfer when learning to play new games.
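The model-free Q-learning update at the heart of the DQN can be sketched in a few lines. This is a minimal tabular version on a toy five-state chain (not the DQN itself, which replaces the table with a convolutional network over pixels); the environment, constants, and helper names here are illustrative, not from the paper:

```python
import random

# Toy chain MDP standing in for a game: states 0..4, actions 0 (left) / 1 (right);
# reaching state 4 ends the episode with reward 1, every other step gives 0.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2
random.seed(0)

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

Q = [[0.0, 0.0] for _ in range(N_STATES)]

def greedy(s):
    # argmax over actions, breaking ties at random
    best = max(Q[s])
    return random.choice([a for a in (0, 1) if Q[s][a] == best])

for _ in range(300):                      # episodes
    s = 0
    for _ in range(200):                  # step cap per episode
        a = random.randrange(2) if random.random() < EPS else greedy(s)
        s2, r, done = step(s, a)
        # Q-learning update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
        target = r + (0.0 if done else GAMMA * max(Q[s2]))
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s2
        if done:
            break

# After training, the greedy policy should be "move right" in every state.
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(GOAL)]
```

The same two ingredients scale up in the DQN: the Q-table becomes a network trained by gradient descent on the same bootstrapped target, and the epsilon-greedy rule supplies exploration.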
Although it is interesting that the DQN learns to play games at human-level performance while assuming very little prior knowledge, the DQN may be learning to play Frostbite and other games in a very different way than people do. One way to examine the differences is by considering the amount of experience required for learning. In V. Mnih et al. (2015), the DQN was compared with a professional gamer who received approximately two hours of practice on each of the 49 Atari games (although he or she likely had prior experience with some of the games). The DQN was trained on 200 million frames from each of the games, which equates to approximately 924 hours of game time (about 38 days), or almost 500 times as much experience as the human received.2 Additionally, the DQN incorporates experience replay, where each of these frames is replayed approximately 8 more times on average over the course of learning.
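The experience gap is simple arithmetic, assuming the Atari frame rate of 60 frames per second; this back-of-envelope version lands within a couple of hours of the paper's ~924-hour figure, presumably differing only in rounding:

```python
# Back-of-envelope check of the DQN's training experience per game.
frames = 200_000_000                 # training frames per game
hours = frames / 60 / 3600           # 60 frames/sec, 3600 sec/hour
days = hours / 24
ratio = hours / 2                    # human practice was about two hours per game
print(f"{hours:.0f} hours, {days:.1f} days, {ratio:.0f}x the human's experience")
```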
With the full 924 hours of unique experience and additional replay, the DQN achieved less than 10% of human-level performance during a controlled test session (see DQN in Fig. 3). More recent variants of the DQN have demonstrated superior performance (Schaul et al., 2016; Stadie et al., 2016; van Hasselt, Guez, & Silver, 2016; Wang et al., 2016), reaching 83% of the professional gamer’s score by incorporating smarter experience replay (Schaul et al., 2016) and 96% by using smarter replay and more efficient parameter sharing (Wang et al., 2016) (see DQN+ and DQN++ in Fig. 3).3 But they require a lot of experience to reach this level: the learning curve provided in Schaul et al. (2016) shows performance is around 46% after 231 hours, 19% after 116 hours, and below 3.5% after just 2 hours (which is close to random play, approximately 1.5%). The differences between the human and machine learning curves suggest that they may be learning different kinds of knowledge, using different learning mechanisms, or both.
The contrast becomes even more dramatic if we look at the very earliest stages of learning. While both the original DQN and these more recent variants require multiple hours of experience to perform reliably better than random play, even non-professional humans can grasp the basics of the game after just a few minutes of play. We speculate that people do this by inferring a general schema to describe the goals of the game and the object types and their interactions, using the kinds of intuitive theories, model-building abilities and model-based planning mechanisms we describe below. While novice players may make some mistakes, such as inferring that fish are harmful rather than helpful, they can learn to play better than chance within a few minutes. If humans are able to first watch an expert playing for a few minutes, they can learn even faster. In informal experiments with two of the authors playing Frostbite on a Javascript emulator (http://www.virtualatari.org/soft.php?soft=Frostbite), after watching videos of expert play on YouTube for just two minutes, we found that we were able to reach scores comparable to or

2 The time required to train the DQN (compute time) is not the same as the game (experience) time. Compute time can be longer.
3 The reported scores use the “human starts” measure of test performance, designed to prevent networks from just memorizing long sequences of successful actions from a single starting point. Both faster learning (Blundell et al., 2016) and higher scores (Wang et al., 2016) have been reported using other metrics, but it is unclear how well the networks are generalizing with these alternative metrics.

Figure 3: Comparing learning speed for people versus Deep Q-Networks (DQNs). Test performance on the Atari 2600 game “Frostbite” is plotted as a function of game experience (in hours at a frame rate of 60 fps), which does not include additional experience replay. Learning curves (if available) and scores are shown from different networks: DQN (V. Mnih et al., 2015), DQN+ (Schaul et al., 2016), and DQN++ (Wang et al., 2016). Random play achieves a score of 66.4. The “human starts” performance measure is used (van Hasselt et al., 2016).
better than the human expert reported in V. Mnih et al. (2015) after at most 15-20 minutes of total practice.4 There are other behavioral signatures that suggest fundamental differences in representation and learning between people and the DQN. For instance, the game of Frostbite provides incremental rewards for reaching each active ice floe, providing the DQN with the relevant sub-goals for completing the larger task of building an igloo. Without these sub-goals, the DQN would have to take random actions until it accidentally builds an igloo and is rewarded for completing the entire level. In contrast, people likely do not rely on incremental scoring in the same way when figuring out how to play a new game. In Frostbite, it is possible to figure out the higher-level goal of building an igloo without incremental feedback; similarly, sparse feedback is a source of difficulty in other Atari 2600 games such as Montezuma’s Revenge where people substantially outperform current DQN approaches. The learned DQN network is also rather inflexible to changes in its inputs and goals: changing the color or appearance of objects or changing the goals of the network would have devastating consequences on performance if the network is not retrained. While any specific model is necessarily
4 More precisely, the human expert in V. Mnih et al. (2015) scored an average of 4335 points across 30 game sessions of up to five minutes of play. In individual sessions lasting no longer than five minutes, author TDU obtained scores of 3520 points after approximately 5 minutes of gameplay, 3510 points after 10 minutes, and 7810 points after 15 minutes. Author JBT obtained 4060 after approximately 5 minutes of gameplay, 4920 after 10-15 minutes, and 6710 after no more than 20 minutes. TDU and JBT each watched approximately two minutes of expert play on YouTube (e.g., https://www.youtube.com/watch?v=ZpUFztf9Fjc, but there are many similar examples that can be found in a YouTube search).

simplified and should not be held to the standard of general human intelligence, the contrast between DQN and human flexibility is striking nonetheless. For example, imagine you are tasked with playing Frostbite with any one of these new goals:

- Get the lowest possible score.
- Get closest to 100, or 300, or 1000, or 3000, or any level, without going over.
- Beat your friend, who’s playing next to you, but just barely, not by too much, so as not to embarrass them.
- Go as long as you can without dying.
- Die as quickly as you can.
- Pass each level at the last possible minute, right before the temperature timer hits zero and you die (i.e., come as close as you can to dying from frostbite without actually dying).
- Get to the furthest unexplored level without regard for your score.
- See if you can discover secret Easter eggs.
- Get as many fish as you can.
- Touch all the individual ice floes on screen once and only once.
- Teach your friend how to play as efficiently as possible.
1604.00289#43
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
http://arxiv.org/pdf/1604.00289
Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman
cs.AI, cs.CV, cs.LG, cs.NE, stat.ML
In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentary
null
cs.AI
20160401
20161102
[ { "id": "1511.06114" }, { "id": "1510.05067" }, { "id": "1602.05179" }, { "id": "1603.08575" } ]
This range of goals highlights an essential component of human intelligence: people can learn models and use them for arbitrary new tasks and goals. While neural networks can learn multiple mappings or tasks with the same set of stimuli – adapting their outputs depending on a specified goal – these models require substantial training or reconfiguration to add new tasks (e.g., Collins & Frank, 2013; Eliasmith et al., 2012; Rougier, Noelle, Braver, Cohen, & O’Reilly, 2005). In contrast, people require little or no retraining or reconfiguration, adding new tasks and goals to their repertoire with relative ease. The Frostbite example is a particularly telling contrast when compared with human play. Even the best deep networks learn gradually over many thousands of game episodes, take a long time to reach good performance and are locked into particular input and goal patterns. Humans, after playing just a small number of games over a span of minutes, can understand the game and its goals well enough to perform better than deep networks do after almost a thousand hours of experience. Even more impressively, people understand enough to invent or accept new goals, generalize over changes to the input, and explain the game to others. Why are people different? What core ingredients of human intelligence might the DQN and other modern machine learning methods be missing?
differently. They may be better seen as solving different tasks. Human learners – unlike DQN and many other deep learning systems – approach new problems armed with extensive prior experience. The human is encountering one in a years-long string of problems, with rich overlapping structure. Humans as a result often have important domain-specific knowledge for these tasks, even before they ‘begin.’ The DQN is starting completely from scratch.” We agree, and indeed this is another way of putting our point here. Human learners fundamentally take on different learning tasks than today’s neural networks, and if we want to build machines that learn and think like people, our machines need to confront the kinds of tasks that human learners do, not shy away from them. People never start completely from scratch, or even close to “from scratch,” and that is the secret to their success. The challenge of building models of human learning and thinking then becomes: How do we bring to bear rich prior knowledge to learn new tasks and solve new problems so quickly? What form does that prior knowledge take, and how is it constructed, from some combination of inbuilt capacities and previous experience? The core ingredients we propose in the next section offer one route to meeting this challenge.
# 4 Core ingredients of human intelligence In the Introduction, we laid out what we see as core ingredients of intelligence. Here we consider the ingredients in detail and contrast them with the current state of neural network modeling. While these are hardly the only ingredients needed for human-like learning and thought (see our discussion of language in Section 5), they are key building blocks which are not present in most current learning-based AI systems – certainly not all present together – and for which additional attention may prove especially fruitful. We believe that integrating them will produce significantly more powerful and more human-like learning and thinking abilities than we currently see in AI systems.
Before considering each ingredient in detail, it is important to clarify that by “core ingredient” we do not necessarily mean an ingredient that is innately specified by genetics or must be “built in” to any learning algorithm. We intend our discussion to be agnostic with regards to the origins of the key ingredients. By the time a child or an adult is picking up a new character or learning how to play Frostbite, they are armed with extensive real world experience that deep learning systems do not benefit from – experience that would be hard to emulate in any general sense. Certainly, the core ingredients are enriched by this experience, and some may even be a product of the experience itself. Whether learned, built in, or enriched, the key claim is that these ingredients play an active and important role in producing human-like learning and thought, in ways contemporary machine learning has yet to capture. # 4.1 Developmental start-up software Early in development, humans have a foundational understanding of several core domains (Spelke, 2003, 2007). These domains include number (numerical and set operations), space (geometry and navigation), physics (inanimate objects and mechanics) and psychology (agents and groups). These core domains cleave cognition at its conceptual joints, and each domain
is organized by a set of entities and abstract principles relating the entities. The underlying cognitive representations can be understood as “intuitive theories,” with a causal structure resembling a scientific theory (Carey, 2004, 2009; Gopnik et al., 2004; Gopnik & Meltzoff, 1999; Gweon, Tenenbaum, & Schulz, 2010; L. Schulz, 2012; Wellman & Gelman, 1992, 1998). The “child as scientist” proposal further views the process of learning itself as also scientist-like, with recent experiments showing that children seek out new data to distinguish between hypotheses, isolate variables, test causal hypotheses, make use of the data-generating process in drawing conclusions, and learn selectively from others (Cook, Goodman, & Schulz, 2011; Gweon et al., 2010; L. E. Schulz, Gopnik, & Glymour, 2007; Stahl & Feigenson, 2015; Tsividis, Gershman, Tenenbaum, & Schulz, 2013). We will address the nature of learning mechanisms in Section 4.2.
Each core domain has been the target of a great deal of study and analysis, and together the domains are thought to be shared cross-culturally and partly with non-human animals. All of these domains may be important augmentations to current machine learning, though below we focus in particular on the early understanding of objects and agents. # 4.1.1 Intuitive physics Young children have rich knowledge of intuitive physics. Whether learned or innate, important physical concepts are present at ages far earlier than when a child or adult learns to play Frostbite, suggesting these resources may be used for solving this and many everyday physics-related tasks. At the age of 2 months and possibly earlier, human infants expect inanimate objects to follow principles of persistence, continuity, cohesion and solidity. Young infants believe objects should move along smooth paths, not wink in and out of existence, not inter-penetrate and not act at a distance (Spelke, 1990; Spelke, Gutheil, & Van de Walle, 1995). These expectations guide object segmentation in early infancy, emerging before appearance-based cues such as color, texture, and perceptual goodness (Spelke, 1990).
These expectations also go on to guide later learning. At around 6 months, infants have already developed different expectations for rigid bodies, soft bodies and liquids (Rips & Hespos, 2015). Liquids, for example, are expected to go through barriers, while solid objects cannot (Hespos, Ferry, & Rips, 2009). By their first birthday, infants have gone through several transitions of comprehending basic physical concepts such as inertia, support, containment and collisions (Baillargeon, 2004; Baillargeon, Li, Ng, & Yuan, 2009; Hespos & Baillargeon, 2008).
There is no single agreed-upon computational account of these early physical principles and concepts, and previous suggestions have ranged from decision trees (Baillargeon et al., 2009), to cues, to lists of rules (Siegler & Chen, 1998). A promising recent approach sees intuitive physical reasoning as similar to inference over a physics software engine, the kind of simulators that power modern-day animations and games (Bates, Yildirim, Tenenbaum, & Battaglia, 2015; Battaglia, Hamrick, & Tenenbaum, 2013; Gerstenberg, Goodman, Lagnado, & Tenenbaum, 2015; Sanborn, Mansinghka, & Griffiths, 2013). According to this hypothesis, people reconstruct a perceptual scene using internal representations of the objects and their physically relevant properties (such as mass, elasticity, and surface friction), and forces acting on objects (such as gravity, friction, or collision impulses). Relative to physical ground truth, the intuitive physical state representation
[Figure 4 panels: (A) 1. Inputs → 2. Intuitive Physics Engine → 3. Outputs, e.g. “Will it fall? Which direction?”; (B) changes to the input, such as added blocks, blocks made of styrofoam, lead, or goo, a rubber or quicksand table, water or honey poured on the tower, glued or magnetic blocks, reversed gravity, wind, or slippery ice on the table.]

Figure 4: The intuitive physics-engine approach to scene understanding, illustrated through tower stability. (A) The engine takes in inputs through perception, language, memory and other faculties. It then constructs a physical scene with objects, physical properties and forces, simulates the scene’s development over time and hands the output to other reasoning systems. (B) Many possible ‘tweaks’ to the input can result in much different scenes, requiring the potential discovery, training and evaluation of new features for each tweak. Adapted from Battaglia et al. (2013).

is approximate and probabilistic, and oversimplified and incomplete in many ways. Still, it is rich enough to support mental simulations that can predict how objects will move in the immediate future, either on their own or in response to forces we might apply.
This “intuitive physics engine” approach enables flexible adaptation to a wide range of everyday scenarios and judgments in a way that goes beyond perceptual cues. For example (Figure 4), a physics-engine reconstruction of a tower of wooden blocks from the game Jenga can be used to predict whether (and how) a tower will fall, finding close quantitative fits to how adults make these predictions (Battaglia et al., 2013) as well as simpler kinds of physical predictions that have been studied in infants (Téglás et al., 2011). Simulation-based models can also capture how people make hypothetical or counterfactual predictions: What would happen if certain blocks are taken away, more blocks are added, or the table supporting the tower is jostled? What if certain blocks were glued together, or attached to the table surface? What if the blocks were made of different materials (Styrofoam, lead, ice)? What if the blocks of one color were much heavier than other colors? Each of these physical judgments may require new features or new training for a pattern recognition account to work at the same level as the model-based simulator.
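The noisy-simulation account can be illustrated with a toy model. The sketch below is not the authors' model: the 2-D quasi-static stability rule, the equal-mass assumption, and all function names are illustrative simplifications. It estimates a tower's probability of falling by running many simulations over perceptually noisy block positions, in the spirit of the approximate probabilistic simulations described above.

```python
import random

def falls(blocks):
    """blocks: list of (x_center, width), bottom block first, resting on a table.
    Simplified quasi-static rule: the stack falls if, for any block k, the
    combined center of mass of blocks k..top lies horizontally outside the
    support surface of the block beneath it. Equal masses assumed."""
    for k in range(1, len(blocks)):
        xs = [x for x, _ in blocks[k:]]
        com = sum(xs) / len(xs)
        below_x, below_w = blocks[k - 1]
        if abs(com - below_x) > below_w / 2:
            return True
    return False

def p_fall(blocks, noise=0.1, n_samples=1000, seed=0):
    """Monte Carlo estimate of the probability that the tower falls, given
    Gaussian uncertainty about each block's horizontal position."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        noisy = [(x + rng.gauss(0, noise), w) for x, w in blocks]
        hits += falls(noisy)
    return hits / n_samples

# A well-centred stack should rarely fall; a badly offset one, almost always.
stable_tower = [(0.0, 1.0), (0.02, 1.0), (-0.03, 1.0)]
leaning_tower = [(0.0, 1.0), (0.45, 1.0), (0.9, 1.0)]
print(p_fall(stable_tower), p_fall(leaning_tower))
```

On this view, the counterfactual questions in the text (heavier blocks, glued blocks, a jostled table) correspond to parameter changes to the same simulator rather than to new features or new training, which is the contrast being drawn with pattern-recognition accounts.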
What are the prospects for embedding or acquiring this kind of intuitive physics in deep learning systems? Connectionist models in psychology have previously been applied to physical reasoning tasks such as balance-beam rules (McClelland, 1988; Shultz, 2003) or rules relating distance, velocity, and time in motion (Buckingham & Shultz, 2000), but these networks do not attempt to work with complex scenes as input or a wide range of scenarios and judgments as in Figure 4.
A recent paper from Facebook AI researchers (Lerer, Gross, & Fergus, 2016) represents an exciting step in this direction. Lerer et al. (2016) trained a deep convolutional network-based system (PhysNet) to predict the stability of block towers from simulated images similar to those in Figure 4A but with much simpler configurations of two, three or four cubical blocks stacked vertically. Impressively, PhysNet generalized to simple real images of block towers, matching human performance on these images, meanwhile exceeding human performance on synthetic images. Human and PhysNet confidence were also correlated across towers, although not as strongly as for the approximate probabilistic simulation models and experiments of Battaglia et al. (2013). One limitation is that PhysNet currently requires extensive training – between 100,000 and 200,000 scenes – to learn judgments for just a single task (will the tower fall?) on a narrow range of scenes (towers with two to four cubes). It has been shown to generalize, but also only in limited ways (e.g., from towers of two and three cubes to towers of four cubes). In contrast, people require far less experience
1604.00289#56
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
http://arxiv.org/pdf/1604.00289
Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman
cs.AI, cs.CV, cs.LG, cs.NE, stat.ML
In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentary
null
cs.AI
20160401
20161102
[ { "id": "1511.06114" }, { "id": "1510.05067" }, { "id": "1602.05179" }, { "id": "1603.08575" } ]
1604.00289
57
but also only in limited ways (e.g., from towers of two and three cubes to towers of four cubes). In contrast, people require far less experience to perform any particular task, and can generalize to many novel judgments and complex scenes with no new training required (although they receive large amounts of physics experience through interacting with the world more generally). Could deep learning systems such as PhysNet capture this flexibility, without explicitly simulating the causal interactions between objects in three dimensions? We are not sure, but we hope this is a challenge they will take on.
1604.00289#57
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
http://arxiv.org/pdf/1604.00289
Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman
cs.AI, cs.CV, cs.LG, cs.NE, stat.ML
In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentary
null
cs.AI
20160401
20161102
[ { "id": "1511.06114" }, { "id": "1510.05067" }, { "id": "1602.05179" }, { "id": "1603.08575" } ]
1604.00289
58
Alternatively, instead of trying to make predictions without simulating physics, could neural networks be trained to emulate a general-purpose physics simulator, given the right type and quantity of training data, such as the raw input experienced by a child? This is an active and intriguing area of research, but it too faces significant challenges. For networks trained on object classification, deeper layers often become sensitive to successively higher-level features, from edges to textures to shape-parts to full objects (Yosinski, Clune, Bengio, & Lipson, 2014; Zeiler & Fergus, 2014). For deep networks trained on physics-related data, it remains to be seen whether higher layers will encode objects, general physical properties, forces and approximately Newtonian dynamics. A generic network trained on dynamic pixel data might learn an implicit representation of these concepts, but would it generalize broadly beyond training contexts as people’s more explicit physical concepts do? Consider for example a network that learns to predict the trajectories of several balls bouncing in a box (Kodratoff & Michalski, 2014). If this network has actually learned something like Newtonian mechanics, then it
1604.00289#58
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
http://arxiv.org/pdf/1604.00289
Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman
cs.AI, cs.CV, cs.LG, cs.NE, stat.ML
In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentary
null
cs.AI
20160401
20161102
[ { "id": "1511.06114" }, { "id": "1510.05067" }, { "id": "1602.05179" }, { "id": "1603.08575" } ]
1604.00289
59
balls bouncing in a box (Kodratoff & Michalski, 2014). If this network has actually learned something like Newtonian mechanics, then it should be able to generalize to interestingly different scenarios – at a minimum different numbers of differently shaped objects, bouncing in boxes of different shapes and sizes and orientations with respect to gravity, not to mention more severe generalization tests such as all of the tower tasks discussed above, which also fall under the Newtonian domain. Neural network researchers have yet to take on this challenge, but we hope they will. Whether such models can be learned with the kind (and quantity) of data available to human infants is not clear, as we discuss further in Section 5.
1604.00289#59
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
http://arxiv.org/pdf/1604.00289
Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman
cs.AI, cs.CV, cs.LG, cs.NE, stat.ML
In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentary
null
cs.AI
20160401
20161102
[ { "id": "1511.06114" }, { "id": "1510.05067" }, { "id": "1602.05179" }, { "id": "1603.08575" } ]
1604.00289
60
It may be difficult to integrate object and physics-based primitives into deep neural networks, but the payoff in terms of learning speed and performance could be great for many tasks. Consider the case of learning to play Frostbite. Although it can be difficult to discern exactly how a network learns to solve a particular task, the DQN probably does not parse a Frostbite screenshot in terms of stable objects or sprites moving according to the rules of intuitive physics (Figure 2). But incorporating a physics-engine-based representation could help DQNs learn to play games such as Frostbite in a faster and more general way, whether the physics knowledge is captured implicitly in a neural network or more explicitly in a simulator. Beyond reducing the amount of training data and
1604.00289#60
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
http://arxiv.org/pdf/1604.00289
Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman
cs.AI, cs.CV, cs.LG, cs.NE, stat.ML
In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentary
null
cs.AI
20160401
20161102
[ { "id": "1511.06114" }, { "id": "1510.05067" }, { "id": "1602.05179" }, { "id": "1603.08575" } ]
1604.00289
61
potentially improving the level of performance reached by the DQN, it could eliminate the need to retrain a Frostbite network if the objects (e.g., birds, ice-floes and fish) are slightly altered in their behavior, reward-structure, or appearance. When a new object type such as a bear is introduced, as in the later levels of Frostbite (Figure 2D), a network endowed with intuitive physics would also have an easier time adding this object type to its knowledge (the challenge of adding new objects was also discussed in Marcus, 1998, 2001). In this way, the integration of intuitive physics and deep learning could be an important step towards more human-like learning algorithms. # 4.1.2 Intuitive psychology Intuitive psychology is another early-emerging ability with an important influence on human learning and thought. Pre-verbal infants distinguish animate agents from inanimate objects. This distinction is partially based on innate or early-present detectors for low-level cues, such as the presence of eyes, motion initiated from rest, and biological motion (Johnson, Slaughter, & Carey, 1998; Premack & Premack, 1997; Schlottmann, Ray, Mitchell, & Demetriou, 2006; Tremoulet & Feldman, 2000). Such cues are often sufficient but not necessary for the detection of agency.
1604.00289#61
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
http://arxiv.org/pdf/1604.00289
Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman
cs.AI, cs.CV, cs.LG, cs.NE, stat.ML
In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentary
null
cs.AI
20160401
20161102
[ { "id": "1511.06114" }, { "id": "1510.05067" }, { "id": "1602.05179" }, { "id": "1603.08575" } ]
1604.00289
62
Beyond these low-level cues, infants also expect agents to act contingently and reciprocally, to have goals, and to take efficient actions towards those goals subject to constraints (Csibra, 2008; Csibra, Biro, Koos, & Gergely, 2003; Spelke & Kinzler, 2007). These goals can be socially directed; at around three months of age, infants begin to discriminate anti-social agents that hurt or hinder others from neutral agents (Hamlin, 2013; Hamlin, Wynn, & Bloom, 2010), and they later distinguish between anti-social, neutral, and pro-social agents (Hamlin, Ullman, Tenenbaum, Goodman, & Baker, 2013; Hamlin, Wynn, & Bloom, 2007). It is generally agreed that infants expect agents to act in a goal-directed, efficient, and socially sensitive fashion (Spelke & Kinzler, 2007). What is less agreed on is the computational architecture that supports this reasoning and whether it includes any reference to mental states and explicit goals.
1604.00289#62
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
http://arxiv.org/pdf/1604.00289
Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman
cs.AI, cs.CV, cs.LG, cs.NE, stat.ML
In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentary
null
cs.AI
20160401
20161102
[ { "id": "1511.06114" }, { "id": "1510.05067" }, { "id": "1602.05179" }, { "id": "1603.08575" } ]
1604.00289
63
One possibility is that intuitive psychology is simply cues “all the way down” (Schlottmann, Cole, Watts, & White, 2013; Scholl & Gao, 2013), though this would require more and more cues as the scenarios become more complex. Consider for example a scenario in which an agent A is moving towards a box, and an agent B moves in a way that blocks A from reaching the box. Infants and adults are likely to interpret B’s behavior as ‘hindering’ (Hamlin, 2013). This inference could be captured by a cue that states ‘if an agent’s expected trajectory is prevented from completion, the blocking agent is given some negative association.’
1604.00289#63
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
http://arxiv.org/pdf/1604.00289
Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman
cs.AI, cs.CV, cs.LG, cs.NE, stat.ML
In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentary
null
cs.AI
20160401
20161102
[ { "id": "1511.06114" }, { "id": "1510.05067" }, { "id": "1602.05179" }, { "id": "1603.08575" } ]