Dataset schema (column, type, min/max value or string length):

| column | type | min | max |
|---|---|---|---|
| doi | string (length) | 10 | 10 |
| chunk-id | int64 | 0 | 936 |
| chunk | string (length) | 401 | 2.02k |
| id | string (length) | 12 | 14 |
| title | string (length) | 8 | 162 |
| summary | string (length) | 228 | 1.92k |
| source | string (length) | 31 | 31 |
| authors | string (length) | 7 | 6.97k |
| categories | string (length) | 5 | 107 |
| comment | string (length) | 4 | 398 |
| journal_ref | string (length) | 8 | 194 |
| primary_category | string (length) | 5 | 17 |
| published | string (length) | 8 | 8 |
| updated | string (length) | 8 | 8 |
| references | list | — | — |
doi: 2306.12672
title: From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought
authors: Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum
categories: cs.CL, cs.AI, cs.SC
primary_category: cs.CL
comment: null
journal_ref: null
published: 20230622
updated: 20230623
source: http://arxiv.org/pdf/2306.12672

summary: How does language inform our downstream thinking? In particular, how do humans make meaning from language--and how can we leverage a theory of linguistic meaning to build machines that think in more human-like ways? In this paper, we propose rational meaning construction, a computational framework for language-informed thinking that combines neural language models with probabilistic models for rational inference. We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought (PLoT)--a general-purpose symbolic substrate for generative world modeling. Our architecture integrates two computational tools that have not previously come together: we model thinking with probabilistic programs, an expressive representation for commonsense reasoning; and we model meaning construction with large language models (LLMs), which support broad-coverage translation from natural language utterances to code expressions in a probabilistic programming language. We illustrate our framework through examples covering four core domains from cognitive science: probabilistic reasoning, logical and relational reasoning, visual and physical reasoning, and social reasoning. In each, we show that LLMs can generate context-sensitive translations that capture pragmatically-appropriate linguistic meanings, while Bayesian inference with the generated programs supports coherent and robust commonsense reasoning. We extend our framework to integrate cognitively-motivated symbolic modules (physics simulators, graphics engines, and planning algorithms) to provide a unified commonsense thinking interface from language. Finally, we explore how language can drive the construction of world models themselves. We hope this work will provide a roadmap towards cognitive models and AI systems that synthesize the insights of both modern and classical computational perspectives.

chunk-id: 3
id: 2306.12672#3

# Introduction

Language expresses the vast internal landscape of our thoughts. We use language to convey what we believe, what we are uncertain about, and what we do not know. We talk about what we see in the world around us, and what we imagine in real or wholly hypothetical futures. We discuss what we want and what we plan to do, and dissect what others want and what we think they will do. We build and pass on new bodies of knowledge in language—we ask questions and offer explanations, give commands and instructions, and propose and refute theories. Some of these ideas can be expressed in part through other means. But language stands apart for its flexibility and breadth, and its seeming proximity to our thoughts.

What is language? How does language get its meaning, and when should we say that a person or machine knows, understands, and can use it? What is the relationship between language and the rest of general cognition—what allows language to inform and support so much of thought? This paper focuses on these questions as they relate to human language and thought, in computational terms. What integrated cognitive theory can model how language relates to the other core systems of human cognition? If we seek to build AI systems that emulate how humans talk and think, what architecture can integrate language robustly into systems that support the full scope of our thought?
[ { "id": "1810.04805" }, { "id": "2302.04761" }, { "id": "2108.07258" }, { "id": "2201.13360" }, { "id": "1802.05365" }, { "id": "1707.08052" }, { "id": "2205.09712" }, { "id": "2304.03439" }, { "id": "1910.01442" }, { "id": "2302.08399" }, { "id": "2201.11903" }, { "id": "2007.09871" }, { "id": "2005.00955" }, { "id": "2302.05128" }, { "id": "1812.01569" }, { "id": "2305.12295" }, { "id": "2208.00005" }, { "id": "2304.11477" }, { "id": "2107.03374" }, { "id": "2301.13379" }, { "id": "1904.09545" }, { "id": "2004.12169" }, { "id": "2301.12867" }, { "id": "2209.07800" }, { "id": "2303.06247" }, { "id": "2205.05718" }, { "id": "2112.11446" }, { "id": "2207.10342" }, { "id": "2212.07919" }, { "id": "1910.14124" }, { "id": "2102.12616" }, { "id": "2110.14168" }, { "id": "1805.04988" }, { "id": "2206.07870" }, { "id": "2305.16291" }, { "id": "1704.04977" }, { "id": "2005.14165" }, { "id": "2306.03081" }, { "id": "2204.13807" }, { "id": "2204.07931" }, { "id": "2305.01020" }, { "id": "1606.03622" }, { "id": "2211.08411" }, { "id": "2205.06175" }, { "id": "2006.00418" }, { "id": "2205.00445" }, { "id": "2006.08381" }, { "id": "2301.06627" }, { "id": "1810.02338" }, { "id": "2106.00737" }, { "id": "2204.06125" }, { "id": "2302.06706" }, { "id": "2210.05359" }, { "id": "2205.11916" }, { "id": "2201.08239" }, { "id": "1905.05950" }, { "id": "2111.13654" }, { "id": "2204.01691" }, { "id": "1805.04793" }, { "id": "2303.12712" }, { "id": "2112.00114" }, { "id": "2209.07662" }, { "id": "2302.06729" }, { "id": "2112.04426" }, { "id": "2205.09735" }, { "id": "2005.00661" } ]
chunk-id: 5
id: 2306.12672#5
Theories of cognition have long considered human language and thinking to be deeply related, but fundamentally distinct. Thinking, in many traditional cognitive theories, revolves around goal-directed world modeling, inference, and decision making—constructing mental models of the world that reflect prior beliefs, can be updated from new observations, and support rational prediction and decision making toward one’s goals (Craik, 1967; Gentner & Stevens, 2014; Johnson-Laird, 1980, 1989; Lake, Ullman, Tenenbaum, & Gershman, 2017; Morgan, 1999; Nersessian et al., 2010). Language, in contrast, centers around communicating these thoughts to others, and receiving their thoughts in turn. In most linguistic theories, human languages are mappings between the internal representations of thought and an externalizable symbol system, which might be phonemes, signs, or glyphs (Frege, 1892; Heim & Kratzer, 1998; Lewis, 1976). To produce language is to map thoughts into these external symbols, and to understand language is to transduce from these external symbols back into the representations of thought.
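On this classical picture, production and comprehension are inverse mappings between internal representations and external symbols. A toy sketch, with a hypothetical thought type and one-entry lexicon invented purely for illustration:

```python
# Toy illustration of the classical view sketched above: language maps
# between internal thought representations and external symbol strings.
# The Thought type and lexicon are hypothetical, not from the paper.
from dataclasses import dataclass

@dataclass(frozen=True)
class Thought:
    predicate: str
    argument: str

LEXICON = {Thought("sleeps", "kim"): "Kim sleeps."}
INVERSE = {v: k for k, v in LEXICON.items()}

def produce(thought: Thought) -> str:
    """Map an internal representation to external symbols."""
    return LEXICON[thought]

def understand(utterance: str) -> Thought:
    """Transduce external symbols back into a representation of thought."""
    return INVERSE[utterance]

assert understand(produce(Thought("sleeps", "kim"))) == Thought("sleeps", "kim")
```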
chunk-id: 6
id: 2306.12672#6
The theoretical distinction between language and thought rests on multiple intersecting lines of evidence. Prior to learning language, infants are born equipped with a powerful toolkit for modeling and thinking about the world, including an understanding of physical objects and events, and the goals and actions of agents (Spelke, 2022; Spelke & Kinzler, 2007), and general abilities for learning statistics and structure (Saffran, Senghas, & Trueswell, 2001; Xu et al., 2021). Building on these foundations, children acquire language from relatively sparse input data, rapidly generalizing beyond the utterances they hear to produce and understand entirely new ones (Bloom, 2002; L. Gleitman, 1990; L. R. Gleitman, Cassidy, Nappa, Papafragou, & Trueswell, 2005; Landauer & Dumais, 1997; Pinker, 1998; L. Smith & Yu, 2008); they then use language to acquire new concepts they would not get merely from direct experience (Carey, 2009; Gopnik, 1996; Wellman & Gelman, 1992). Language and thought also appear to operate in distinct but interacting brain systems: neuroimaging and neurological studies reveal a “language network” specialized for processing sentences, functionally and anatomically separate from but closely connected to brain networks supporting other aspects of general cognition (Fedorenko & Varley, 2016; Mahowald et al., 2023).
chunk-id: 7
id: 2306.12672#7
These empirical findings have shaped decades of computational models in cognitive science and AI. To model the expressiveness of human cognition, an influential computational paradigm suggests that humans compose and execute mental programs in an internal language of thought (Fodor, 1975), a structured symbolic substrate for representing conceptual knowledge that provides a general interface to algorithms for problem solving and reasoning. These symbolic systems are not merely logic engines; they support our probabilistic inferences, and rich intuitive simulations (Goodman, Tenenbaum, & Gerstenberg, 2014; Oaksford & Chater, 2007; Russell & Norvig, 2021). This paradigm underlies many of the success stories in cognitive science and related applications in AI. It has influenced models that capture how people draw causal and explanatory inferences about facts and observations (Pearl, 1988; Pearl et al., 2000), learn and generalize concepts from few examples (Lake et al., 2017); plan actions over long time horizons and under complex conditions (Kaelbling & Lozano-Pérez, 2013; Russell & Norvig, 2021); imagine and predict the physical world (Battaglia, Hamrick, &
chunk-id: 8
id: 2306.12672#8
& Lozano-Pérez, 2013; Russell & Norvig, 2021); imagine and predict the physical world (Battaglia, Hamrick, & Tenenbaum, 2013; Ullman, Spelke, Battaglia, & Tenenbaum, 2017); and reason about other agents with their own beliefs and goals (C. Baker, Saxe, & Tenenbaum, 2011). Within linguistics and natural language processing, in turn, this paradigm underlies semantic parsing systems designed to map from human language into symbolic computational representations. It has yielded AI systems that could follow instructions (Tellex et al., 2011) and answer natural language queries with respect to structured knowledge representations (Klein & Manning, 2003; Liang, 2016; Steedman, 2011; Y. W. Wong & Mooney, 2007); as well as cognitive models that capture how human children learn the grammar and meaning of expressions in their native language (Abend, Kwiatkowski, Smith, Goldwater, & Steedman, 2017; Chater & Manning, 2006; Frank, Goodman, & Tenenbaum, 2009; Gauthier, Levy, & Tenenbaum, 2018; Goldwater, Griffiths, & Johnson, 2009; Perfors,
chunk-id: 10
id: 2306.12672#10
Despite this progress, however, modular and symbolic models of language and thought have been dogged by persistent critiques of their scalability and scope. Cognitive and AI researchers over the years have carved off specific domains of world knowledge, constructing bespoke representations to model them without a general account of whether they would generalize to all of human knowledge, or how they could be scalably learned. Semantic parsing systems inherited these critiques, and faced additional challenges in implementing the mapping from sentences into symbolic representations. These mapping functions were either hand-engineered or learned from strong supervision on specific domains of language, limiting them to brittle, imperfect models of the breadth and complexity of real human discourse.
chunk-id: 11
id: 2306.12672#11
In just the last few years, a serious challenge has emerged to the traditional view of language and thought as distinct but interacting components of the mind, each modeled using structured representations. Large language models (LLMs) use a new generation of attention-based deep neural networks to learn the probabilistic distributions of words from vast datasets of human language, generally training on orders of magnitude more data than a human encounters in their lifetime (Bommasani et al., 2021; T. B. Brown et al., 2020; OpenAI, 2023c; Rae et al., 2021; Vaswani et al., 2017). The underlying computational objective that drives these models is not itself new. LLMs follow in the tradition of distributional approaches to discovering structure in language (Firth, 1957; Harris, 1954; Osgood, 1952), which seek to extract representations of meaning from statistical patterns in how words are used in context (Dumais et al., 2004; Griffiths, Steyvers, & Tenenbaum, 2007; Mikolov, Sutskever, Chen, Corrado, & Dean, 2013; Sahlgren, 2008). What is new, however, is the scale and scope of
chunk-id: 12
id: 2306.12672#12
today’s distributional vision, which has expanded in stages. A first generation of LLMs, trained specifically to predict words in context, produced such fluent language that they challenged traditional symbolic approaches to modeling language (Devlin, Chang, Lee, & Toutanova, 2018; Peters et al., 2018; Radford et al., 2019). Their qualitative success, as well as internal representational probes, suggested that linguistic structures sufficient for grammatically coherent language could be learned entirely from modeling the statistics of words (Piantadosi, 2023; Tenney, Das, & Pavlick, 2019). By scaling to even larger datasets and neural networks, LLMs appeared to learn not only the structure of language, but capacities for some kinds of thinking; they could learn new words in context, and extract patterns in language from a few examples that they could generalize locally to similar cases (T. B. Brown et al., 2020). The most recent LLMs have been trained not only to model the statistics of language but explicitly to reason, with targeted supervision on
chunk-id: 13
id: 2306.12672#13
instruction following, writing code, and other forms of human dialog and feedback in conversational contexts (Chen et al., 2021; OpenAI, 2023a, 2023c; Ouyang et al., 2022). They produce such fluent language on a wide variety of tasks that many have begun to ask whether merely more training of this sort, with increasing scale, could learn representations sufficient for general intelligence (Bubeck et al., 2023). Proponents of the most extreme “scaling hypothesis” have argued that because language is used to express so much of human thought, a sufficiently large and performant predictive language model would effectively have to construct an internal model of all of cognition (Branwen, 2022).
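The distributional objective that this line of work scales up, predicting words from statistics of their contexts, can be stated in a few lines. A bigram model is the smallest instance of the idea; the toy corpus below is invented for illustration and stands in for web-scale text:

```python
# Smallest instance of the distributional objective behind LLMs: estimate
# the probability of the next word from co-occurrence counts. Bigram
# counts over a toy corpus stand in for a transformer trained at scale.
from collections import Counter, defaultdict

corpus = "the dog chased the cat . the cat chased the mouse .".split()

counts: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # tally how often nxt follows prev

def next_word_distribution(prev: str) -> dict[str, float]:
    """P(next | prev) by maximum likelihood over the toy corpus."""
    total = sum(counts[prev].values())
    return {w: c / total for w, c in counts[prev].items()}

print(next_word_distribution("the"))
# => {'dog': 0.25, 'cat': 0.5, 'mouse': 0.25}
```

The debate the surrounding passage describes is whether scaling this objective, from bigrams to transformers to instruction-tuned LLMs, eventually yields the systematic reasoning that the counts alone do not obviously encode.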
chunk-id: 14
id: 2306.12672#14
This theoretical vision has sparked both excitement and controversy, but proponents and critics agree that it raises its own questions about its long-term scalability—most significantly, what will be required to close the outstanding gaps between today’s LLMs and general cognitive models that reason systematically and consistently about the language they receive or produce. Current LLMs can produce impressive results on a set of linguistic inputs and then fail completely on others that make trivial alterations to the same underlying domain (Ullman, 2023); they mix confident answers to complex questions with equally confident, hallucinated language that does not reflect a consistent, calibrated notion of truth or belief (Bubeck et al., 2023; OpenAI, 2023c). These issues make it difficult to evaluate whether LLMs have acquired cognitive capacities such as social reasoning and theory of mind (Ullman, 2023), or to compare different kinds of world modeling and planning tasks (Valmeekam, Sreedharan, Marquez, Olmo, & Kambhampati, 2023). One approach to solving these problems is through additional data. Perhaps fully robust, systematic reasoning will finally emerge if models
chunk-id: 15
id: 2306.12672#15
are trained on still more language, or supervised more explicitly on data from complex reasoning tasks. This scaling route raises practical questions about whether it will be possible to acquire enough data to train such a model, as well as theoretical questions about whether more data and more parameters alone will in fact yield robust systems for thought. Another strategy in recent work seeks to build more robust cognitive capacities by augmenting LLMs with various external tools for structured representation and symbolic reasoning, such as calculators (Cobbe et al., 2021), logic engines (Weir & Van Durme, 2022), databases (Alon et al., 2022; Borgeaud et al., 2022; Izacard et al., 2022; Thoppilan et al., 2022), physics simulators (R. Liu et al., 2022), planners (B. Liu et al., 2023), and APIs for executing arbitrary code (Karpas et al., 2022; OpenAI, 2023c; Schick et al., 2023). But these new hybrid approaches
chunk-id: 16
id: 2306.12672#16
resurrect many of the same long-term scalability challenges that confronted earlier semantic parsing and knowledge representation systems, by designing a menagerie of bespoke representations and tools without a broader account of how they will scale towards general models of language and thought.
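The augmentation pattern these works share can be caricatured in a few lines: the model emits a structured tool call, an external symbolic engine executes it, and the result is returned. A toy dispatcher follows; the `CALL tool: args` format is a hypothetical convention, not the API of any of the cited systems:

```python
# Toy version of the tool-augmentation pattern described above: a language
# model emits a structured tool call, an external engine executes it, and
# the result is fed back. The "CALL tool: args" format is hypothetical.
import ast
import operator

def eval_arith(expr: str) -> float:
    """Safely evaluate +, -, *, / arithmetic via the AST (no bare eval)."""
    ops = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}
    def walk(node):
        if isinstance(node, ast.BinOp):
            return ops[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

TOOLS = {"calculator": eval_arith}

def run(model_output: str) -> str:
    """Dispatch 'CALL tool: args' outputs to the matching external tool."""
    if model_output.startswith("CALL "):
        tool, args = model_output[5:].split(":", 1)
        return str(TOOLS[tool.strip()](args.strip()))
    return model_output  # plain text passes through unchanged

print(run("CALL calculator: (127 + 5) * 3"))  # => 396
```

Each tool added this way fixes one failure mode; the passage's critique is that the collection grows without a unifying account of how tools and representations compose.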
2306.12672
17
In this paper, we consider a different approach to integrating the strengths of modern language models and classic symbolic architectures, one that draws on but also runs counter to recent trends in AI, in a sense flipping these scaling questions on their head. Instead of trying to turn models trained to predict language into models that might genuinely think—filling each gap in reasoning we discover through yet more

[Figure 1 graphic: "Approaches to language-informed thinking." Large language models and classical symbolic models sit at either end; in our framework, world knowledge maps to (define ...) statements, observations to (condition ...) statements, and questions to (query ...) statements, with natural language passed through a meaning function into the probabilistic language of thought, and an inference function producing distributions over possible worlds. Caption below.]
Figure 1: Human language understanding supports flexible inferences in a process we term language-informed thinking. Computational approaches to language-informed thinking sit on a neurosymbolic continuum: On one side, classical symbolic models (top right) yield systematic, structured inferences, but are typically limited to narrow linguistic domains and often require hand-engineering. On the other side, large language models (LLMs; top left) achieve remarkable facility with open-domain natural language, but struggle to ground reasoning in a consistent world state that supports coherent inferences, predictions and plans. Our rational meaning construction framework decomposes language-informed thinking into two modules: (1) A meaning function translates natural language into probabilistic programming language (PPL) statements that represent linguistic meaning with respect to a symbolic world model. (2) An inference function computes probabilities over the space of possible worlds consistent with and conditioned on information in the language. In the rest of this paper, we illustrate how our framework can combine the strengths of LLMs and PPLs, affording both broad coverage of natural language and a principled treatment of reasoning about uncertain events, outcomes, and scenarios.
data, new kinds of language training or linguistic prompting tricks, or by plugging in yet another external tool—we ask: what are the prospects for a unifying computational framework guided by the study of thought and language in the human mind and brain, as well as what we have learned from multiple eras of AI? Can we build intelligent architectures that use, learn and understand language as people do, informed by neuroscience constraints and developmental trajectories? That is, can we build models in which language is learned efficiently within one relatively small, modular computational system, which interfaces generally with other systems dedicated to robust world modeling and reasoning? What architecture lets language build on pre-existing capacities for symbolic world modeling and inference, while also allowing linguistic meanings and world knowledge to scaffold and bootstrap each other, as a learner’s experiences and competences grow? This paper attempts to show what such a model might look like—and how it can build theoretically and practically on the insights from both classical paradigms for language and thought, and the recent successes in statistical learning achieved by large language models. We propose a framework for intelligent computational architectures that reason about and learn from language, but we begin with a proposal for what it means to think. As in the traditional cognitive view, thinking at its core is constructing general-purpose representations for modeling the entities and events in the world, sufficient to support rational, coherent inferences under uncertainty.
Our proposal, which we call rational meaning construction, rests on the integration of two computational components, each of which we suggest can be instantiated using modern computational tools—a probabilistic language of thought for constructing structured models of arbitrary situations, which supports coherent belief updating and inferences over them; and a general mechanism for taking natural language and constructing meaning from it, represented as distributions over expressions in this language of thought (Fig. 1). We propose that probabilistic programs can formally instantiate the first component. They offer a structured representation for expressing novel situations and arbitrary problems with respect to a meaningful model over possible world states, a coherent notion of conditional belief updating, and a systematic framework for inferences with respect to queries and goals. We propose, in turn, that meaning construction can be modeled as translation from utterances in language to expressions in a general probabilistic programming language.
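To make this first component concrete, the sketch below renders the Bayesian tug-of-war model from Figure 2 as simple rejection sampling. It is written in Python for self-containment rather than in the Scheme-like probabilistic programming language the figure depicts, and the priors and laziness mechanism are illustrative assumptions, not the paper's exact model.

```python
import random

# A minimal sketch of the Figure 2 tug-of-war world model, using
# rejection sampling in Python. Priors here are illustrative.
PLAYERS = ["john", "mary", "tom", "sue"]

def sample_world():
    # Generative world model: each player has a latent strength and
    # may be lazy in any given match.
    strength = {p: random.gauss(0.0, 1.0) for p in PLAYERS}

    def team_strength(team):
        # A lazy player contributes only half their strength.
        return sum(strength[p] / 2 if random.random() < 0.25 else strength[p]
                   for p in team)

    def won_against(team1, team2):
        return team_strength(team1) > team_strength(team2)

    # Condition statement: John and Mary beat Tom and Sue.
    if not won_against(["john", "mary"], ["tom", "sue"]):
        return None  # rejection sampling: discard inconsistent worlds

    # Query statement: is Mary stronger than Tom?
    return strength["mary"] > strength["tom"]

samples = [w for w in (sample_world() for _ in range(20000)) if w is not None]
print(f"P(Mary stronger than Tom | observation) ~ {sum(samples) / len(samples):.2f}")
```

The same generative model answers arbitrary new questions once new observations are asserted; the condition and query statements, not the simulator, are what change from utterance to utterance.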
Theoretical and empirical results have long suggested that human languages implement locally compositional, efficiently learnable mappings between symbolic representations of thought and external symbol systems. We therefore propose that code-trained large language models can be viewed as in-principle implementations of broad, context-sensitive, and resource-rational meaning functions, in that they can be used to efficiently infer distributions over programs given language, drawing on stored, prior patterns in the background distribution of language and code. By integrating these two components, this paradigm suggests a general framework by which language can meaningfully relate to many fundamental aspects of cognition, modeling how we might condition on language to systematically update our beliefs, pose new questions and goals in language, and convey structured background information or even define new relevant concepts about a situation or about the world.
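The meaning function itself can be sketched just as compactly. Below, a few-shot prompt asks a code-trained model to translate English into Church-style statements; `call_llm` is a placeholder for any text-completion model, and the prompt pairs are our own illustrative examples, not prompts from this paper.

```python
# A schematic sketch of the meaning function: few-shot prompting a
# code-trained LLM to translate English into Church-style statements.
# `call_llm` is a placeholder for any text-completion model.

PROMPT_HEADER = """\
;; Translate each English sentence into a Church-style statement.
;; English: John and Mary beat Tom and Sue.
(condition (won-against '(john mary) '(tom sue)))
;; English: Is Mary stronger than Tom?
(query (> (strength 'mary) (strength 'tom)))
"""

def meaning_function(utterance: str, call_llm) -> str:
    prompt = PROMPT_HEADER + f";; English: {utterance}\n"
    return call_llm(prompt).strip()

# A stub model so the sketch runs end to end; a real system would
# dispatch the prompt to a code-trained language model here.
def stub_llm(prompt: str) -> str:
    return "(condition (won-against '(tom sue) '(john)))"

print(meaning_function("Tom and Sue beat John.", stub_llm))
```

The resulting statements are handed to the inference function, so that language updates the same world model that answers downstream queries.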
In Section 2, we give an overview of this framework, describing the overall structure and the more detailed rationale behind the computational components we build on in the remainder of this paper. We then describe a concrete but minimal implementation of this framework using contemporary probabilistic programming and language modeling tools, intended to demonstrate the basic computational components of this approach and elucidate the scope and scalability of the broader proposal. Given this general paradigm, we first illustrate the potential breadth of this approach for integrating meaning construction and reasoning, showing how it might address a core set of computational and cognitive domains that we communicate about in language (Fig. 2). Each of these sections uses minimal pedagogical examples intended to suggest how this approach integrates language with important bodies of work from computational cognitive science and artificial intelligence. We first show how this framework can condition on language in order to describe and reason about uncertain situations with respect to an ongoing discourse (Section 2.2), then show how this approach can be extended to reason about relational systems (Section 3.1), physical and perceptual scenes (Section 3.2), and social situations involving agents with goals and plans (Section 3.3).
We then turn to how this approach might begin to address core scalability challenges that confront traditional approaches to modeling thinking as symbol processing, whether logical or probabilistic. In Section 4, we show how language can support growing knowledge autonomously, without hand engineering, by using the rational meaning construction framework to construct a broad range of new concepts in existing models and even whole new world models, which in turn support coherent downstream reasoning. Ultimately, this paper is a prospective one, and the examples presented here are intended to convey a sufficiently concrete proposal to suggest avenues for future work. In Section 5, we outline what we see as some of the most significant open questions and future directions raised by this framework. These include theoretical questions that relate our approach to classical models of language, open cognitive directions for extending this approach to model language acquisition and production, and important engineering directions necessary for scaling inference, robust translation, and learning under this general paradigm. Finally, in Section 6, we conclude by looking ahead to the longer-term implications of this proposal for modeling intelligent systems that use, understand, and think about language as we do.
[Figure 2 graphic: four panels showing probabilistic reasoning (Bayesian tug-of-war), relational reasoning (kinship systems), perceptual and physical reasoning (visual and physical scenes), and social reasoning. Each panel pairs knowledge about the world with generative world models (define ...), observations about the world with condition statements (condition ...), and questions about the world with query statements (query ...). Caption below.]
Figure 2: Understanding language in four domains of reasoning that form the core of this paper. Probabilistic reasoning requires integrating sparse evidence to predict the outcomes of uncertain events, like the winners of tug-of-war matches. Relational reasoning involves maintaining and updating coherent beliefs about structured domains, like family trees, based on relational information. Perceptual and physical reasoning links language to our sensory and intuitive physical knowledge of objects in the external world, such as kitchen items on a tabletop. Social reasoning involves reasoning about the minds of other intelligent agents, such as how their goals, preferences, and circumstances shape their actions as they navigate in the world. Across all the domains, we present a unified framework that translates language into code in a probabilistic programming language to facilitate human-like reasoning.

# 2 Overview of the key ideas

The central goal of this paper is to propose a new computational framework, rational meaning construction, which relates language to thought. This framework licenses a concrete class of computational architectures for building intelligent systems that use language, which we propose can be implemented using modern AI tools. In this section, we briefly overview the key ideas that form the basis of this proposal. We draw on three observations from a rational, probabilistic perspective on biological intelligence and human language:
A rational perspective on intelligent thought. Biological intelligence encompasses many computational capacities. The foundational notion of thought we focus on here centers on rational inference and decision making in service of one’s goals (Anderson, 1990; Chater & Oaksford, 1999). Under this perspective, thought comprises systems for modeling the world. These internal world models allow us to infer the particulars of a situation from whatever information is at hand, evaluate alternative world states and imagine possible future ones, and decide on actions that might bring one towards valuable future states in the world. Following extensive work in computational cognitive science, we view the world models that support biological intelligence as structured and probabilistic (Goodman et al., 2014; Griffiths, Chater, Kemp, Perfors, & Tenenbaum, 2010; Lake et al., 2017), designed to integrate the noisy evidence an agent receives into causal, explanatory models that allow it to maintain coherent beliefs about the world and generalizably infer consistent, useful predictions and plans.
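Stated as an equation, this picture of coherent belief updating is the standard Bayesian one (our summary notation, not a formula from this paper):

```latex
P(\text{world} \mid \text{evidence}) \;\propto\; P(\text{evidence} \mid \text{world}) \, P(\text{world})
```

Predictions and plans then follow from this posterior, for example by choosing actions with high expected value under the worlds it considers likely.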
This basic, underlying view of intelligent thought draws on empirical evidence from essentially every species with a brain, from bees (Biernaskie, Walker, & Gegear, 2009; R. F. Wang & Spelke, 2002) to zebrafish (Bolton et al., 2019; R. E. Johnson et al., 2020), mice (English, Nejad, Sommerfelt, Yanik, & von der Behrens, 2023), birds (Isomura, Parr, & Friston, 2019), and primates (Khalvati, Kiani, & Rao, 2021). Informally, a rational view of thought can be summarized as the ability to solve useful problems given our internal models of the world, ranging from navigation and foraging to physical prediction and social reasoning. Against this overarching picture of thought, human intelligence further stands out for its flexibility and expressiveness. We invent our own problems, along with new approaches to solving them, rather than sticking to a limited set of largely innate goals and strategies (Tomasello, 2022). A few other species, such as non-human primates, dolphins, and some birds, are creative problem-solvers and problem-creators, but none come close to the range of goals humans can adopt (Chu & Schulz, 2023).
Uniquely in the natural world, humans think about and come to understand problems far beyond the narrow range necessary for our immediate survival, considering goals and questions that draw on abstract, culturally constructed, and even entirely hypothetical systems for modeling and conceptualizing the world (Dennett, 2017).
A rational perspective on language. As with thought, language also encompasses many systems and capacities. This paper focuses on the class of problems we refer to as language-informed thinking, the general means by which language informs the inferences and decisions of an intelligent agent. We take a broadly rational perspective on language: we consider language to be a system of goal-directed actions for externalizing and communicating thoughts to other intelligent beings (Chater & Manning, 2006; Gibson, 2014; Goodman & Frank, 2016). In this context, we frame the problem of deriving meaning as inferring the mappings from a language's system of external communicative signals into the representations of rational thought. It is worth highlighting that thought does not require language and is distinct from language in the human brain (Mahowald et al., 2023). Non-human species and pre-verbal infants (Spelke, 2022) are clearly capable of modeling the world in service of their inferences and goals without language. But for humans, language clearly plays a profound role in determining the problems we think about, and how we think about them. Our natural languages allow us to communicate an extraordinarily broad range of our thoughts about the problems we pose and solve, including our abstract and general world knowledge, our specific beliefs about a situation, the particular questions or goals we have or want to pose to others, and our approaches to reasoning about them.
A resource-rational perspective on language and thought. Finally, our integrated computational approach to language and thought builds on extensive evidence that humans are resource-rational thinkers: under finite constraints of time and memory, we rationally allocate computational resources in order to make useful inferences and plans (S. J. Gershman, Horvitz, & Tenenbaum, 2015; Lieder & Griffiths, 2019). Resource-rational agents amortize computational effort across prior experience and problems, storing and reusing prior computation for similar new problems that we encounter in the future (S. Gershman & Goodman, 2014; Le, Baydin, & Wood, 2017). Certain domains of inference share more structure than others, and evidence suggests that we therefore heavily amortize them. Prior work, for instance, suggests that computations involved in basic perceptual activities (Brooke-Wilson, 2023; Dasgupta & Gershman, 2021; Fodor, 1983), such as object recognition under common lighting conditions, are highly amortizable from reusable patterns in computation that are learnable and shared across a background distribution of perceptual instances. This view suggests why fast, bottom-up pattern recognition models have made great advances in modeling perception in recent years, while it has proved much more challenging to amortize the wide range of flexible inferences required for arbitrary problem solving.
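To make the notion of amortization concrete, the following is a deliberately minimal Python sketch (ours, not the paper's): inference results are cached so that a repeated problem is paid for only once. Exact-repeat caching is the crudest form of amortization; learned recognition models extend the same idea to merely similar problems.

```python
import time
from functools import lru_cache

def infer_from_scratch(evidence: frozenset, question: str) -> float:
    """Stand-in for costly model-based inference (sampling, search, ...)."""
    time.sleep(0.1)  # simulated computational cost
    return (hash((evidence, question)) % 100) / 100.0  # placeholder answer

@lru_cache(maxsize=None)
def amortized_infer(evidence: frozenset, question: str) -> float:
    # Pay the inference cost once; identical problems afterwards hit the cache.
    return infer_from_scratch(evidence, question)

amortized_infer(frozenset({"grass is wet"}), "did it rain?")  # slow: runs inference
amortized_infer(frozenset({"grass is wet"}), "did it rain?")  # fast: cached
```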
We propose an analogous resource-rational perspective on the kinds of computation implicated in language-informed thought. Under almost every theoretical and empirical account of linguistic structure and semantics, the mappings between language and meanings should be highly amortizable across the background distribution of language: there are structured, systematic, and learnable patterns in how units of language map onto units of thought. The idea that meaning construction should be highly amortizable follows from our view of language itself as an efficient communicative system. Extensive empirical evidence suggests that communicative pressures shape how language maps onto meanings at every level of linguistic structure, from individual morphemes (Bybee, 1985) to patterns in how common syntactic frames communicate meaning (L. Gleitman, 1990; Grimshaw, 1981), and even reusable pragmatic implications present across common discourse situations (White, Mu, & Goodman, 2020). But while we take the view that a resource-rational agent should intelligently learn and reuse prior computation when possible, we do not view language-informed thinking, or thinking in general, as solely a matter of learning and interpolating over statistical patterns from prior experience.
When we think, including when we think about the meanings we recover from language (to update our beliefs, to follow instructions, or to answer questions posed in language), we must be able to flexibly model arbitrary situations and support capacities for general problem solving, including inference, planning, and simulation, under a wide range of new and unencountered circumstances.
The efficient learnability of human language also highlights that, in many senses, the computational relationship between language and thought in humans is almost the inverse of that in today's LLMs. For humans, language could be characterized as an emergent property of thinking. Infants can model the world and draw inferences well before they know language (Gopnik, 1996; Spelke, 2022), and reliably acquire complete linguistic capabilities from exposure to relatively tiny amounts of language (R. Brown, 1973). Congenitally deaf humans born with no language input spontaneously develop languages to communicate their thoughts, with the same basic hallmarks of mature natural languages (Goldin-Meadow, 2012; Pyers, Shusterman, Senghas, Spelke, & Emmorey, 2010; Senghas, Kita, & Ozyurek, 2004). This paper seeks to understand and model the cognitive and computational structures underlying this human scaling route to intelligence and language use: one that begins with robust capacities for thought, and scaffolds language efficiently on top of them, so that language then offers a powerful tool for driving and constructing new thought.
# 2.1 Our proposal: A framework for modeling rational meaning construction

The perspective we offer above draws from theoretical and empirical work that precedes this paper. Our core contribution in this paper is to propose, in light of these observations, a new computational framework that seeks to unify the prior symbolic, probabilistic-inference, and statistical-learning traditions, and to take advantage of the clear computational advances made by modern LLMs as learned statistical models of language. We describe a framework for rational meaning construction in which linguistic meaning is formalized as a context-sensitive mapping from natural language to a distribution over expressions in a probabilistic language of thought (PLoT) for rational world modeling and inference. Under this framework, we then propose that large language models trained on language and code can be used to implement meaning functions in a resource-rational architecture: they can implement learned, broad-coverage mappings between language and code, and they can be understood as part of a human-like, resource-rational system that efficiently infers these mappings using stored patterns amortized from the prior joint distribution over language and code. This motivates the concrete architecture we propose and illustrate throughout the remainder of this paper, and its two main components for modeling thinking and modeling language relative to thinking, or how language informs thinking.

# 2.1.1 Modeling thinking
We propose implementing thinking using probabilistic programs as a general representational substrate for building world models and specifying rational inferences over them. This proposal builds on prior work in cognitive science and AI formalizing how a broad class of problems can be expressed as probabilistic programs (Chater & Manning, 2006; Goodman et al., 2014), following a generic inference query motif (Goodman, Mansinghka, Roy, Bonawitz, & Tenenbaum, 2008): a probabilistic program that combines a generative world model encoding abstract, causal beliefs about probable world states; specific evidence that an agent conditions on; and a particular query posed as the question or goal for thinking. Inference to solve a problem consists of formally computing or sampling from a probability distribution over answers to this question, specified by the world model and conditions. This computational proposal forms the backbone of the probabilistic language of thought model of general human cognition (Goodman et al., 2014), and has been used empirically to model a wide range of human inferences,
including those that draw on visual perception (V. K. Mansinghka, Kulkarni, Perov, & Tenenbaum, 2013), physical simulation (Battaglia et al., 2013), and social reasoning (C. Baker et al., 2011). It is designed explicitly to formalize a central property of human thought: the capacity to expressively and flexibly pose problems involving entirely novel situations and goals, and to solve them relative to a computable representation of the world and internal belief.
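As a concrete illustration of the inference-query motif, here is a minimal Python sketch. The sprinkler world model, its probabilities, and all function names are our own illustrative choices; the paper's implementations use full probabilistic programming languages. A generative world model samples probable world states; evidence is imposed by conditioning (here, crude rejection sampling); the query extracts the answer of interest.

```python
import random

def world_model():
    """Generative world model: abstract causal beliefs about probable world states."""
    rain = random.random() < 0.3
    sprinkler = random.random() < (0.1 if rain else 0.5)
    grass_wet = rain or sprinkler or (random.random() < 0.05)
    return {"rain": rain, "sprinkler": sprinkler, "grass_wet": grass_wet}

def query(model, condition, question, n_samples=100_000):
    """Inference-query motif: condition the model on evidence, then estimate
    the distribution over answers by rejection sampling (assumes at least
    one sampled world satisfies the evidence)."""
    answers = []
    for _ in range(n_samples):
        world = model()
        if condition(world):  # keep only worlds consistent with the evidence
            answers.append(question(world))
    return sum(answers) / len(answers)

# Evidence: "the grass is wet."  Question: "did it rain?"
p_rain = query(world_model,
               condition=lambda w: w["grass_wet"],
               question=lambda w: w["rain"])
print(f"P(rain | grass wet) ≈ {p_rain:.2f}")
```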
# 2.1.2 Modeling language relative to thought

Given this model for thought, we propose formalizing rational meaning construction as a broad-coverage, contextual translation function that maps language into a distribution over expressions in a probabilistic language of thought. This proposal builds most closely on and draws inspiration from prior efforts to articulate a probable world semantics for natural language (Goodman & Lassiter, 2015), which express how language can compactly convey uncertain propositions and vague meanings with respect to a formal probabilistic generative model. It also builds on the longer history of symbolic semantic theories we overview in the introduction, including formal semantics theories that model language as mapping into formal propositions over possible worlds (e.g., Heim & Kratzer, 1998; Lewis, 1976), and semantic parsing systems (e.g., Abend et al., 2017; Klein & Manning, 2003; Liang, 2016; Steedman, 2001; Y. W. Wong & Mooney, 2007) that map language into formally executable program expressions.
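Schematically, and in our own notation rather than the paper's, the proposal can be written as a conditional distribution over PLoT expressions, with downstream language-informed inference marginalizing over candidate translations:

\[
  \mathrm{meaning}(u, c) \;=\; P(e \mid u, c), \qquad e \in \mathcal{L}_{\mathrm{PLoT}},
\]
\[
  P(\text{answer} \mid u, c) \;=\; \sum_{e \in \mathcal{L}_{\mathrm{PLoT}}} P(\text{answer} \mid e)\, P(e \mid u, c),
\]

where $u$ is an utterance, $c$ its context, and $P(\text{answer} \mid e)$ is computed by probabilistic inference over the world model that $e$ extends or queries.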
Our goal, however, is to broaden and generalize these framings to suggest a general framework for modeling how language can interface with and inform such a broad swath of human cognition. By positing that meaning is a general mapping between sentences and expressions in a probabilistic language of thought, we believe that a rational meaning construction approach can elaborate on and concretely model the core desiderata of a coherent theory of linguistic meaning: modeling how meanings drive inferences about what is true and probable; formalizing how language can pose propositions and queries that are then evaluated with respect to an internal model over probable worlds; and relating meaning to the general computational systems for representing, thinking about, and receiving new information about the world from broader cognition.
This proposal suggests a wide class of possible architectures that map from language into probabilistic programs: in principle, any general mapping function that expresses a distribution over programs conditioned on sentences in context. Under this umbrella of possible implementations, we propose, finally, that large language-to-code models can be used to instantiate these meaning functions. Unlike prior semantic parsers or attempts to hand-implement mappings between language and code, LLMs offer a concrete means of instantiating far broader-coverage mappings between human sentences and meanings than were previously possible. They are also context-sensitive, in that they can construct meanings for an utterance that condition both on the general distribution of language and thought and on the local linguistic and reasoning context. They can condition translation on a local discourse context, when prompted with prior utterances, and on a local problem under consideration, when prompted with existing code in a probabilistic program; the sketch below illustrates this context sensitivity.
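As a concrete illustration, consider how the same utterance can be translated differently depending on the discourse context. This is a minimal sketch: the predicate `won-against` and the player names are illustrative assumptions rather than the paper's verbatim code, and the `(condition ...)` expressions stand for output that would be spliced into an enclosing Church query program.

```scheme
;; Same utterance, two discourse contexts: the translation resolves
;; the pronoun "He" against the previous sentence in the prompt.

;; Context: "Josh won against Lio."   Utterance: "He then beat Alex."
(condition (won-against '(josh) '(alex)))

;; Context: "Gabe won against Lio."   Utterance: "He then beat Alex."
(condition (won-against '(gabe) '(alex)))
```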
By using LLMs to map between language and code, this proposal is also closely related to the recent lines of work we review in the introduction that seek to augment and connect LLMs with various structured and symbolic reasoning tools: both domain-specific reasoning engines like planners and physics engines (e.g., B. Liu et al. (2023); R. Liu et al. (2022)), and more general APIs for code execution (e.g., Karpas et al. (2022); OpenAI (2023b); Schick et al. (2023)). As we demonstrate throughout this paper, however, we propose that the probabilistic language of thought can offer a cognitively-motivated, unifying symbolic substrate for interfacing between language and many core aspects associated with general cognition.
It provides a general motif for structuring and constructing generative world models, which can nest calls to other domain-specific systems (such as planners and physics engines), and an overarching framework for modeling how diverse kinds of observations can be used to update these models and answer new queries, framed as Bayesian conditioning and inference; a sketch of this nesting pattern follows. With respect to the more general landscape of large statistical language models, this proposal finally suggests one way to situate the strengths of LLMs within a more human-like, modular framework for language-informed thinking. Rather than look to statistical patterns to capture all of the ways we think, plan, and reason about language, this resource-rational approach seeks to ground distributional aspects of language in a framework that can leverage learned prior patterns when they are useful, while also modeling how language can construct and relate to coherent world models and algorithms for explicit, novel decision making and inference.
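To make the nesting idea concrete, here is a hedged sketch in Church. Everything in it is hypothetical: the toy function `simulate-final-x` is a placeholder for a real physics-engine call, not anything from the paper. It shows only the general motif, in which a domain-specific module computes deterministic consequences inside the generative model, and observations about its outputs become ordinary Bayesian conditions.

```scheme
;; Hypothetical sketch: nesting a domain-specific module inside a
;; generative model. A real system would call out to a physics
;; engine; here a toy linear function stands in for the simulator.
(rejection-query
 ;; latent cause: an unknown push force
 (define push-force (gaussian 0 5))
 ;; stand-in for a nested simulator call
 (define (simulate-final-x force) (+ 10 (* 2 force)))
 ;; query: how hard was the push?
 push-force
 ;; condition: we observed the block ending up past position 15
 (> (simulate-final-x push-force) 15))
```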
# Illustrating the architecture by example

This general architecture is best explained through concrete implemented examples, which we give in the next sections. For each of the four domains of reasoning shown in Fig. 2, we work through a representative dialog between a speaker of English and our language-informed thinking computational architecture, which could stand in for how we model another human being's understanding of and thinking about the speaker's language, or for the ways we hope a human-like AI system would respond. For pedagogical reasons, we have chosen to implement these examples using one particular probabilistic programming language and one particular language-to-code model. These particular tools are not necessarily the most performant or scalable AI solutions, nor the best accounts we have of the corresponding components of the human cognitive architecture. Nevertheless, they are familiar and simple, and they provide the most direct route we know to illustrate our ideas in ways that others can also experiment with. To elaborate on these choices:
• The probabilistic language of thought we use to express inference problems is Church (Goodman et al., 2008), a Turing-universal probabilistic programming language built on top of the functional programming language Scheme. We use the WebChurch dialect, which implements several general inference procedures, though we have chosen the simplest, most general, and least efficient of them: rejection sampling. Inference proceeds by drawing samples from the prior over world states described by the generative model and rejecting those that fail to satisfy the constraints of any observation conditions; a minimal sketch of this pattern follows this list. The samples that remain constitute a posterior sample over possible worlds consistent with the observed information, sufficient to answer the queries under consideration in the discourse. Other similarly functional PPLs, such as WebPPL or Gen, could have been chosen instead. In Section 5, we discuss future directions for extending and scaling inference beyond these simple illustrative implementations.
• The language-to-code model we use to amortize meaning construction over programs is Codex (Chen et al., 2021), a GPT-3-based language model fine-tuned on source code, which learns pairings between natural language and code from commented programs on GitHub and other sources; a sketch of how such comment-code pairings support translation also follows this list. Since the release of Codex, many other language-to-code models have been developed, and more recent GPT-based language models are now routinely trained on large amounts of source code; we believe these could be used to similar effect. In Section 5, we also discuss future directions for more cognitively plausible training and updating of neural models that amortize inference in joint distributions over natural language and probabilistic languages of thought.
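To ground the first choice, here is a minimal sketch of rejection sampling in Church. The toy model is ours, not the paper's appendix code; it simply shows the shape of the query form: the definitions describe the generative model, the second-to-last expression is the query, and the final expression is the condition that samples must satisfy.

```scheme
;; Minimal rejection-sampling sketch in Church (WebChurch-style).
;; Draw from the prior; keep only samples where the condition holds.
(define samples
  (repeat 1000
          (lambda ()
            (rejection-query
             (define a (flip 0.5))
             (define b (flip 0.5))
             ;; query: was the first flip heads?
             a
             ;; condition: at least one flip was heads
             (or a b)))))
;; The fraction of #t values in samples estimates P(a | a or b) = 2/3.
```

To ground the second choice, the sketch below shows the kind of comment-code pairing that lets a code model act as a meaning function. The setup, player names, and helper definitions are illustrative assumptions that loosely echo the classic tug-of-war model, not a reproduction of the paper's prompts.

```scheme
;; Illustrative prompt-style pairing of English sentences (as
;; comments) with Church expressions; a code model trained on
;; commented programs can complete the next such pair.
(rejection-query
 ;; world-model definitions
 (define strength (mem (lambda (p) (gaussian 50 20))))
 (define (pulling p) (if (flip 0.3) (/ (strength p) 2) (strength p)))
 (define (won-against t1 t2)
   (> (sum (map pulling t1)) (sum (map pulling t2))))
 ;; "Who is stronger: Tom or Bob?"  ->  the query expression:
 (> (strength 'tom) (strength 'bob))
 ;; "Tom beat Bob."  ->  the condition expression:
 (won-against '(tom) '(bob)))
```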
Finally, before turning to the examples, we want to add an important note about our intentions and goals. The examples are designed to be illustrative and pedagogical: we choose them for their simplicity and clarity, and to show how prior empirical and computational work from cognitive science can be related to language under this general framework. Each example gestures at a larger domain of reasoning, though, of course, each domain is much broader than what we can implement here. Each example is also representative of a wide class of computational cognitive models that can be instantiated in a probabilistic language of thought, and which we propose can be integrated with natural language inputs and outputs under a rational meaning construction framework. In each section we therefore also discuss how this framework might be extended, and what further work would be necessary, to scale from these examples toward a richer model of language in relation to those domains.
We also hope that these examples, and other variations that elaborate on them and on the core domains of reasoning we discuss here, will offer useful starting points for more rigorous, systematic, cognitively-oriented evaluation and interpretation of the reasoning processes emergent in large language models and other language-based AI systems. In our own preliminary evaluations of these domains, we find that current large language models show many of the properties we discuss in the introduction. In some cases they appear to implicitly approximate the representations and algorithms we seek to model explicitly. In others, particularly with more complex modifications beyond these simple examples, we find that large language models left to their own devices produce outputs that diverge from our intuitions. We seek here to model the representations with which people make meaning from language in relation to all of these domains, but hope that these frameworks will be useful for understanding other computational systems that use language as well, including interpreting the representations that large language models already learn or should seek to acquire.

# Graphical conventions

Throughout the examples presented in this paper:

• Translations mapping from language into probabilistic programs, produced by Codex, are indicated by a neural network icon.
• Probabilistic inferences, performed by Church, are indicated by a cog icon.
# 2.2 Understanding language with probabilistic reasoning

To illustrate our framework, let's consider a concrete scenario that involves reasoning from language in the face of uncertainty. Suppose a friend is telling you about a tug-of-war tournament that took place the prior weekend, in which the authors participated:

Right off the bat, Josh won against Lio. He then proceeded to claim victory against Alex. Even working as a team, Lio and Alex still could not beat Josh!

In order to understand this story, it is useful to construct a little mental model: there are different players, they face each other solo or in teams, and, based on his track record, Josh appears to be particularly strong. Now, suppose your friend tells you about a newcomer:

In a huge upset, Gabe managed to best Josh in the fourth round.

Maybe Gabe is even stronger than Josh! Or perhaps Josh was simply feeling lazy in the last match, in which case Gabe might not actually be so strong. To clarify, you might ask a question:

Who is stronger: Gabe or Josh?

Your friend's answer, which might itself express uncertainty, will nevertheless provide further information for you to incorporate into your understanding.
In making meaning from language about a scenario like the above, you are engaging in probabilistic reasoning: integrating over different possibilities in order to infer likely explanations. People are remarkably proficient at making inferences from exactly this kind of sparse evidence. Sometimes, we acquire this evidence through direct experience—by watching the tournament, for instance—but often, this kind of information comes to us through language that cues us to update our beliefs accordingly. Critically, in order to reason consistently, we need to represent core aspects of the situation: who are the different actors, what events took place, and what inferences have we already made? To this end, it is extremely useful to have a world model, which we defined earlier as a probabilistic generative model that encapsulates the key mechanics of a domain and facilitates coherent, causal explanations of events. In this section, our aim is to further formalize what exactly we mean by world models and how large-scale neural models might serve as an interface between natural language and these kinds of cognitive representations.

World models as generative programs. The core of each example in this paper is a probabilistic generative model that defines the mechanics of a domain. For the purposes of this demonstration, and throughout Section 3, we focus on reasoning from language given a pre-specified world model. Later, in Section 4, we show how language can be used to grow out and construct new world models.
As a playground for this initial demonstration, we consider the “Bayesian tug-of-war,” a classic experimental domain in cognitive science that requires making inferences about the latent traits of individuals from sparse evidence. Prior work establishes that Bayesian inference in a probabilistic generative model closely captures people’s predictions about scenarios in the tug-of-war (Gerstenberg & Goodman, 2012; Goodman et al., 2014), and that simple sentences can be mapped onto queries in this model (Goodman & Lassiter, 2015). Here, we build on this work to give an account of how people might turn open-ended natural language into statements in the probabilistic language of thought.
In tug-of-war, we start with a generative model of a tournament in which players of varying strengths compete in a series of matches, facing off either solo or as part of fluid teams (Fig. 3A). Each player has a latent strength value randomly sampled from a Gaussian distribution (with parameters arbitrarily chosen as µ = 50 and σ = 20). As an observer, our goal is to infer the latent strength of each individual based on their win/loss record. However, players sometimes don’t pull at their full strength, and each player has a different intrinsic “laziness” value (uniformly sampled from the interval [0, 1]) that describes how likely they are to be lethargic in a given match. The full Church code for the tug-of-war is given in Appendix A.1.1; a minimal sketch appears below.

Linguistic meanings as probabilistic program expressions. While the generative model defines the generic mechanics of the domain, we want to be able to talk about specific people and events. In our framework, we focus on two kinds of linguistic utterances: statements that convey new information about the world, which map onto condition expressions, and questions, which map onto query expressions.
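The following is a minimal Church sketch of the generative model just described. It is an illustrative reconstruction, not the paper's Appendix A.1.1 code: the names, and the convention that a lazy player pulls at half strength, follow the classic tug-of-war model from prior work and may differ from the appendix in details.

```scheme
;; Sketch of the tug-of-war generative model (WebChurch-style).
;; Latent, persistent traits are memoized per player.
(define strength (mem (lambda (player) (gaussian 50 20))))
(define laziness (mem (lambda (player) (uniform 0 1))))

;; In any given match, a player is lazy with probability equal to
;; their laziness trait; lazy players pull at half strength
;; (the halving convention is an assumption of this sketch).
(define (pulling player)
  (if (flip (laziness player))
      (/ (strength player) 2)
      (strength player)))

(define (total-pulling team) (sum (map pulling team)))
(define (won-against team1 team2)
  (> (total-pulling team1) (total-pulling team2)))

;; Against this model, the story and question above translate
;; roughly as conditions and a query, e.g.:
;;   "Josh won against Lio."             -> (won-against '(josh) '(lio))
;;   "Lio and Alex could not beat Josh." -> (won-against '(josh) '(lio alex))
;;   "Who is stronger: Gabe or Josh?"    -> (> (strength 'gabe) (strength 'josh))
```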
Observations provide information about people, objects, and events in the world; e.g., “Josh faced off against Lio and won.” In our framework, we translate observations into condition statements in Church, which update the state of the world model to reflect new facts. Note that condition statements have no return value; instead, they constrain the world model so that downstream inferences must be consistent with the conditioned facts.
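For example, this observation translates to the same conditioning statement shown for “Josh won against Lio” in Fig. 3C–D:

;; Josh faced off against Lio and won.
(condition (won-against '(josh) '(lio)))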
(A) Generative world model:

(define strength
  (mem (lambda (player) (gaussian 50 20))))

(define laziness
  (mem (lambda (player) (uniform 0 1))))

(define (team-strength team)
  (sum (map (lambda (player)
              (if (flip (laziness player))
                  (/ (strength player) 2)
                  (strength player)))
            team)))

(define (won-against team-1 team-2)
  (> (team-strength team-1) (team-strength team-2)))

(B) Translation examples for LLM prompting:

;; John and Mary won against Tom and Sue.
(condition (won-against '(john mary) '(tom sue)))

;; Sue is very strong!
(condition (> (strength 'sue) 75))

;; If Sue played against Tom, who would win?
(query (won-against '(sue) '(tom)))

(C) Natural language → (D) Language of thought:

;; Josh won against Lio.
(condition (won-against '(josh) '(lio)))

;; Josh proceeded to claim victory against Alex.
(condition (won-against '(josh) '(alex)))
Figure 3: Illustration of probabilistic reasoning via language-to-code translation in the tug-of-war domain. (A) The generative model defines two latent traits, strength and laziness, and specifies how these interact to determine team-strength. By combining (A) and (B), we can few-shot prompt an LLM to translate open-ended natural language (C) into Church statements (D) that capture linguistic meaning with respect to the domain. The resulting probabilistic inferences transparently represent the model’s beliefs and naturally capture human-like intuitions about players’ latent traits.

Questions seek information in the face of uncertainty about the world; e.g., “Would Josh beat Gabe if they played again?” In our framework, we translate questions into query statements in Church, which evaluate the quantity of interest. Calling query triggers a probabilistic computation that simulates possible worlds under the model, constrained by any observations so far. The query expression is evaluated in each simulated world, yielding multiple samples that form a posterior distribution over the value of interest.
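The question above, for instance, translates to a single query statement, following the pattern of the query example in Fig. 3B:

;; Would Josh beat Gabe if they played again?
(query (won-against '(josh) '(gabe)))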
Throughout the examples in this work, we freely interleave query and condition statements, much as questions might occasionally arise between statements of fact in a natural dialogue. Implementationally, this behavior is achieved through a read-evaluate-print loop (REPL) inspired by Venture (V. Mansinghka, Selsam, & Perov, 2014), which evaluates queries against all condition statements that have appeared up to that point in the dialogue history. In our model, we assume that the user specifies whether each utterance is a condition or a query, but LLMs could likely classify unannotated utterances accurately.

Translating from natural language to program expressions. Inspired by the work of Goodman and Lassiter (2015), if we had some way to translate linguistic utterances into probabilistic program statements, we could perform a wide variety of probabilistic inferences from plain English. Until recently, however, it was unclear how to construct a meaning function sufficiently general to translate open-ended natural language into highly structured expressions compatible with a Church model. Our core observation is that language-code LLMs have many of the properties necessary to serve as a useful meaning function: broad-coverage exposure to natural language, a robust capacity to model joint language-code text distributions, and the ability to quickly grasp domain-specific syntax and semantics from a few examples.
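Concretely, a dialogue unfolds as an interleaved transcript like the following (an illustrative sketch; each query is evaluated against every condition issued so far):

;; Josh faced off against Lio and won.
(condition (won-against '(josh) '(lio)))
;; How strong is Josh?
(query (strength 'josh))    ; posterior reflects one observed win
;; Josh proceeded to claim victory against Alex.
(condition (won-against '(josh) '(alex)))
;; How strong is Josh now?
(query (strength 'josh))    ; posterior reflects both observed wins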
In this work, we leverage the few-shot prompting capabilities of one such LLM, the Codex model from OpenAI, to induce a translation model from English to Church code. As it turns out, we only need to provide a small handful of example translations (represented in Fig. 3B) to achieve a variety of interesting behaviors. To translate a new language utterance to Church, we simply concatenate the generative model (full text in Appendix A.1.1) and the translation examples (full text in Appendix A.1.2) into a prompt whose final line is the utterance. We then generate from Codex, which, based on the comment-code pattern in the prompt, infers that the completion should be written in Church, using the function definitions and constructs provided in the prompt.
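Schematically, the assembled prompt has the layout below (a sketch, not the full prompt text, which appears in Appendix A.1.1–A.1.2; the final utterance and its completion here are our own hypothetical example, modeled on Fig. 3B):

;; --- [1] Generative world model, verbatim (Appendix A.1.1) ---
(define strength (mem (lambda (player) (gaussian 50 20))))
;; ... remaining definitions: laziness, team-strength, won-against ...
;; --- [2] Few-shot translation examples (Appendix A.1.2) ---
;; John and Mary won against Tom and Sue.
(condition (won-against '(john mary) '(tom sue)))
;; ... remaining paired examples ...
;; --- [3] The new utterance, as the final comment line ---
;; Sue is quite weak.
;; --- [4] Codex's completion, e.g.: ---
(condition (< (strength 'sue) 25))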
Notice the high degree of variation in phrasing and lexical choice in Fig. 3C; the utterances express match outcomes in quite different words, yet Codex still maps them all to the won-against function. Here, we start to see some of the advantages of using an LLM over more traditional semantic parsing techniques like CCG parsers (Artzi, Lee, & Zettlemoyer, 2015; Artzi & Zettlemoyer, 2013). Because the model is pre-trained on a vast amount of linguistic data, it fluently handles many different kinds of linguistic variation. However, by including the Church generative model in the prompt, we can effectively constrain the output space; the model infers that the generated code should use the functions defined in the generative model.
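For instance, even a passive-voice paraphrase (an illustrative utterance of our own, not drawn from the figures) should map onto the same expression as the corresponding Fig. 3B example:

;; Tom and Sue were defeated by John and Mary.
(condition (won-against '(john mary) '(tom sue)))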
As a semantic parsing tool, this combination of pre-training and prompting manages to achieve broad invariance to spurious linguistic variation while remaining sensitive to wording choices that might affect meaning. We can see this tradeoff at work in Fig. 3C, where the translation uses a negation, closely reflecting the structure of “Lio and Alex still could not beat Josh.” Of course, there are multiple aspects of the utterance that this translation does not capture (e.g., “Even working as a team...” suggests that Lio and Alex’s efforts were well-coordinated, as opposed to something like “Stepping on each other’s toes the whole match...,” which would imply the opposite). Our point is not that the LLM translation perfectly captures all aspects of the utterance meaning, but rather that it encodes those that are relevant to and compatible with the domain model so as to facilitate downstream reasoning.
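The negated translation referenced here has roughly the following form (a reconstruction from the description above; the exact generated code may differ):

;; Even working as a team, Lio and Alex still could not beat Josh.
(condition (not (won-against '(lio alex) '(josh))))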
Reasoning about scenarios with probabilistic inference. So far, we’ve illustrated how we might condition a PLoT model on natural language, but what about reasoning? After hearing the information in Fig. 3C, we might assume that the player named Josh is quite strong. Exactly how strong is Josh, though? And how likely is it that he would beat another player who isn’t Lio or Alex? Just as we used Codex to translate facts into condition statements, we can use it to translate questions into query statements in Church. The Church inference engine then automatically simulates scenarios (in this case, 1000 times) that are consistent with the given condition statements in order to produce an approximate posterior distribution over each query.
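The first of these questions, for instance, becomes a direct query on the latent variable (a sketch in the Fig. 3 conventions):

;; Exactly how strong is Josh?
(query (strength 'josh))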
By offloading reasoning from the LLM to the PLoT, we can obtain a much richer picture of our model’s beliefs about the world (Fig. 3D). While the LLM alone can only respond with textual statements like “Josh is very strong,” Church gives us an entire probability density over Josh’s strength (in expectation, he is a little less than one standard deviation above the average strength of 50). Likewise, we can easily obtain a distribution over the outcomes of a Gabe–Josh match (given Josh’s strong track record, our model finds Gabe’s chances slim, at 23.90%). Critically, Church is doing much of the heavy lifting of inference in the background to produce these posterior distributions.

;; In a huge upset, Gabe managed to best Josh in the fourth round.
(condition (won-against '(gabe) '(josh)))
;; How strong is Gabe?
(query (strength 'gabe))

Posterior means for Gabe’s strength (Fig. 4): no information, µ = 64.19; Josh likely lazy, µ = 60.69; Josh rarely lazy, µ = 70.20.
Figure 4: Reasoning about a pair of hypothetical scenarios with language-code translation. In a world where Josh is often lazy, Gabe’s win is counteracted by a high likelihood that Josh threw the match. Conversely, in a world where Josh is rarely lazy, Gabe’s win is surprising and suggests a high strength value. Rational meaning construction with an LLM appropriately resolves the linguistic meaning of these two scenarios, selecting reasonable probability parameters for the conditioning statements. Meanwhile, probabilistic inference about Gabe’s strength is finely sensitive to the implications of these competing hypotheses.

In addition to providing useful interpretability, reasoning in Church models is sensitive to each new piece of information. Much like human learners, Church models can flexibly update their beliefs when presented with low-probability or unanticipated events. Picking up our tug-of-war saga, consider the plot twist in Fig. 4: “In a huge upset, Gabe managed to best Josh in the fourth round.”
How might this new information shape our interpretation of the match outcome? If Josh is likely to be lazy, then it’s possible that Gabe simply got lucky and wasn’t so strong after all. If, on the other hand, Josh is rarely lazy, we might start to regard Gabe as particularly strong. In Fig. 4, we can observe how Church reasons about these two possibilities, shifting the probability density over Gabe’s strength left if Josh is likely lazy and right if Josh is rarely lazy. Note how, in order to translate a phrase like “Josh has a propensity to slack off,” Codex must choose a particular probability threshold. This choice is arbitrary and, while there is no single “correct” answer, we see that Codex is able to choose valid probability values in [0, 1] that feel appropriate to the wording: a
“propensity to slack off” doesn’t necessarily imply that someone slacks off all the time, while, in contrast, “rarely lazy” offers more certainty. Indeed, across many different contexts, we observe that Codex is able to pick reasonable parameter values that respect both the language and the parametrization of defined distributions. We consider these inferences to represent a form of “amortized pragmatics” (Goodman & Lassiter, 2015), which we will revisit in Section 5.
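For concreteness, these two premises might be rendered roughly as follows (a sketch; the numeric thresholds are illustrative stand-ins for whatever values the LLM actually selects):

;; Josh has a propensity to slack off.
(condition (> (laziness 'josh) 0.7))  ; illustrative threshold
;; Josh is rarely lazy.
(condition (< (laziness 'josh) 0.1))  ; illustrative threshold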
Putting it together: the power of probabilistic reasoning. We conclude this section with a final example that underscores the flexibility of our framework to model complex reasoning from language and foreshadows multiple themes that we will revisit later in the paper. Consider the dialogue in Fig. 5, in which the students and faculty team up to face one another. The interlocutor poses two questions: “Is Gabe stronger than the weakest player on the faculty team?” and “Who would win in a match between the students and the faculty?” As we saw in the prior tug-of-war examples, the answers to both of these questions are expressed as probability distributions derived from simulation of the generative tug-of-war model. Moreover, in both cases, the introduction of new information flips the model’s belief state in a way that aligns with human intuitions. In this way, the PLoT framework is natively capable of defeasible inference—a phenomenon of human reasoning that was of great interest to early AI pioneers of non-monotonic logics (Ginsberg, 1987; McCarthy, 1980).
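Schematically, this defeasibility falls out of the fact that each query is re-evaluated against the full set of accumulated conditions. The following sketch, in the paper's Church-style dialect, assumes the generative tug-of-war model and the faculty-team and student-team definitions from Fig. 5; the commented posteriors are qualitative illustrations, not measured values.

```scheme
;; Before any observations, the match is roughly a coin flip.
(query (won-against student-team faculty-team))
;; => students win with p around 0.5

;; "All of the faculty are pretty strong."
(condition
  (all (map (lambda (player) (> (strength player) 60)) faculty-team)))
(query (won-against student-team faculty-team))
;; => belief flips toward the faculty

;; "Despite their strength, several of the faculty are real slackers."
(condition
  (>= (count (map (lambda (player) (> (laziness player) 0.9))
                  faculty-team))
      3))
(query (won-against student-team faculty-team))
;; => belief flips back toward the students
```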
A key advantage of our framework is that achieving these kinds of defeasible and flexible inferences from natural language reduces to grounding utterances into appropriate condition and query statements. While the observations and questions in Fig. 5 are semantically more complex than those that appeared in the prior examples, and though there are many degrees of freedom involved in the translation problem, we confirm that an appropriately-prompted LLM can produce translations that intuitively capture the meaning of each utterance with respect to the tug-of-war domain. Moreover, as we saw in Fig. 4, Codex is able to amortize certain pragmatic inferences in resolving “pretty strong” to a threshold of strength > 60, “real slackers” to a threshold of laziness > 0.9, and “several of the faculty” to count >= 3. How far can we go with these kinds of amortizations? Throughout Section 3 and Section 4, we will see examples of context-driven amortizations across different domains; and in Section 5, we will regroup to discuss how these different examples of amortization might inform our theories of language understanding and pragmatics.
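For concreteness, the first question in Fig. 5 grounds out in a query that composes an argmin over the faculty team with a pairwise comparison; stronger-than? and argmin are helpers assumed to be defined in the prompt:

```scheme
;; "Is Gabe stronger than the weakest player on the faculty team?"
(query
  (stronger-than? 'gabe (argmin strength faculty-team)))
```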
In this dialogue, we also give a preview of define, a powerful construct in our framework that is discussed in depth in Section 4. Just as people come up with terms like “20th-century pragmatists” or “Meatless Monday” to pick out entire hierarchies of people, things, and events, a core feature of the probabilistic LoT is the ability to define new concepts that can later be referenced symbolically. In the Fig. 5 dialogue, language about team structures defines two new concepts, faculty-team and student-team, that facilitate concise translation of language like, “Is Gabe stronger than the weakest player on the faculty team?” Moreover, while faculty-team is a static list, other defined concepts can ground out in functions that take arguments. In fact, stronger-than?, which is defined in the prompt (Appendix A.1.2), is one such example, illustrating how programming languages are well-suited to capture the infinite productivity of language that arises through structured composition. Through this lens, we can start to imagine how our tug-of-war world model might be expanded to ground many new kinds of language (one such extension is sketched in code after the list below):

• The tug-of-war tournament is organized into three leagues for novices, amateurs, and professionals. In order to be considered a professional, a player must win 20 one-on-one matches against other professionals.
• Players often get increasingly tired over the course of a tournament, though some players have more stamina than others.

• The tournament has an entry fee of $20 per contestant and a grand prize of $10,000 for the winning team.

How can we grow our world models to incorporate new language, or even construct new world models entirely from scratch? In Section 4, we revisit the tug-of-war domain with an eye to precisely these questions.
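As a hedged sketch of how the first of these bullets might be folded into the world model via define, consider the fragment below, written in the paper's Church-style dialect; win-count-against-professionals is a hypothetical helper over an equally hypothetical tournament record, not part of the model defined in the prompt.

```scheme
;; "In order to be considered a professional, a player must win 20
;; one-on-one matches against other professionals."
;; (win-count-against-professionals is a hypothetical helper.)
(define (professional? player)
  (>= (win-count-against-professionals player) 20))

;; Once defined, the new concept composes with existing language,
;; e.g. "Did any of the students make the professional league?"
(query (> (count (map professional? student-team)) 0))
```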
[Figure 5 (see caption below): a dialogue in which each utterance is paired with its translation into the PLoT, e.g. the definitions (define faculty-team '(jacob josh noah vikash)) and (define student-team '(alex gabe lio ben ced)); conditions such as (all (map (lambda (player) (> (strength player) 60)) faculty-team)); and queries such as (won-against student-team faculty-team), each accompanied by a posterior probability plot.]
Figure 5: In this final tug-of-war dialogue, natural language plays three interlaced roles in interfacing with the language of thought. Definitions (purple) introduce new concepts, such as specific teams of players, that can later be referenced symbolically. Observations (blue) translate into condition statements that probabilistically constrain the world state, sometimes amortizing the resolution of linguistic ambiguity (e.g., “pretty strong” or “real slackers”). Finally, questions (green) translate into queries that trigger inference by probabilistic simulation over possible worlds, inference that is both sensitive to and consistent with prior definitions and observations.
Conclusions. As an introductory example, the tug-of-war domain serves as a minimal illustration of the kind of reasoning from language that our framework is concerned with. Our goal here was to build intuition for our general approach: by translating natural language into condition and query statements as inputs to a probabilistic inference engine, we can achieve forms of reasoning from language that are consistent with respect to a mental model of the world. Nonetheless, in scaling this approach beyond the toy domain of tug-of-war, many questions arise. How does probabilistic inference relate to models of relational and deductive reasoning of the sort that classical AI approaches excel at? How do we ground linguistic meaning in the visual and physical world? And how does language understanding inform our actions and interactions with other agents through goal-directed planning? In Section 3, we will progressively expand our scope to touch on each of these questions and show that, in each case, new kinds of language understanding and reasoning can be naturally incorporated into our framework.
# 3 Understanding and reasoning about language with world models

In this section, we illustrate how the general framework we propose in Section 2 can be applied and extended to integrate natural language with core domains of human-like thought. In each, we build on the idea that language conveys observations and questions about uncertain situations, and we construct meanings with respect to a generative world modeling program that supports probabilistic reasoning. In Section 3.1, we show how this approach can be extended to understand language that conveys structured, logical lexical relations. In Section 3.2, we show how generative programs that support perceptual and physical simulation can be used to ground language about scenes in the visual world. Finally, in Section 3.3, we consider language about agents with preferences and goals, and show how we can make meaning from sentences with respect to a generative program that supports planning.

# 3.1 Language for logical and relational reasoning
In the previous section, we examined how translation from natural language into the probabilistic language of thought naturally captures a certain form of reasoning in which uncertainty plays a key role. How does this framework relate to earlier computational theories of reasoning, such as classical AI approaches to logical and relational reasoning (Russell & Norvig, 2021)? Historically, systems like Prolog (Colmerauer, Kanoui, Pasero, & Roussel, 1972; Philippe, 1972) were designed with goals similar to ours here: to allow people to interact directly with computers via natural language (originally French), specifying only the background knowledge and goals for computation without the algorithmic details (Colmerauer & Roussel, 1996). In this section, we demonstrate how the PLoT not only fully supports the style of deductive, logical reasoning characteristic of classical AI, but extends it to support inductive inferences as well. Moreover, we argue that many kinds of real-world reasoning problems that are traditionally modeled using structured logic-based approaches actually require a mix of both symbolic and probabilistic reasoning. In doing so, we aim to illustrate how our approach of translating from natural language to the PLoT fluidly integrates both kinds of reasoning in a way that comes naturally to people, but that has proved elusive for both traditional deductive programming systems and purely statistical language models.
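To preview how the PLoT subsumes this deductive style, relational predicates can be written as ordinary deterministic definitions over a sampled world, so that conditioning on them yields logical entailments while queries about unstated facts stay graded. The sketch below uses the paper's Church-style dialect; parent-of? and all-people are assumed pieces of a generative model over family trees like the one developed next, not constructs taken from the paper's prompts.

```scheme
;; Deductive layer: relational composition, as in a logic program.
(define (grandparent-of? x y)
  (> (count (map (lambda (z)
                   (and (parent-of? x z) (parent-of? z y)))
                 all-people))
     0))

;; Conditioning on the deduced relation constrains possible worlds,
(condition (grandparent-of? 'charlie 'dana))
;; while queries about unstated facts remain graded and inductive.
(query (parent-of? 'avery 'dana))
```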
Language about kinship relations. Suppose you are again with your friend from Section 2.2, who is telling you about a part of their extended family. “Avery has a sister named Blake, and their father is named Charlie,” your friend says. Immediately, you start to sketch a picture in your mind of this family, which you can update on the fly as you get more information: “Charlie is the grandfather of Dana.” At this point, you can infer that one of Charlie’s kids is also Dana’s parent, but which one? In the absence of additional information, it’s a toss-up between Avery and Blake, with some outside chance that there could be another, unmentioned sibling who is Dana’s parent. Hearing that “Blake has two kids” might initially shift your beliefs towards Blake. However, upon learning that “Dana is an only child,” you’d have to rule Blake out entirely! This kind of relational reasoning, which freely intermixes deductive and inductive inferences, comes quite naturally to people. How do we make such rich inferences from a relatively sparse sequence of words?
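The belief updates in this vignette can be written out directly as a sequence of condition and query statements. The sketch below uses the paper's Church-style dialect; the relations (sister-of?, father-of?, grandfather-of?, parent-of?, number-of-children, only-child?) are assumed helpers over a generative model of family trees, and the commented posteriors are qualitative rather than measured.

```scheme
;; "Avery has a sister named Blake, and their father is named Charlie."
(condition (sister-of? 'blake 'avery))
(condition (and (father-of? 'charlie 'avery)
                (father-of? 'charlie 'blake)))

;; "Charlie is the grandfather of Dana."
(condition (grandfather-of? 'charlie 'dana))

;; Which of Charlie's children is Dana's parent? Initially a toss-up.
(query (parent-of? 'blake 'dana))  ;; roughly 0.5, with some chance
                                   ;; of an unmentioned third sibling

;; "Blake has two kids." -- shifts belief toward Blake...
(condition (= (number-of-children 'blake) 2))

;; "Dana is an only child." -- ...but now Blake is ruled out entirely.
(condition (only-child? 'dana))
(query (parent-of? 'blake 'dana))  ;; drops to ~0
```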
In this section, our domain of interest will be kinship: relationships between people in a family. The kinship domain provides fertile ground for the study of logical reasoning for several reasons. First, during development, one of the first domains where we learn about logical relations is in describing families (Elkind, 1962; Piaget, 1951). Language has evolved to describe family structures in highly economical terms that naturally express composition (e.g., my mother’s father is my grandfather) and symmetry (e.g., if Avery is my cousin, then I am Avery’s cousin; together, we are cousins). Nevertheless, while certain kinship references are relatively straightforward (e.g., “Blake’s mother”), others involve ambiguity (e.g., “Blake’s uncle” could refer to the brother of either of Blake’s parents; or even, perhaps, a close older male who is not related by blood or marriage). Finally, kinship reasoning freely intermixes deductive and inductive inferences:
for instance, “Charlie has a grandson named Dana” deductively implies the existence of a child of Charlie who is also a parent of Dana; and it inductively suggests that Charlie was plausibly partnered at some point, so that Dana may have another grandparent in the picture. Traditional logical accounts of reasoning in this domain capture the deductive inferences, but not the inductive inferences, in cases like this. People, in contrast, routinely make statements such as “This is Kendall, the partner of Avery’s niece” with the expectation that others will draw roughly the same inferences they would in building a mental model of this family: Avery has a brother or sister, that sibling has a female child, and Kendall is that person’s partner. In sum, the kinship domain offers a rich set of relations and possible inferences, and comes equipped with an extensive natural-language vocabulary, making it an ideal playground to explore our translation hypothesis.
[Figure 6 graphic: (i) A generative domain theory of family trees, in which each person node carries a person-id, a name drawn via (random-choice '(avery blake charlie ...)), a gender drawn via (random-choice '(male female)), and parent ids; partnering is decided by (flip 0.5) and the number of children is drawn from (geometric 0.5). (ii) Translations into LoT predicates: “Charlie is Blake’s father.” / “Blake’s dad is named Charlie.” → (father-of? 'charlie 'blake); “Blake has three children.” / “Blake has 3 kids.” → (= (length (children-of 'blake)) 3); “Blake’s brother has a son named Dana.” / “Blake has a brother whose son is named Dana.” → (exists (lambda (x) (and (brother-of? x 'blake) (son-of? 'dana x)))).]
Figure 6: Illustration of a simple kinship domain theory and conceptual system implemented in Church. (i) The generative model specifies a process by which individuals form couples and have children to form family trees. Each tree represents a “possible world” in which certain relationships hold. (ii) These relationships are expressed using predicates in a conceptual system that supports quantificational logic and composition, giving rise to an expressive domain semantics that aligns well with natural language.
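To make the translation hypothesis concrete before diving into the model, here is a hedged sketch of how “Charlie has a grandson named Dana” might be rendered as a single Church condition. The composition grandson-of?, and the helpers gender-of and parent-of? it relies on, are our own illustrative names in the style of Fig. 6, not necessarily the paper’s exact definitions:

```scheme
;; Illustrative sketch (assumed helper names): a grandson is a male
;; person with a parent who is in turn a child of the grandparent.
(define (grandson-of? x y)
  (and (equal? (gender-of x) 'male)
       (exists (lambda (z)
                 (and (parent-of? z x)      ; some z is Dana's parent
                      (parent-of? y z)))))) ; and Charlie is z's parent

;; Conditioning on this statement deductively forces an intermediate
;; child z of Charlie, while the generative prior inductively supplies
;; plausible extras, e.g. a partner for Charlie (another grandparent).
(condition (grandson-of? 'dana 'charlie))
```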
World models of kinship as probabilistic generative programs. Owing to the richness of the domain, recent years have seen steady interest in computational cognitive models of various aspects of kinship, ranging from the development and acquisition of kinship terms across cultures (Mitchell & Jordan, 2021; Mollica & Piantadosi, 2022), to tradeoffs in communicative efficiency in natural (Jones, 2010; Kemp & Regier, 2012) and artificial (K. Smith, Frank, Rolando, Kirby, & Loy, 2020) kinship systems, to probabilistic inferences about kinship relations from sparse evidence (Katz, Goodman, Kersting, Kemp, & Tenenbaum, 2008). In this work, our primary interest is in how people represent and reason about kinship relations conditioned on language. Following Katz et al. (2008), we construct an intuitive domain theory of kinship using a probabilistic generative model and a small number of rules that form a conceptual system.
As in Section 2.2, our kinship domain theory is expressed as a generative model in Church. In the Bayesian tug-of-war, the generative model consisted of random variables over continuous quantities like strength and laziness. In contrast, the generative model in this section specifies a series of discrete random choices that describe events in a family’s genealogy: people are born, find partners, have children, and the process repeats. All of these events involve random choices that shape the makeup of the family tree. Fig. 6 (i) shows a schematic of the kinship generative domain theory. When a person is born, they are assigned a unique person-id, a name sampled from a list of gender-neutral names, and a gender sampled from {male, female}. Next, with fixed probability p = 0.5, the person partners with a new individual from outside the family. Finally, if partnered, the couple has n ∈ {0, 1, 2, 3} children, with n drawn from a geometric distribution (p = 0.5) truncated at 3. This process repeats recursively until a full family tree is generated. To support efficient inference using Church’s generic sampling algorithms, we cap the trees at 3 generations and limit each couple to 3 children. Further implementation details of the generative model can be found in Appendix A.2.1.
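For concreteness, the following is a minimal sketch of this generative process in Church-style code. It uses standard Church primitives (flip, uniform-draw, gensym) and defines geometric recursively; the helper names make-person and grow-family are our own, and the paper’s actual implementation (Appendix A.2.1) differs in its details:

```scheme
;; Sketch of the family-tree generator (assumed helper names).
(define names '(avery blake charlie dana kendall))

(define (geometric p)                      ; number of children
  (if (flip p) 0 (+ 1 (geometric p))))

(define (make-person parent-1-id parent-2-id)
  (list (gensym)                           ; unique person-id
        (uniform-draw names)               ; gender-neutral name
        (uniform-draw '(male female))      ; gender
        parent-1-id
        parent-2-id))

(define (grow-family person generation)
  (if (or (>= generation 3)                ; cap trees at 3 generations
          (not (flip 0.5)))                ; partner with probability 0.5
      (list person)
      (let* ((partner (make-person #f #f))
             (n (min (geometric 0.5) 3))   ; at most 3 children per couple
             (kids (repeat n (lambda ()
                               (make-person (first person)
                                            (first partner))))))
        (append (list person partner)
                (apply append
                       (map (lambda (kid) (grow-family kid (+ generation 1)))
                            kids))))))

(define family-tree (grow-family (make-person #f #f) 1))
```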
As with any computational model of a social phenomenon, this toy kinship model is reductive of many important nuances of identities and relationships. For instance, while the model includes both same- and opposite-gender couples, these couples never split, so step-relations aren’t well captured. While these kinds of compromises are designed to keep inference tractable, still others stem from limitations of the language itself. For example, many colloquial English kinship terms are gender-binary (e.g., mother, grandfather, daughter), so instantiating them as truth-conditional predicates coerces the generative model towards traditional gender assignments. Similarly, many English names carry strong gender associations, which NLP systems trained on large linguistic corpora pick up on (Caliskan, Bryson, & Narayanan, 2017; Grand, Blank, Pereira, & Fedorenko, 2022). In our examples, we intentionally select gender-neutral names (e.g., Avery, Blake, Charlie, Dana) to emphasize that these naming-based gender inferences are deliberately not part of the reasoning task.
To summarize, language both reflects and constrains our intuitive theories of complex domains like kinship (Sapir, 1929; Whorf, 1956; cf. Gentner & Goldin-Meadow, 2003 for a review of contemporary perspectives on linguistic relativity), and these tradeoffs manifest concretely in the toy model presented in this section. Fortunately, where this initial “off-the-shelf” kinship model lacks social and cultural nuance, our framework offers opportunities to extend and modify these areas. In Section 4.1, we look at ways of growing our kinship model to include concepts from non-English-speaking cultures and more inclusive concepts of gender.
Relational meanings as program statements. Given a generative model of family trees, we can define a rich conceptual system to make statements about relationships between individuals. Our conceptual system consists primarily of a dozen-odd derived predicates that are binary operators over pairs of names; e.g., (father-of? 'charlie 'blake) is true iff Charlie is the father of Blake in a particular tree instance.² These derived predicates build on a small number of low-level accessor functions that operate directly on nodes in the tree data structure. For instance, (children-of 'blake) returns a list of names corresponding to the children of Blake in the tree. Finally, our conceptual system includes several higher-order functions, like map-tree, filter-tree, and exists, that take custom predicates as inputs and return a boolean. These functions facilitate the expression of a rich compositional semantics by allowing for compound predicates containing conjunctions and disjunctions. Fig. 6 (ii) illustrates several examples of the kinds of statements that can be made using combinations of derived predicates, low-level accessors, and higher-order functions. The full set of definitions making up the conceptual system is given in Appendix A.2.3.
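The following sketch illustrates how such derived predicates can be built up from low-level accessors; here parents-of and gender-of stand in for the paper’s tree accessors, and the exact definitions (including those of the higher-order functions) live in Appendix A.2.3:

```scheme
;; Illustrative derived predicates over pairs of names (assumed
;; accessors: parents-of, gender-of).
(define (parent-of? x y)
  (if (member x (parents-of y)) #t #f))

(define (father-of? x y)
  (and (parent-of? x y)
       (equal? (gender-of x) 'male)))

(define (sibling-of? x y)
  (and (not (equal? x y))
       (equal? (parents-of x) (parents-of y))))

(define (brother-of? x y)
  (and (sibling-of? x y)
       (equal? (gender-of x) 'male)))

;; Higher-order functions then support compound statements, e.g.
;; "Blake has a brother whose son is named Dana":
;; (exists (lambda (x) (and (brother-of? x 'blake) (son-of? 'dana x))))
```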
Translating from language to program expressions. As in Section 2.2, we use a handful of paired natural language / code examples (Appendix A.2.4) to induce a meaning function via Codex. Because the prompt also includes the generative model source code and the full set of derived predicates, the LLM is able to resolve statements like “Blake has two kids” to the appropriate function (in this case, children-of) using the available definitions. Moreover, we observe zero-shot generalization to linguistic constructs that are not explicitly defined in the prompt, such as the concept of an “only child” (Fig. 7). Putting it together: Reasoning from language about kinship relations. What is the purpose of all of this domain-specific machinery that we have now built up? The answer is two-fold. First, the generative domain theory compactly captures the key dynamics of our domain, allowing us to reason about a combinatorially vast space of possible family trees. Second, the conceptual system serves as a higher-level program interface, defining the relationships that we would like to be able to talk about. The large language model then bridges the domain model with natural language, providing a flexible and context-aware way to ground language into conditioning and query statements.
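As a concrete illustration of the zero-shot generalization noted above, and of how utterances ground into conditioning statements, here is one hypothetical rendering (our own, not necessarily the translation Codex produced) of “Dana is an only child”, composed from existing predicates:

```scheme
;; Hedged sketch: "Dana is an only child" as a condition, composed
;; from parent-of? and children-of in the conceptual system.
(condition
 (exists (lambda (x)
           (and (parent-of? x 'dana)
                (= (length (children-of x)) 1)))))
```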
In Fig. 7, we can see how these components come together to facilitate naturalistic reasoning from language about kinship relations. Each natural language utterance translates to a condition statement in Church that serves as a constraint on family trees. With each successive condition, our uncertainty decreases and our picture of the family tree in question starts to crystallize. Samples from the conditioned domain theory model therefore serve as hypotheses about possible worlds that are consistent with the information provided through language. Furthermore, the distribution over conditioned samples provides a principled way to reason about queries, such as “Which of Charlie’s kids is the parent of Dana?”

² Note that because our model includes same-gender couples, Blake may have one father, two fathers, or no fathers. Blake also may not exist in the tree in the first place! Crucially, these aspects of the generative model don’t matter to the derived predicate, which simply evaluates whether the relationship in question holds somewhere in the tree.
[Figure 7 graphic: (A) Language-to-code translation; (B) family trees sampled from the conditioned kinship domain theory. Recoverable translations: “Avery has a sister named Blake.” → (condition (sister-of? 'blake 'avery)); “Avery and Blake’s father is named Charlie.” → (condition (and (father-of? 'charlie 'avery) (father-of? 'charlie 'blake))); “Charlie is Dana’s grandfather.” → (condition (grandfather-of? 'charlie 'dana)); “Which of Charlie’s kids is Dana’s parent?” → (query (filter-tree (lambda (x) (and (child-of? x 'charlie) (parent-of? x 'dana))))); “Blake has two kids.” → (condition (= (length (children-of 'blake)) 2)). Probabilistic inference over sampled worlds yields a posterior over candidate parents (avery, blake).]
Figure 7: Kinship reasoning from natural language, backed by a domain theory model in the probabilistic language of thought. (A) Natural language utterances about a particular family are readily translated into Church conditioning statements by an LLM. (B) Samples from the conditioned generative domain model are possible family trees that adhere to the growing set of constraints (conditioning statements are cumulative). Reasoning about unknown kinship relations is accomplished through posterior inference against a translated query. With each new piece of information, the model’s beliefs reflect both deductive and inductive inferences.
queries, such as Which of Charlie's kids is the parent of Dana? Posterior inference (in this case, accomplished via rejection sampling) faithfully reflects various possible configurations and their relative probabilities. For instance, in Fig. 7, after conditioning on Blake has two kids, the model puts > 80% probability on Blake being Dana's parent, but also factors in low-probability possible worlds where Avery or a third unnamed sibling is Dana's parent. Yet, despite this confident answer, the model can correctly infer that this same probability drops to 0% in the face of the contradictory information that Dana is an only child. Note that the distributional parser plays a crucial role in this inference by providing a correct interpretation of this utterance. Meanwhile, the Church inference engine does the heavy lifting of representing possible worlds and reasoning about them in a principled manner.
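To make this inference pattern concrete, here is a minimal sketch of how the conditions and query could be expressed in Church; kids, siblings, and parent-of are hypothetical stand-ins for the kinship model's actual definitions:

;; A minimal sketch over the kinship world model; kids, siblings, and
;; parent-of are hypothetical helper names, not the paper's definitions.
(condition (= (length (kids 'blake)) 2))      ;; "Blake has two kids."
(condition (= (length (siblings 'dana)) 0))   ;; "Dana is an only child."
;; "Which of Charlie's kids is the parent of Dana?" Under the first
;; condition alone, (parent-of 'dana) is 'blake with > 80% probability;
;; adding the second condition drives that probability to 0.
(query (parent-of 'dana))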
Future directions: Logical and relational reasoning with language models. Significant recent attention has been directed towards studying reasoning in LLMs. Typical approaches involve engineering prompts so as to induce structured generations in text space that approximate “step-by-step” reasoning (Kojima, Gu, Reid, Matsuo, & Iwasawa, 2022; Nye et al., 2021; Wei et al., 2022). Nevertheless, current evaluations find that even with such methods, LLMs are prone to producing unfaithful reasoning chains in which conclusions do not follow logically from the premises (Golovneva et al., 2022; H. Liu et al., 2023; Lyu et al., 2023; Ribeiro et al., 2023). These issues of consistency have motivated several systems that connect LLMs to external symbolic inference engines that perform deductive inference using Prolog-style backwards chaining (Dalvi, Tafjord, & Clark, 2022; Pan, Albalak, Wang, & Wang, 2023;
Weir & Van Durme, 2022). We see this work as closely related in spirit to our approach, but fundamentally limited to deductive reasoning. (See Appendix A.2.5 for a technical explanation of these limitations.) Of course, we make no claim that Church or its derivatives are the only languages that can capture human-like relational reasoning. For instance, ProbLog (De Raedt, Kimmig, & Toivonen, 2007; Dries, Kimmig, Davis, Belle, & De Raedt, 2017; Suster et al., 2021), a probabilistic extension of Prolog in which deduction rules can be annotated with probabilities, offers a compelling alternative. Indeed, interfacing ProbLog with natural language via an LLM-backed meaning function would constitute a promising instantiation of our rational meaning construction framework. Our core assertion here, and in the rest of this paper, is that representing probabilistic, generative models over possible worlds is critical to reasoning coherently about structured domains.
# 3.2 Language for visual and physical reasoning

Sensory detail and physical knowledge pervade our everyday language. We can describe and imagine highly visual objects and scenes—a few red mugs on a tabletop, a tall stack of blue plates, a heavy box, and objects that move, bounce, and collide. We flexibly make predictions about physical events (what will happen if a kid crashes into that table stacked with plates?), or infer the underlying physical properties of the world (how heavy is that box that no one can lift?), based on situations described entirely in words. As with the other domains we have considered thus far, understanding this language requires integrating over the uncertainty inherent to language, like the possible heights picked out by tall and motions picked out by a bounce, as well as the uncertainty inherent to how we imagine the physical world itself.
How can we so flexibly relate language to our more general perceptual and physical reasoning? In this section, we illustrate how our overarching framework for language understanding can be modularly extended to capture both of these capabilities. We begin with perception, extending our framework to integrate a graphics rendering engine to relate linguistic meanings to visual knowledge (Section 3.2.1). We then build on this approach to integrate a physics simulation engine to further interface between language and intuitive, probabilistic physical reasoning (Section 3.2.2). By incorporating these external engines, these sections blueprint how computational models that ground linguistic meaning in a PLoT can interface with other cognitive modules for perception and physical reasoning.

# 3.2.1 Language about visual scenes

To illustrate the structured relationship between language and visual knowledge, imagine how we might talk about a very simple domain of scenes (Fig. 8, top)—tables on which someone was placing some household objects (mugs, cans, or bowls) that come in different colors (red, green, yellow, or blue).
Given descriptions of particular scenes (Fig. 8, bottom), most of us can easily picture tabletop scenes that fit these descriptions, updating what we imagine to incorporate arbitrary new information, like that everything on the table is blue, and also that there are no mugs, and lots of bowls. We can do this despite uncertainty in the language itself—a phrase like lots of bowls leaves open just how many bowls there are, though we have general intuitions that there should be more than one or even two bowls on our imagined table. We can also draw a host of fundamentally probabilistic inferences to answer many arbitrary questions about the scenes we imagine, like how many green mugs there might be, or whether there are more red objects or green ones. The set of scenes we imagine, and the way we answer these questions, is structured and compositional at the level of individual objects and their properties (a mug, a green mug, a bunch of green mugs), and over successive sentences (like there are many red objects on the table, there are just a few green mugs,
and there are also at least three green bowls.) The way we talk about scenes like these suggests the level of abstraction with which we mentally represent them. We describe and reason over object categories, lexical properties, numeric quantities, and set relations, and we can easily visualize scenes from these abstract, linguistic descriptions. In contrast, recent evaluations of current multimodal models—large language models fine-tuned on corpora of images (Ramesh, Dhariwal, Nichol, Chu, & Chen, 2022; Ramesh et al., 2021)—suggest that even large models struggle with just these kinds of simple but abstract relational concepts in language, such as producing images consistent with quantifiers like more red things than green things, or relations like a plate on top of a cup (Conwell & Ullman, 2022; Marcus, Davis, & Aaronson, 2022; Radford et al., 2019).
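In our framework, by contrast, such a relational utterance has a direct rendering as a condition on a generative scene prior. As a minimal sketch, reusing the (objects 'scene) accessor and filter-color helper assumed by the Fig. 8 model below, more red things than green things becomes:

;; "More red things than green things," as a condition on the scene
;; prior (objects and filter-color are the Fig. 8 model's helpers).
(condition (> (length ((filter-color 'red) (objects 'scene)))
              (length ((filter-color 'green) (objects 'scene)))))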
In this section, we propose that the basic motif outlined in our framework also suggests an alternate approach for relating language and visual reasoning. Our architecture draws on the traditions of viewing perception as “analysis by synthesis” or “vision as inverse graphics” from cognitive science and classic computer vision (Battaglia et al., 2013; Gothoskar et al., 2021; Kulkarni, Kohli, Tenenbaum, & Mansinghka, 2015; Lee & Mumford, 2003; J. Wu, Yildirim, Lim, Freeman, & Tenenbaum, 2015a; Yuille & Kersten, 2006). This approach frames visual imagination and visual scene understanding as two sides of the same coin: it models visualization as a mental graphics rendering engine run forward over internal scene representations, and perception as probabilistic inference that inverts the renderer to recover the physical content of scenes from vision. In this section, we show how this general approach to modeling human perception can integrate cleanly into the framework we have sketched so far, augmenting the probabilistic language of thought with an interface to a rendering engine so it can serve as a general, flexible intermediary for relating language, world models, and visual scenes.
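As a minimal sketch of this interface, assuming a hypothetical render primitive that invokes the graphics engine on a sampled scene state, an observed-image input, and (for simplicity) exact image matching in place of a graded likelihood, imagination and perception become forward sampling and conditioning through the same program:

;; Sketch only: render and observed-image are assumed names.
;; Visual imagination: run the scene prior forward and render it.
(define scene (generate-objects-in-scene 'scene))
(define imagined-image (render scene))
;; Perception as inverse graphics: condition the same prior on the
;; rendering matching an observed image, then query the scene contents.
(condition (equal? (render scene) observed-image))
(query scene)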
[Figure 8, top: a "Generative world model of scenes" (the Church program excerpted below), three "Sampled scene graphs" each specifying every object's shape (mug, can, or bowl) and color (red, blue, green, or yellow), and a "Graphics rendering engine" that renders each sampled scene graph. Bottom: "Reasoning about scenes from natural language," two dialogues whose utterances are translated into Church conditions and queries.]
[Figure 8, bottom (detail): utterances such as Everything on the table is blue; There's only a few mugs and bowls, though at least one of each; and And most of the objects in this scene are green are translated into condition statements over (objects 'scene), for example bounding (length ((filter-shape 'mug) (objects 'scene))) above by a small count and below by zero, or requiring (>= (length ((filter-color 'green) (objects 'scene))) (/ (length (objects 'scene)) 2)); the question How many green mugs do you think there are? becomes a query, answered by posterior inference over possible worlds.]
(define choose-shape
  (mem (lambda (obj-id) (pair 'shape (uniform '(mug can bowl))))))
(define choose-color
  (mem (lambda (obj-id) (pair 'color (uniform '(red blue green yellow))))))
(define generate-object
  (mem (lambda (obj-id)
    (list (pair 'object-id obj-id) (choose-shape obj-id) (choose-color obj-id)))))
(define choose-num-objects ...)
(define generate-objects-in-scene ...)

Figure 8: Human language understanding draws on our structured knowledge of the visual world. (Top) A probabilistic generative model describes a prior over tabletop scenes with varying configurations of colored mugs, cans, and bowls. Sampled world states describe a scene based on symbolic object concepts. Interfacing this world model with a graphics rendering engine models visual imagination of a given scene. (Bottom) Language about particular visual scenes can now be translated as before into conditions (blue) and queries (green) on the distribution over scenes, which can be rendered into visual scenes that reflect language.
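For concreteness, the elided definitions could be filled in along the following lines. This is one illustrative sketch rather than the paper's exact implementation; the prior over object counts and the filter-shape and filter-color helpers (used in the figure's conditions and queries) are our assumptions:

;; Illustrative sketch of the elided definitions (our assumptions,
;; not the paper's exact implementation).
(define choose-num-objects
  (mem (lambda (scene-id) (uniform '(1 2 3 4 5 6 7 8)))))
(define (object-ids n) (if (= n 0) '() (cons n (object-ids (- n 1)))))
(define generate-objects-in-scene
  (mem (lambda (scene-id)
    (map generate-object (object-ids (choose-num-objects scene-id))))))
;; Helpers assumed by the figure's conditions and queries: select the
;; objects in a scene whose attribute matches a target value.
(define (lookup obj key) (cdr (assoc key obj)))
(define (filter-shape shape)
  (lambda (objs) (filter (lambda (obj) (equal? (lookup obj 'shape) shape)) objs)))
(define (filter-color color)
  (lambda (objs) (filter (lambda (obj) (equal? (lookup obj 'color) color)) objs)))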
Integrating the probabilistic generative model over scenes with a rendering engine. To model the domain of tabletop scenes, we begin with a probabilistic generative model like those in the preceding sections. The generative program excerpted at the top of Fig. 8 (purple) describes a prior over the number of objects in a given scene, and the shape and color of each object. This program is similar in many ways to the kinship model in Section 3.1, which generates possible family trees as a collection of entities and stochastic choices about each one. Similarly, the generative model in this domain generates a particular scene by making stochastic choices over the number of objects in the scene (choose-num-objects), then generates each individual object (generate-object) based on stochastic choices over its possible properties (e.g., choose-shape and choose-color). This basic structure can be augmented in many ways to model more complex scenes, with more variation over possible properties like size or material, hierarchical classes of object categories like dishware, cups, and mugs, or hierarchical object structures like a stack of plates.
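One illustrative way to add such extensions, a size attribute and a hierarchical dishware category, using hypothetical names and value sets under the same conventions as the Fig. 8 program:

;; Illustrative extensions (hypothetical names and value sets).
(define choose-size
  (mem (lambda (obj-id) (pair 'size (uniform '(small medium large))))))
;; A hierarchical object category: dishware as a superclass of base shapes.
(define (dishware? obj)
  (member (cdr (assoc 'shape obj)) '(mug bowl)))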
Each sample from the generative model in Fig. 8 is a structured, symbolic representation of a particular scene state, represented in our particular implementation as a list of object dictionaries that map between attribute kinds (like object-shape) and values (like 'mug). These scene states are very simple instances of the many symbolic scene representations used throughout computer graphics and computational models of human scene understanding: data structures that model the abstract and semantic contents of scenes (Armeni et al., 2019; Bar-Zeev, 2003; Clark, 1976; Gothoskar et al., 2021; J. Johnson et al., 2017, 2015; Zinberg, Cusumano-Towner, & Mansinghka, 2019).
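Under the sketch above, one sampled scene state might look like the following (an illustrative example, not a figure from the paper):

```python
# One possible sample from generate_scene(): a symbolic scene state
# mapping attribute kinds to values, as described in the text.
scene_state = [
    {"object-shape": "mug",  "object-color": "green"},
    {"object-shape": "can",  "object-color": "red"},
    {"object-shape": "bowl", "object-color": "yellow"},
]
```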
We can now extend this probabilistic generative program so that it expresses a distribution not just over possible scene states, but over the visual percepts of each scene. We do so by extending our base probabilistic programming language with a new function, render, that takes scene graphs as inputs and calls out to Blender, a 3D computer graphics engine. Our render implementation builds on the basic capabilities of any programmable graphics engine: it defines how symbolic object entities with the properties defined in our model (shapes like mug) are rendered and colored into 3D CAD shapes, and it can forward-render any sampled scene graph into a visual scene with the requisite object types, colors, and overall structure (Fig. 8, top, Rendered possible worlds). Collectively, this generative model and rendering interface unites the underlying belief distribution over possible scene states with how each of these scenes might look.
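A render interface of this kind might look like the following sketch using Blender's Python API (bpy). The shape-to-primitive mapping, sizes, colors, and object layout are all illustrative assumptions; the paper's actual implementation is not shown here.

```python
# A minimal sketch of a render() interface over symbolic scene states,
# assuming Blender's Python API (bpy). Geometry and layout are placeholders.
import bpy

RGBA = {"red": (1, 0, 0, 1), "green": (0, 1, 0, 1),
        "blue": (0, 0, 1, 1), "yellow": (1, 1, 0, 1)}

def add_object(shape, color, location):
    # Map each symbolic shape to a simple mesh primitive (an assumption).
    if shape in ("mug", "can"):
        bpy.ops.mesh.primitive_cylinder_add(radius=0.4, depth=0.8, location=location)
    elif shape == "bowl":
        bpy.ops.mesh.primitive_uv_sphere_add(radius=0.5, location=location)
    else:  # plates and other flat shapes
        bpy.ops.mesh.primitive_cylinder_add(radius=0.6, depth=0.1, location=location)
    obj = bpy.context.active_object
    mat = bpy.data.materials.new(name=f"{color}-{shape}")
    mat.diffuse_color = RGBA[color]
    obj.data.materials.append(mat)

def render(scene_state, filepath="scene.png"):
    # Forward-render a sampled scene state into an image.
    for i, o in enumerate(scene_state):
        add_object(o["object-shape"], o["object-color"], location=(i * 1.5, 0, 0))
    bpy.context.scene.render.filepath = filepath
    bpy.ops.render.render(write_still=True)
```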
More broadly, this implementation is intended as a simple, illustrative example of how our framework could be extended to model the many complex relationships between the objects we talk about in a scene and how those objects look. Recent work in scene understanding, for instance, models variation in lighting, viewer angle and distance from the scene, stereo depth sensing, and sources of noise in perception, such as a viewer who only looks briefly at an image, or an imperfect, non-idealized visual sensor (e.g., Deng, Zhi, Lee, & Ahn, 2021; Gothoskar et al., 2021; Hughes, Chang, & Carlone, 2022; Kulkarni et al., 2015; V. K. Mansinghka et al., 2013; Zinberg et al., 2019).

Grounded meanings as program expressions. By augmenting probabilistic generative models with a graphics rendering engine, we have now extended our framework to allow language that describes and asks questions about scenes to interface with visual depictions of those scenes.
In our simple tabletop scenes domain, for instance, we can ground linguistic descriptions of the number, kinds, and colors of objects in a scene (Fig. 8, blue), like there’s at least two green cans or a few mugs and bowls, into probabilistic program condition statements on scene states in the generative model. As in the preceding sections, the translations shown in Fig. 8 are quite straightforward and interpretable, because the generative model we have defined expresses compositional predicates on object properties at the grain of language. Constraints on objects of specific types, like green cans, are translated into a sequence of conditions on the relevant properties of object entities, successively filtering to the set of objects that are green (filter-color green) and then further filtering to the set of objects that are also cans (filter-shape 'can). Sampling scene states from the conditioned generative model, and rendering these scenes into images with the render interface, then produces visual depictions that are consistent with any sequence of observations made in language. This approach disentangles reasoning, as probabilistic inference over a structured generative model, from the perceptual properties of scenes.
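Building on the generate_scene sketch above, conditioning can be illustrated with simple rejection sampling; filter_color and filter_shape below are Python analogues of the filter-color and filter-shape predicates in Fig. 8, and are assumptions about their behavior, not the paper's code.

```python
# A sketch of grounding "there's at least two green cans" as a condition
# on scene states, implemented here via rejection sampling.
def filter_color(color, objects):
    return [o for o in objects if o["object-color"] == color]

def filter_shape(shape, objects):
    return [o for o in objects if o["object-shape"] == shape]

def at_least_two_green_cans(scene):
    return len(filter_shape("can", filter_color("green", scene))) >= 2

def sample_conditioned(predicate, num_samples=10000):
    # Keep only sampled scene states consistent with the observation.
    samples = (generate_scene() for _ in range(num_samples))
    return [s for s in samples if predicate(s)]

posterior_scenes = sample_conditioned(at_least_two_green_cans)
# Each surviving scene state could then be rendered with render() above.
```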
As before, we can translate questions like How many green mugs do you think there are? into probabilistic query expressions. Our approach reasons about these questions as inferences over the distribution of possible scenes, adapting beliefs about the scenes to condition systematically and coherently on sequences of new statements made in language.
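Continuing the sketch, such a query can be answered as a posterior histogram over counts, computed from the conditioned samples above:

```python
# A sketch of the query "How many green mugs do you think there are?"
# as an inference over the conditioned scene distribution.
from collections import Counter

def count_green_mugs(scene):
    return len(filter_shape("mug", filter_color("green", scene)))

posterior_counts = Counter(count_green_mugs(s) for s in posterior_scenes)
total = sum(posterior_counts.values())
for count, freq in sorted(posterior_counts.items()):
    print(f"P(num green mugs = {count} | observations) ~ {freq / total:.2f}")
```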
Translating from language to program expressions. As in the previous sections, we can now translate actual descriptions and questions about scenes by using a large language-to-code model conditioned on the generative domain model and a few example pairs of language and code (see Appendix A.3.1 for the full prompt we provide to condition the language-program model). The translations in Fig. 8 and Fig. 9 generally showcase the local generalizability and flexibility we illustrate in the other sections: the translation is robust to conjunction and syntactic variation, differing numbers of object predicates (yellow object, red mug), compositions of object predicates (e.g., a few mugs and bowls), negations over set quantity (there aren’t any), and comparatives over object sets (more red mugs than green cans).
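The general shape of such a few-shot prompt might look like the following sketch; the example sentences, the condition expressions, and the formatting here are hypothetical stand-ins for the actual prompt in Appendix A.3.1.

```python
# An illustrative (hypothetical) few-shot prompt pairing sentences with
# condition expressions over the generative scene model.
PROMPT = """\
;; Translate each sentence into a condition on the generative scene model.

;; Sentence: There are exactly three mugs.
(condition (= (length (filter-shape 'mug objects)) 3))

;; Sentence: There's at least two green cans.
(condition (>= (length (filter-shape 'can (filter-color 'green objects))) 2))

;; Sentence: {new_sentence}
"""

def make_prompt(new_sentence):
    # The completion from the language-to-code model would be parsed as a
    # new condition statement on the generative model.
    return PROMPT.format(new_sentence=new_sentence)
```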