From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought
Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum
arXiv:2306.12672 [cs.CL] (http://arxiv.org/pdf/2306.12672). Published 2023-06-22, updated 2023-06-23.

Abstract: How does language inform our downstream thinking? In particular, how do humans make meaning from language, and how can we leverage a theory of linguistic meaning to build machines that think in more human-like ways? In this paper, we propose rational meaning construction, a computational framework for language-informed thinking that combines neural language models with probabilistic models for rational inference. We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought (PLoT), a general-purpose symbolic substrate for generative world modeling. Our architecture integrates two computational tools that have not previously come together: we model thinking with probabilistic programs, an expressive representation for commonsense reasoning; and we model meaning construction with large language models (LLMs), which support broad-coverage translation from natural language utterances to code expressions in a probabilistic programming language. We illustrate our framework through examples covering four core domains from cognitive science: probabilistic reasoning, logical and relational reasoning, visual and physical reasoning, and social reasoning. In each, we show that LLMs can generate context-sensitive translations that capture pragmatically-appropriate linguistic meanings, while Bayesian inference with the generated programs supports coherent and robust commonsense reasoning. We extend our framework to integrate cognitively-motivated symbolic modules (physics simulators, graphics engines, and planning algorithms) to provide a unified commonsense thinking interface from language. Finally, we explore how language can drive the construction of world models themselves. We hope this work will provide a roadmap towards cognitive models and AI systems that synthesize the insights of both modern and classical computational perspectives.

3 WORLD MODELS
3.2 Language for visual and physical reasoning
Footnotes:
[3] https://www.blender.org/
[4] In our implementation, which can be found in Appendix A.3.1, we derive named color predicates like green over the base generative model, which samples color properties over a continuous space of RGB values. This implementation suggests a more general point: any number of lexical concepts, such as many more arbitrary color names over the underlying color space, can be derived as symbolic predicates over a richer continuous space reflected in the generative model. A similar approach could be taken for other lexical terms that carve up continuous spaces, such as prepositions like left, center, or near over geometric space.
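As a concrete illustration of footnote 4, here is a minimal sketch, in the same Church-style notation as the paper's figures, of how a named color predicate might be derived over the continuous RGB space. The helpers color-of and x-position, and the particular thresholds, are our own illustrative assumptions, not the implementation in Appendix A.3.1:

(define (green? object)
  (let ((rgb (color-of object)))           ; assumed accessor: (r g b) sampled in [0, 1]^3
    (and (> (second rgb) 0.5)              ; the green channel is high...
         (> (second rgb) (first rgb))      ; ...dominates red...
         (> (second rgb) (third rgb)))))   ; ...and dominates blue

;; Geometric terms can be carved out of continuous space the same way:
(define (left-of? a b)
  (< (x-position a) (x-position b)))       ; x-position: assumed accessor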
[Figure 10 follows in Section 3.2.2; Figure 9 panels: (A) Language-to-code translation; (B) Rendered scenes from the conditioned generative model. The condition expressions, reconstructed from the figure:]

"There is at least one red mug in this scene."
(condition (>= (length ((filter-color 'red)
                        ((filter-shape 'mug) (objects-in-scene 'scene)))) 1))

"There are also at least three green cans."
(condition (>= (length ((filter-color 'green)
                        ((filter-shape 'can) (objects-in-scene 'scene)))) 3))

"There aren't any yellow objects."
(condition (= (length ((filter-color 'yellow) (objects-in-scene 'scene))) 0))

"There are more red mugs than green cans."
(condition (> (length ((filter-color 'red)
                       ((filter-shape 'mug) (objects-in-scene 'scene))))
              (length ((filter-color 'green)
                       ((filter-shape 'can) (objects-in-scene 'scene))))))
Figure 9: Each sentence in this sequence (left) translates into a separate, composable condition expression that updates the underlying generative model over scene states. After each sentence, sampling symbolic scene states from the updated distribution and rendering them (right) yields images that reflect the prior over scenes and are consistent with the information in all successive sentences.
Even on this relatively simple domain, Fig. 8 and Fig. 9 also showcase ways in which the LLM can represent conditional inferences from language to program expressions that go beyond simple, literal semantic meanings. These examples build on what we already saw in Section 2.2, in which the LLM can contextually interpret vague language like very strong as thresholds on continuous variables in the generative world model. In this domain, we find that the LLM can translate vague quantifiers (like few, most, aren't a lot, a bunch, or aren't many) without explicit program predicates defining each lexical term: the model can directly translate these terms into reasonable, interpretable quantities over sets of objects (such as translating only a few to (<= 3) objects). We also find that sampling from the distribution over meanings further supports the idea that the LLM represents a broader distribution over intended meanings, including acceptable semantic variation in the interpretation of vague lexical terms. Sampling from the distribution at higher temperatures, for instance, we find that our implementation variously translates most into program expressions that interpret
this as more than half, or more than 80%, or other greater fractions, of the set of overall objects in the scene. These translations draw on the language-to-code model's background prior on language itself (we do not prompt it with examples of these particular phrases), its amortized understanding of how these phrases relate to continuous, named variables in code (like the length of a set of objects), and the particular context of the generative world model itself (which defines the prior over numbers of objects that determines the context-specific scale of these graded quantifiers).
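To make this concrete, the following sketch writes out two of these sampled readings of "most of the objects are red" in the notation of Fig. 9. The two thresholds are exactly the interpretations described above; the particular composition of predicates is ours:

;; Sampled reading 1: "most" as strictly more than half of the objects.
(condition (> (length ((filter-color 'red) (objects-in-scene 'scene)))
              (* 0.5 (length (objects-in-scene 'scene)))))

;; Sampled reading 2: "most" as more than 80% of the objects.
(condition (> (length ((filter-color 'red) (objects-in-scene 'scene)))
              (* 0.8 (length (objects-in-scene 'scene)))))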
Translations of vague quantifiers like these have been handled in classical semantics and recent accounts as explicit, pragmatic and probabilistic inferences based on context-specific priors: the acceptable quantity most people would infer for many mugs on a table is intuitively very different from the quantity intended by many grains of sand (Edgington, 1992, 1997; Graff, 2000; Lassiter & Goodman, 2017). The results we show here provide further evidence that LLMs can often amortize many of these inferences, to directly predict common interpretations from language. As we discuss in Section 5, future work might explore more fluid, joint integrations of these approaches to inferring meanings, trading off between the amortized interpretations the LLM can produce and more explicit probabilistic inference, such as conditioning on other information in language. Learning that Sally is a wholesale porcelain supplier who owns thousands of mugs in a nearby warehouse might lead you to infer an updated meaning of Sally has many mugs, but this is a complex inference that we might not expect to be amortized in an LLM from the background distribution of language and commented code.
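For contrast, here is a minimal sketch of the explicit route, in the spirit of Lassiter & Goodman (2017): treat the threshold for "many" as a latent variable with a context-specific prior, so that conditioning on other information (like Sally's warehouse) can shift it under inference. The prior and names are our illustrative assumptions, not the paper's implementation:

;; "Sally has many mugs": the threshold is itself uncertain, with a prior
;; scaled to the context (mugs on a table, not grains of sand).
(define many-threshold (uniform 2 20))   ; illustrative context-specific prior
(condition (> (length ((filter-shape 'mug) (objects-in-scene 'scene)))
              many-threshold))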
Putting it together: Reasoning from language about visual scenes. Taken together, the examples in Fig. 8 show how this approach naturally extends the components of this framework (the ability to describe possible worlds in language, flexibly update a background distribution of beliefs within a conditioned generative model, and query this model to draw probabilistic inferences) so that they also ground out in visual scenes. The more extended example in Fig. 9 highlights the granular, intuitive way in which the distribution over scenes changes to reflect each successive new sentence, updating a flexible distribution over scenes that remains consistent with all of the previous observations.
3.2.2 Language about dynamic physical scenes
When we talk about a scene, however, we describe more than just the colors and shapes of objects sitting on a table. We talk in verbs, describing events unfolding in the changing, physical world around us. Consider, for instance, descriptions of another set of tabletop scenes: ones that just involve a red object placed to the left of a blue one on a table (Fig. 10). These scenes are initially even simpler than our tables of cans and dishware, but still afford a range of dynamic and physics-specific descriptions.
You can easily imagine, for instance, what would happen if someone pushed the red ball gently to the right: you might say that it would bump into the blue ball, and you could likely imagine how fast the blue ball would be moving as a result. You can infer how these scenes would change if someone pushed the red ball much harder, as if shooting a billiard ball, or tapped it even more gently, nudging it forward with their finger, so that perhaps it wouldn't collide with the blue ball at all. These inferences are sensitive to many other properties of the scene, and of these objects, that we could describe in language, like whether the red ball is really heavy, or the blue ball is very light, or at least much lighter than the red one. If we changed the objects in the scene, and now placed a red block to the left of the blue one, your intuitive understanding of how different shapes relate to different degrees of friction would again change how you might see these scenes play out in your mind, and how you might answer questions about their collision and motion.
As adults, we have a deep, general understanding of how physical objects move and behave, and extensive developmental evidence suggests that well before we acquire language, we understand many core physical principles that govern our world (Baillargeon, 2004; Hespos & Baillargeon, 2008; Rips & Hespos, 2015; Spelke, 1990; Spelke, Gutheil, & Van de Walle, 1995; Téglás et al., 2011). A productive line of computational cognitive models, in turn, has modeled human physical understanding as probabilistic inference over a mental physics engine (Battaglia et al., 2013; de Avila Belbute-Peres, Smith, Allen, Tenenbaum, & Kolter, 2018; Lake et al., 2017; Ullman et al., 2017; Yi et al., 2019), modeled on the programmable physics simulation engines used in video games, computer animation, and robotics (Coumans & Bai, 2016; Erez, Tassa, & Todorov, 2015; Todorov, Erez, & Tassa, 2012).
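Schematically, and ahead of the fuller model in Fig. 10, the interface this section assumes can be sketched as follows: latent object properties are sampled probabilistically, while a simulator call advances the scene state deterministically, one timestep at a time. Here step-physics is our stand-in for a call into such an engine, and the recursion is ours, not the paper's exact implementation:

(define (simulate scene t delta_t)
  (if (<= t 0)
      scene
      (simulate (step-physics scene delta_t)   ; one deterministic engine tick
                (- t delta_t)
                delta_t)))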
[Figure 10, top panels: "Generative world model of dynamic physical scenes" and "Sampled scene graphs"]
Prompt: "Imagine a table with a red object placed to the left of a blue one."
Each sampled scene graph (Scene 1, Scene 2) lists its objects' attributes over simulation timesteps t = 1 ... 10, e.g. object-1: { color: red, shape: sphere, mass: ..., x: ..., v: ..., force: ... }; object-2: { color: blue, shape: sphere, mass: 3.0, x: ..., v: 0.0, force: 0.0, ... }.
The generative model, as far as it is legible in the figure:

(define choose_shape ...)
(define choose_mass ...)
(define get_initial_x ...)
(define generate-object
  (mem (lambda (obj-id)
    (list (pair 'object-id obj-id)
          (choose_shape obj-id)
          (choose_color obj-id)
          (choose_mass obj-id) ...))))
(define generate-initial-scene-state ...)
(define simulate-physics
  (mem (lambda (scene total_t delta_t)
    (let check_collisions ...)
    (let generate_next_scene_state_at_time ...) ...)))

[Figure 10, bottom panel: "Reasoning about physical scenes from natural language"]
Situation A. "Imagine that the red object is a ball, and is pretty heavy."
(condition (get_singleton_object (lambda (object)
  (and ((is_color? 'red) object)
       ((is_shape? 'sphere) object)
       (> (get_attribute object 'mass) 2)))))
"The red ball hits the blue one. How fast does the blue ball move after the collision?"
(a query over an exists_event expression picking out a 'collision event whose participants, via is_participant_of_event?, are the red and blue objects)
"And the blue object is also a ball, but is fairly light."
(condition (get_singleton_object (lambda (object)
  (and ((is_color? 'blue) object)
       ((is_shape? 'sphere) object)
       (< (get_attribute object 'mass) 2)))))
"Now imagine that the red ball is pushed forcefully to the right."
(condition (get_singleton_object (lambda (object)
  (and ((is_color? 'red) object)
       ((is_shape? 'sphere) object)
       (> (get_attribute object 'f0) 6)))))
"Now how fast does the blue ball move after the collision?"
Situation B. "Now, imagine that the red ball is quite light." "And the blue ball is somewhat heavy." "The red ball is pushed gently to the right."
Situation C. "Imagine that all of the objects are blocks." "How fast does the blue block move after it is bumped by the red one?" [remainder of this panel is truncated in the extraction: "The red block ..."]
Figure 10: The way we talk about the world also draws on our intuitive physical knowledge. (Top) A probabilistic generative model describes a prior over tabletop scenes with a red and a blue object placed side by side, of varying mass and shape. Integrating a physics simulation engine into this generative model allows it to express a prior over dynamic scenes, modeling how each possible scene unfolds over time as differing initial forces are applied to the red object. (Bottom) Language about possible physical scenes can again be translated into conditions (blue) and queries (green) on the distribution over dynamic scenes. Rendering these scenes produces images that reflect the conditions, and inference over the simulations allows the framework to answer queries contingent on the properties of the objects described in language.
As with the previous example on visual scenes, our goal in this section will be to illustrate how the overarching framework we have described in this paper can integrate language with other domains of human reasoning: perception and visual imagination, and intuitive physical reasoning. By translating language into a probabilistic language of thought, we can relate the semantics of language to these other, well-studied computational and cognitive modeling approaches, using probabilistic programs as the underlying interface between language, inference, and these engines for perception and physical simulation.
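As a hedged end-to-end sketch of that interface for Situation A in Fig. 10: the translated conditions constrain the latent scene, and a query reads a quantity out of the simulated trace. Here velocity-after-collision is our illustrative stand-in for that readout, not a primitive from the paper:

;; Condition: "the red ball is pretty heavy" (as reconstructed in Fig. 10).
(condition (get_singleton_object (lambda (object)
  (and ((is_color? 'red) object)
       ((is_shape? 'sphere) object)
       (> (get_attribute object 'mass) 2)))))

;; Query: "How fast does the blue ball move after the collision?"
(query (velocity-after-collision 'blue
         (simulate-physics scene total_t delta_t)))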
This approach is closely related to other recent work from the AI literature, most notably R. Liu et al. (2022), which also extends large language models with a physics engine to ground natural language in physical simulation. By incorporating an interface to physics within a general probabilistic programming language, we show here how these approaches can model the commonsense, probabilistic judgments we make about everyday physical language, including with respect to uncertainty and vagueness in language about the underlying world state, or combined with inputs from visual reasoning, as discussed in the prior section.
Integrating the probabilistic generative model over scenes with a physics engine. To model language about the example scenes we described here (red and blue balls, or blocks, placed on a tabletop; Fig. 10), we implement a probabilistic generative model that is similar by design to the previous visual scenes domain (a very short excerpt appears in Fig. 10, and the full model appears in Appendix A.3.3). This generative program describes a prior over the possible properties of the objects initially set on a table, modeling scenes as a collection of objects in which each individual object is again generated (generate-object) based on stochastic choices over its possible properties (e.g., choose_shapes). In this domain, however, we also model an explicit prior over the physical properties of each object, such as its mass (choose_mass), and the relationship between shape and friction (a simple get_friction_constants function returns different constants, with a higher coefficient for blocks than for spheres).
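To make this concrete, the following Python sketch mirrors the structure of such a generative prior. It is a minimal illustration, not the paper's implementation (which appears in Appendix A.3.3): the particular distributions, friction coefficients, and the treatment of a push as an instantaneous impulse are all our own simplifying assumptions.

    import math
    import random

    GRAVITY = 9.8  # m/s^2

    def get_friction_constants(shape):
        # Assumed coefficients: blocks slide with more friction than spheres.
        return {"ball": 0.1, "block": 0.4}[shape]

    def choose_shape():
        return random.choice(["ball", "block"])

    def choose_mass():
        # Assumed log-uniform prior over a plausible range of masses (kg).
        return math.exp(random.uniform(math.log(0.1), math.log(10.0)))

    def choose_initial_forces():
        # A horizontal push with random direction and magnitude, treated
        # below as an impulse (N*s) for simplicity.
        return (random.choice([-1.0, 1.0]) * random.uniform(0.0, 20.0), 0.0)

    def generate_object(color, position):
        shape = choose_shape()
        return {"color": color, "shape": shape, "mass": choose_mass(),
                "friction": get_friction_constants(shape),
                "position": position, "velocity": (0.0, 0.0),
                "force": (0.0, 0.0)}

    def generate_scene():
        red = generate_object("red", position=(-0.5, 0.0))
        blue = generate_object("blue", position=(0.5, 0.0))
        push = choose_initial_forces()
        red["force"] = push  # keep the raw push around for conditioning
        # Impulse -> instantaneous change in velocity of the pushed object.
        red["velocity"] = (push[0] / red["mass"], 0.0)
        return [red, blue]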
As with the visual scenes example, each sample from this generative model again returns a structured, symbolic representation of a possible initial scene state, as a list of object entities that represents each object as a dictionary-like mapping from attribute kinds to values. This dictionary also stores each object's initial kinematic state, such as its position, velocity, acceleration, and any forces applied to it. To model the various ways we can push the objects around, our generative model over scenes also implements a stochastic function over possible initial forces (choose_initial_forces) applied to an object.
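A hypothetical sample from generate_scene() in the sketch above might look as follows; all numbers are illustrative:

    [{"color": "red", "shape": "ball", "mass": 1.7, "friction": 0.1,
      "position": (-0.5, 0.0), "velocity": (7.2, 0.0), "force": (12.3, 0.0)},
     {"color": "blue", "shape": "block", "mass": 4.2, "friction": 0.4,
      "position": (0.5, 0.0), "velocity": (0.0, 0.0), "force": (0.0, 0.0)}]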
To model how each possible world unfolds as a dynamic scene over time, we implement a simulate_physics function (Fig. 10) that integrates the basic functionality of any programmable physics engine into the probabilistic model: this function takes in a scene state that specifies the relevant physical properties of objects, and returns a sequence of scene states forward-simulated in time under the laws of physics. In our implementation, this sequence is a list of scene states at each timestep, each of which contains its own set of objects with the relevant kinematic properties (like position, velocity, and acceleration) updated at that timestep. The physics model we use in our example is simple enough that, for illustrative purposes, we implement it fully within the body of the probabilistic program itself (see Appendix A.3.3): our simulate_physics updates each object at each timestep under the basic kinematic laws of Newtonian mechanics, includes a simple implementation of static and kinetic friction under gravity, and models simple collisions as impulse exchanges in momentum.
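Continuing the sketch above, a deliberately minimal simulate_physics might look like the following. The Euler integrator, the timestep, the proximity-based collision test, and the one-dimensional elastic collision are all simplifying assumptions made for illustration, not the paper's actual engine.

    DT = 0.01  # simulation timestep (s)

    def step(obj):
        # Kinetic friction decelerates a moving object but never reverses it.
        vx = obj["velocity"][0]
        new = dict(obj)
        if vx != 0.0:
            new_vx = vx - math.copysign(obj["friction"] * GRAVITY * DT, vx)
            if new_vx * vx <= 0.0:  # friction brings the object to rest
                new_vx = 0.0
            new["velocity"] = (new_vx, 0.0)
        new["position"] = (obj["position"][0] + vx * DT, obj["position"][1])
        return new

    def resolve_collisions(scene, eps=0.1):
        a, b = scene
        xa, xb = a["position"][0], b["position"][0]
        va, vb = a["velocity"][0], b["velocity"][0]
        if abs(xa - xb) < eps and (va - vb) * (xb - xa) > 0:  # approaching
            # 1-D elastic collision: exchange momentum between the objects.
            m1, m2 = a["mass"], b["mass"]
            a, b = dict(a), dict(b)
            a["velocity"] = (((m1 - m2) * va + 2 * m2 * vb) / (m1 + m2), 0.0)
            b["velocity"] = (((m2 - m1) * vb + 2 * m1 * va) / (m1 + m2), 0.0)
        return [a, b]

    def simulate_physics(scene, n_steps=300):
        trajectory = [scene]
        for _ in range(n_steps):
            scene = resolve_collisions([step(obj) for obj in scene])
            trajectory.append(scene)
        return trajectory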
The rendered simulations we show in Fig. 10 also showcase the interplay between these modular, API-like interfaces integrated into a probabilistic language of thought. Combined with the render interface from the previous section, we can not only simulate underlying physical scene states, but also visualize them by rendering each individual scene state in the sequence over time. Collectively, this model now captures a prior over tabletop scenes, models how any given scene in the distribution unfolds dynamically under physics, and captures how each scene appears visually over time.
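In the terms of our running sketch, this composition is a one-liner; here render is the hypothetical scene-to-image interface from the previous section, not a function we define:

    # Render every timestep of a sampled, forward-simulated scene into frames.
    frames = [render(state) for state in simulate_physics(generate_scene())]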
Grounding physical language in program expressions. By extending the underlying probabilistic world model to interface with a physics engine, we can ground the semantics of language about the physical world in intuitive, human-like physical reasoning, modeled by the physics simulation engine over world states. Descriptions of the physical properties of objects, for instance, like the blue ball is not very heavy (Fig. 10), translate into conditions on the mass property of an object in the world state, and maintain uncertainty inherent to language: phrases like very heavy translate into conditions that threshold a continuous distribution of possible masses. As in the visual scene example, sampling from the conditioned generative model produces dynamic scene simulations that reflect language. Descriptions of heavy blue balls, or red blocks that are relatively light, or scenes in which a red ball is pushed forcefully, or in which a red block bumps into a blue one, all connote sets of scenes that the model grounds in explicit physical simulation. In turn, queries about distributions over physical scenes (like how fast a heavy blue ball will move after it is bumped) reflect probabilistic inferences that condition on all of these relevant descriptions in language, estimated by sampling and running physical simulations over the possible world states.
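As a sketch of how such a conditioned query can be estimated, the following rejection sampler conditions on the blue object being a ball that is not very heavy, and queries its speed after being bumped. The 5 kg cutoff is an illustrative assumption about how a graded phrase might be translated:

    def expected_blue_speed(n_samples=200):
        # Condition: blue is a ball and "not very heavy" (assumed mass < 5 kg).
        # Query: the blue object's peak speed across the simulated trajectory.
        speeds = []
        while len(speeds) < n_samples:
            scene = generate_scene()
            blue = scene[1]
            if blue["shape"] != "ball" or blue["mass"] >= 5.0:
                continue  # reject samples inconsistent with the description
            trajectory = simulate_physics(scene)
            speeds.append(max(abs(state[1]["velocity"][0])
                              for state in trajectory))
        return sum(speeds) / len(speeds)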
In this example, we highlight an approach to translating verbs and descriptions of physical events (the red ball pushed forcefully to the right, the red ball hits the blue ball) that grounds them directly over continuous variables in our world model. In Fig. 10, for example, our implementation translates pushed forcefully to the right into a condition expression that picks out a distribution of initial forces, over a space of continuous force vectors with direction and magnitude, as the meaning of push in a physical world. Similarly, we translate hits with respect to collisions simulated by the physics engine between the two object entities.
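In our running sketch, such translations might bottom out in predicates like the following; the force threshold and proximity test are illustrative assumptions standing in for the engine's actual collision machinery:

    def pushed_forcefully_right(obj, threshold=10.0):
        # "pushed forcefully to the right": a large push in the +x direction.
        return obj["force"][0] > threshold

    def hits(trajectory, i, j, eps=0.1):
        # "i hits j": the two objects come into contact at some timestep.
        return any(abs(state[i]["position"][0] - state[j]["position"][0]) < eps
                   for state in trajectory)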
In our appendix, we also implement and show how a discrete event semantics can be constructed over variables in the physics engine, to highlight potential future connections between our implementation and more classical event semantics representations. Neo-Davidsonian event semantics and related approaches (D. Davidson & Rescher, 1967; Parsons, 1990), for instance, have long modeled events in language with discrete event entities and lexical event predicates (e.g., is_hitting) that describe particular categories of events in time. Prior work in classical event semantics has also considered how discrete event representations relate to underlying physical forces (Talmy, 1988), with particularly close connections to lexical semantics approaches (Jackendoff, 1985; Levin, 1993; Pinker, 1984; Schuler, 2005; Talmy, 1988) that realize verb meanings in cognitively grounded physical concepts of motion and forces.
Our implementation concretely realizes these semantic events and predicates as functions derived entirely on top of a fully realized, continuous world state modeled in a physics engine: is_hitting, for instance, is an event derived on top of the collision mechanics in the underlying physics engine. Other event predicates, like is_moving or is_resting, can similarly be derived as thresholds on continuous kinematic properties (here, velocity) represented in the world state. Our broader goal is to show that all of these can be constructed over a probabilistic language of thought, which grounds out concretely with respect to states in an implementable physics engine.
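In the sketch, these derived predicates reduce to a few lines each; the velocity threshold is again an assumption:

    def is_moving(obj, threshold=0.01):
        # An object "is moving" if its speed exceeds a small threshold.
        vx, vy = obj["velocity"]
        return math.hypot(vx, vy) > threshold

    def is_resting(obj, threshold=0.01):
        return not is_moving(obj, threshold)

    def is_hitting(state, i, j, eps=0.1):
        # Derived from the engine's per-timestep collision test.
        return abs(state[i]["position"][0] - state[j]["position"][0]) < eps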
Translating from language to program expressions. As with our visual scenes example, the translations we show in Fig. 10 are again chosen to illustrate the generalization and amortized inferences that the language-to-code LLM can make. Much like vague quantifiers, we find that the context-conditioned LLM can directly infer reasonable meanings for graded terms that pick out thresholds over a numeric distribution, translating phrases like not very heavy, pretty heavy, and pretty light directly into reasonable, context-specific thresholds on continuous masses, or pushed gently and pushed forcefully into thresholds on forces. Again, we see interesting future grounds for further integrating these kinds of amortized inferences with more explicit, probabilistic inference mechanisms for deriving them, such as integrating inferences over language with new contextual observations from other modalities, like perceptual or motor observations from seeing or actually moving these objects, that might update one's background beliefs over the distribution of masses.
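Schematically, such amortized translations amount to a mapping from graded phrases to threshold predicates over the world state. The cutoffs below are invented purely for illustration, standing in for what a context-conditioned language-to-code model might emit:

    CONDITIONS = {
        "not very heavy":    lambda obj: obj["mass"] < 5.0,
        "pretty heavy":      lambda obj: obj["mass"] > 5.0,
        "pretty light":      lambda obj: obj["mass"] < 2.0,
        "pushed gently":     lambda obj: abs(obj["force"][0]) < 5.0,
        "pushed forcefully": lambda obj: abs(obj["force"][0]) > 10.0,
    }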
Putting it together: Probabilistic inference and physics simulation from language. The examples in Fig. 10 show how this approach can capture the nuanced relationships between language and physical reasoning. Language that modulates any of the physical properties from our introduction to this section (the masses of objects, their shapes and corresponding friction when moving, and the forces they receive) changes the distribution over internally simulated scenes, and is reflected in updated inferences about downstream events.
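As a final end-to-end sketch, we can chain the pieces above: condition on the red ball is pushed forcefully to the right, and query the probability that it hits the blue ball. Again, this is a toy rejection sampler over our assumed model, not the paper's inference procedure:

    def prob_hits_given_forceful_push(n_samples=500):
        n_hits, total = 0, 0
        while total < n_samples:
            scene = generate_scene()
            if not pushed_forcefully_right(scene[0]):
                continue  # condition: "pushed forcefully to the right"
            trajectory = simulate_physics(scene)
            n_hits += hits(trajectory, 0, 1)  # query: "does red hit blue?"
            total += 1
        return n_hits / total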
Future directions: Perception as inverse rendering and complex physical reasoning as intuitive physics. As with all of our other examples, it is important to emphasize that our simulate_physics interface is almost the simplest possible world model we might construct over physical scenes. The approach we take here is inspired by, but much simpler than, many other probabilistic generative models (Allen, Smith, & Tenenbaum, 2020; Battaglia et al., 2013; Ullman et al., 2017; J. Wu et al., 2015a; Xu et al., 2021) of more complex object configurations in more complex environments (such as ramps and stacks of objects), of many other properties that we can describe about objects themselves (such as their material), and of arbitrary forces (like bumping the table or dropping objects from above).
Our approach in these sections also suggests a rich line of future work for reasoning jointly about observations in language and from perception. While we do not implement a perceptual module in our example, the framework we sketch here can be directly integrated with the broad body of work on inverse graphics, which frames scene understanding as inference from observed visual inputs to recover structured representations of a scene's contents (D. Kersten, Mamassian, & Yuille, 2004; D. Kersten & Yuille, 1996; Lee & Mumford, 2003; J. Wu, Tenenbaum, & Kohli, 2017; J. Wu, Yildirim, Lim, Freeman, & Tenenbaum, 2015b; Yi et al., 2018; Yildirim, Belledonne, Freiwald, & Tenenbaum, n.d.; Yuille & Kersten, 2006). Our framework suggests a particularly fruitful integration between language and the growing body of work that
3 WORLD MODELS
To draw inferences about visual scenes from perceptual inputs, models like these incorporate convolutional neural networks that make fast, amortized proposals about the scene state from vision, with respect to a generative program that defines the underlying scene state and guides inferences about particular scenes, such as reasoning about occlusion.
Integrated with the approach we describe here, this framework could ground linguistic queries directly into vision, allowing structured inferences for visual question answering (e.g., counting the number of unique colors of the dishes in a scene). Moreover, it could enable more complex, joint inferences that integrate visual observation with linguistic information about latent physical properties of objects in a scene (e.g., mass, friction) or the presence or identity of occluded objects. Such multimodal integration holds the potential to shed further light on the ways that linguistic knowledge can shape our understanding of and reasoning about physical scenes.
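To make the flavor of such a query concrete, here is a hedged sketch in the Church-style notation used throughout this paper. The scene accessors (objects, dish?, color-of) and the scene variable are illustrative assumptions, not names from our implementation:

```scheme
;; Sketch only: assumes a generative scene model that exposes a list of
;; scene objects, a dish? predicate, and a color-of accessor.
(define (unique xs)
  (if (null? xs)
      '()
      (let ((rest-unique (unique (rest xs))))
        (if (member (first xs) rest-unique)
            rest-unique
            (cons (first xs) rest-unique)))))

;; "How many unique colors of dishes are in the scene?"
(query
  (length (unique (map color-of (filter dish? (objects scene))))))
```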
# 3.3 Language for reasoning about agents and plans
One of the most deeply human things we can talk about is other people. To conclude this section, we turn to language about other social beings: agents who want things, chase goals, and plan how to act in the world around them.
As an illustrative example, we consider a domain (Fig. 11) inspired by C. L. Baker, Tenenbaum, and Saxe (2007), which evaluated commonsense social inferences about agents with different preferences and goals. In our slightly modified example, we consider a set of people with varying food preferences who are making plans for lunch. Based on the map shown in Fig. 11, we'll imagine which restaurant they might go to, given what foods they like, how far each restaurant is from their office (shown in blue), and whether restaurants happen to be open or closed. We'll also note that students can bike or walk to any restaurant, and include the intuitive fact that biking is faster than walking on roads, but slower on the lawns.
The original experiments in C. L. Baker et al. (2007) used visual stimuli to depict agents' paths and plans, but language is a particularly natural and nuanced way to communicate information about other agents. Consider the range of situations we can describe in this simple example. We might leverage our wide vocabulary for describing the spectrum of someone's preferences and desires: whether they crave pizza or hate vegetables, or whether they love sushi rather than merely liking it. We might describe their more concrete, discrete goals, like getting to a pizza place or generally getting to the closest restaurant to the office. The inferences we draw from language also depend on our intuitions about agents themselves. All else being equal, we expect people to minimize the effort it takes to act, while trying to maximize the value they gain from acting. We might generally expect someone to walk down Ames Street if they wanted to go to the pizza place, rather than taking an unnecessarily convoluted path, or to jump on a bike if they owned one, rather than taking a slower walk there. We also understand, of course, that people need to accommodate the world itself in their plans, and might not go to the pizza place, no matter how much they love pizza, if they were told that the pizza place was closed.
Perhaps more subtly, but equally importantly, what we know about agents also informs the wide range of inferences we can draw from language about their actions. Consider, for instance, what you can infer from being told that someone had started at the office and was now walking across the southern lawn. Because they're on a direct route towards the vegetarian place, you might infer that they are more likely to prefer vegetarian food, and that they either know or at least believe that the vegetarian place is open. Because they are walking on foot, you might also suspect that they do not own a bike, which would have allowed them to get to the restaurant more quickly. All of these inferences build on a cohesive picture of agents as a whole: our expectations about agents as goal-directed, efficient actors inform how we think about any given action.
As with visual and physical reasoning, this section builds more generally on extensive work in cognitive science and artificial intelligence on social reasoning, and seeks to integrate this broader literature into our framework for language. Developmental evidence suggests that we have a core conceptual understanding of agents as goal-directed actors from a very young age (Csibra, 2008; Csibra, Bíró, Koós, & Gergely, 2003; R. M. Scott & Baillargeon, 2013; Spelke & Kinzler, 2007). Computational cognitive models, and the broader AI planning literature, have long approached social inferences like those we describe here under a unifying model of planning and inverse planning (C. Baker et al., 2011; C. L. Baker, Saxe, & Tenenbaum, 2009; C. L. Baker et al., 2007; M. F. Cusumano-Towner, Radul, Wingate, & Mansinghka, 2017; Jara-Ettinger, Schulz, & Tenenbaum, 2020; Seaman, van de Meent, & Wingate, 2018). This framing couples the forward planning of actions that achieve goals or maximize utilities to the inverse problem of inferring latent variables about the agent or the world from observations of their actions.
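As a rough schematic (our notation here, not an equation drawn from these sources), inverse planning is Bayes' rule applied over a planner:

$$ P(\text{goals}, \text{world} \mid \text{actions}) \propto P(\text{actions} \mid \text{goals}, \text{world}) \, P(\text{goals}) \, P(\text{world}), $$

where the likelihood term is supplied by a model of (approximately) rational planning: action sequences that a planner would choose given those goals in that world receive high probability.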
The example below extends the modeling motif from our previous discussion of visual and physical reasoning, which showed how our framework can relate language to other core cognitive modules via interfaces implemented in a general probabilistic language of thought. In this section, we introduce model-based planners as another such core computational module, which can be integrated into this framework to support a wide range of probabilistic forward and inverse inferences about agents and their actions as they are referenced in language.
Integrating a probabilistic generative model over agents with a planner. As a concrete example, the generative program excerpted in Fig. 11 (shown in full in Appendix A.4.1) illustrates how an integrated probabilistic modeling and planning language can describe the agents and restaurants domain.
[Figure 11 appears here. Its panels, rendered as images in the original, show (i) the generative world model of the planning domain as probabilistic program code, (ii) a reference map of the gridworld campus (Ames Street, Barlow Street, Carson Avenue, and Danner Street, with lawns, the office, and the three restaurants), and (iii) examples of reasoning about agents, goals, and plans from natural language, with conditions, queries, and inferences over plans for two situations. See the caption below.]
Figure 11: Our language about other people builds on our intuitions about how agents act on their preferences and goals. (Top) Our example probabilistic generative model describes a prior over agents with different preferences for the nearby restaurants shown on the map, as well as the relative cost of getting to each one on bike or on foot. Integrating a model-based planner into this generative model allows it to express a prior on how agents will actually act based on their desires, balancing these preferences against whether restaurants are open, and whether or not they have a bike. (Bottom) Observations and queries about the agents, their goals, and about the world itself update a unified belief distribution, reflecting how agents plan in the world and how observing their actions drives inferences about the latent state in the world.
To model background states of this environment, our implementation represents the spatial structure of the campus as a simple gridworld map, along with stochastic Boolean variables that model whether someone owns a bike (has_bike) and whether a given restaurant is open (is_open).
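A lightly cleaned-up reading of these definitions, as excerpted in the partly garbled code panel of Fig. 11, looks roughly as follows; the grid rows are abridged where the figure text is unrecoverable:

```scheme
;; Background state of the environment (sketch; full model in Appendix A.4.1).
(define restaurants (list 'sushi 'pizza 'vegetarian))

;; Each restaurant is independently open or closed; mem makes the draw
;; persistent, so repeated references to the same restaurant agree.
(define is_open (mem (lambda (restaurant_type) (flip))))

;; Whether a given agent owns a bike is likewise a persistent coin flip.
(define has_bike (mem (lambda (agent-id) (flip))))

;; The campus as a gridworld: each cell carries a road, lawn, office, or
;; restaurant tag. (Rows abridged from the figure.)
(define gridworld
  (list (list 'ames   'lawn   'lawn   'lawn   'sushi)
        (list 'ames   'lawn   'lawn   'lawn   'danner)
        (list 'office 'barlow 'barlow 'barlow 'danner)
        (list 'ames   'lawn   'lawn   'lawn   'danner)
        (list 'ames   'lawn   'lawn   'lawn   'vegetarian)))
```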
We then introduce a generic, utility-based formulation derived from the classical AI planning literature to model the varying preferences of any given person and the effort of taking particular actions (see Russell and Norvig (2021) for a review).
Incorporated into a probabilistic generative model, this formulation lets us express the distribution of preferences any particular agent could have, and the way these preferences interact with the stochastic mechanics of any given world. In our implementation, we model these varying preferences as a stochastic utility function associated with particular agents and restaurants (restaurant_utility). Our implementation uses a bimodal Gaussian distribution, in which people tend to have distinctly negative or positive preferences for any given restaurant, but any other formulation would be easily expressible. We also model how these preferences interact with other aspects of the world: we condition the value an agent derives from actually arriving at a restaurant (utility_at_restaurant_state) on whether or not it is open. These utilities interact with the possible actions an agent can take to get to different restaurants. We model the distribution of possible actions an agent might take (our available_actions function conditions on whether an agent has_bike), and the varying effort of individual actions. Our motion_utility conditions on the type of action and the state in which it is taken, to model the greater effort of biking on grass and the relative ease of biking on the road.
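A sketch of these two utility terms, lightly cleaned up from the corresponding Fig. 11 excerpt; the mean and variance constants, and the exact road costs, should be read as illustrative:

```scheme
;; Bimodal prior over how much an agent values a restaurant: draws are
;; memoized per (agent, restaurant) pair, and land near a distinctly
;; positive or distinctly negative mean.
(define restaurant_utility
  (mem (lambda (agent-id restaurant_type)
    (uniform-draw
      (list (gaussian POSITIVE_UTILITY_MEAN UTILITY_VARIANCE)
            (gaussian NEGATIVE_UTILITY_MEAN UTILITY_VARIANCE))))))

;; Per-step effort of moving, conditioned on terrain and mode of motion:
;; biking is cheap on roads but costly on lawns, relative to walking.
(define motion_utility
  (mem (lambda (agent-id location_type motion_type)
    (case location_type
      ((lawn) (case motion_type
                ((is_biking)  -1.0)
                ((is_walking) -0.2)))
      (else   (case motion_type          ;; road costs: illustrative values
                ((is_biking)  -0.05)
                ((is_walking) -0.1)))))))
```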
Up to this point, the generative model simply expresses a general prior over world states that includes agent preferences. Now, to model how an agent actually decides on a course of action conditioned on the world state, we can finally introduce a plan interface (Fig. 8) that calls out to a model-based planner. Our implementation, while simple, provides the basic functionality at the core of an AI planner: it computes a sequence of actions that achieves a goal or maximizes a value function, subject to an agent's underlying preferences, available actions, and the conditions of the environment. As with the physics interface in our previous section, our example planner implementation is simple and generic enough that we also implement it fully within the body of the probabilistic program itself (see Appendix A.4.1) for illustrative purposes. Our implementation here uses a simple value-iteration algorithm, which computes an optimal policy that trades off the value an agent derives from particular restaurants against the cost of taking actions (walking or biking in any given direction, from any location in the map) towards them.
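To give a flavor of what this interface computes, here is a minimal Church-style sketch of value iteration with a greedy rollout. It assumes the surrounding generative model supplies transition and reward functions (built from the utilities above); the discount factor, iteration count, and horizon are illustrative choices, not the exact implementation:

```scheme
;; Assumes: (transition state action) -> next state
;;          (reward state action)     -> motion_utility plus restaurant value
(define gamma 0.95)
(define horizon 20)

(define (argmax f xs)  ;; assumes xs is non-empty
  (if (= (length xs) 1)
      (first xs)
      (let ((best (argmax f (rest xs))))
        (if (> (f (first xs)) (f best)) (first xs) best))))

(define (q_value V state action)
  (+ (reward state action) (* gamma (V (transition state action)))))

(define (improve V agent)
  ;; One Bellman backup; mem ensures each state's value is computed once.
  (mem (lambda (state)
         (apply max (map (lambda (a) (q_value V state a))
                         (available_actions agent))))))

(define (value_iterate agent n)
  (if (= n 0)
      (lambda (state) 0)  ;; initial value function
      (improve (value_iterate agent (- n 1)) agent)))

(define (rollout V agent state n)
  (if (= n 0)
      '()
      (let ((a (argmax (lambda (act) (q_value V state act))
                       (available_actions agent))))
        (cons a (rollout V agent (transition state a) (- n 1))))))

(define (plan agent state)
  ;; An (approximately) optimal action sequence from the agent's state.
  (rollout (value_iterate agent 50) agent state horizon))
```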
Language about agents as program expressions. By augmenting the probabilistic generative model with a planner, we can now ground many of the basic ways we talk about agents themselves in probabilistic conditions and queries to this model.
Language about what people want and prefer, like whether someone wants, likes, loves, doesn't mind, or hates a given restaurant in our example domain, can construct formal conditions over the underlying utility variables that in turn drive inferences about how the agent will act. In the examples shown in Fig. 8, we illustrate the semantics of these terms as conditions constructed directly over the continuous utility variables defined in this domain. We could also derive a more explicit set of predicates (like a Boolean likes? predicate, defined over the underlying utilities), but as in several previous sections, we show these more transparent semantics (like translating likes into a > 0 threshold on utilities) to illustrate how language relates to our model of agents, and to demonstrate the amortized inferences that a language-to-code model can make in directly inferring these threshold values in context, and for new preference words.
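For illustration, a few such translations in this style are shown below; the > 0 threshold for likes follows the text, while the thresholds for the other preference terms (and the extra restaurant names) are assumed examples of what the model might produce:

```scheme
;; "Alex likes the sushi place."
(condition (> (restaurant_utility 'alex 'sushi) 0))
;; "Alex loves the pizza place."   (threshold is an assumed example)
(condition (> (restaurant_utility 'alex 'pizza) 5))
;; "Alex doesn't mind the cafe."   (threshold is an assumed example)
(condition (> (restaurant_utility 'alex 'cafe) -1))
;; "Alex hates the burger place."  (threshold is an assumed example)
(condition (< (restaurant_utility 'alex 'burger) -5))
```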
Observations about relevant aspects of the environment, like whether the sushi place is closed or Alex has a bike, are translated as in previous sections into conditions on the generative world model. In this integrated framework, these observations now support downstream inferences about how agents might change their behavior with respect to what we are told about the world.
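For example, using the predicate names above, these two observations might be translated as:

```scheme
;; "The sushi place is closed."
(condition (not (is_open 'sushi)))
;; "Alex has a bike."
(condition (has_bike 'alex))
```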
Finally, of course, explicit observations and queries about someone's goals, plans, and individual actions (Gabe was biking East on Barlow Street, or What restaurant will Alex go to for lunch?) can be interpreted with respect to the underlying model-based planner, to drive inferences about forward-planning agents choosing actions in the world, and inverse inferences over the many latent variables in the world that collectively explain language about someone's actions.
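As a sketch, a query like the second might compile to a Church-style program along these lines, where destination is an assumed helper returning the restaurant a plan reaches and start_state is Alex's known starting location:

```scheme
;; "What restaurant will Alex go to for lunch?"
(rejection-query
  (define alexs_plan (plan 'alex start_state))
  (destination alexs_plan start_state)   ;; the query
  (and (not (is_open 'sushi))            ;; conditions accumulated from context
       (has_bike 'alex)))
```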
Translating language using language-program distributions. We showcase several distinct examples (Fig. 11) of context-sensitive, pragmatic inferences derived using a language-to-code meaning function conditioned on language and this generative world model.
As in previous sections, we find that the LLM can directly ground vague, graded terms in context-specific thresholds over particular continuous variables in the probabilistic world model. Here, this approach grounds preference terms (doesn't mind, loves) into reasonable thresholds over the utility variables in the world model
(Fig. 11). We find that the LLM can both infer reasonable utility thresholds and generalize to words not explicitly given as example translations: we prompt the model with a handful of example pairs, such as a translation that maps the word likes to a > 0 threshold on utilities, and the LLM successfully generalizes this parse to ground other preference terms like hate and love, presumably based on the comparative valences of these preference terms in the broader distribution of language.
We also find that the LLM can directly translate quantifiers over contextual sets in this domain, like likes all of the nearby restaurants, into a conjunction over the set of restaurant literals in this domain, by conditioning on the generative world model during parsing. More concretely, this means the LLM identifies the relevant restaurants list (shown in the excerpted generative world model in Fig. 11), and conditions on it to directly produce the unrolled conjunction over the list contents, (and (is_open 'sushi) (is_open 'pizza) ...), intended by all restaurants, amortizing the computation that would otherwise have been necessary over a more literal semantics like (all restaurants). Together with the previous sections, these examples suggest how our framework might jointly support explicit inferences from language into various expressions in a language of thought, and learned patterns from the large language-to-code model that amortize some of these inferences over time, which we discuss directly as grounds for future work in Section 5.
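Schematically, the contrast between the literal and the amortized translation looks like the following (the restaurant list beyond sushi and pizza is a placeholder):

```scheme
;; Literal semantics, leaving the quantifier to be evaluated at inference time:
(condition (all (map is_open restaurants)))
;; Amortized translation, unrolled against the model's restaurants list:
(condition (and (is_open 'sushi) (is_open 'pizza) (is_open 'cafe)))
```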
Putting it together: Probabilistic inference and planning over language. The example dialogues shown in Section 2 illustrate how this approach captures the integrated social inferences we make about agents in language. We can now query the plans and goals of agents, deriving inferences with respect to the forward-planning module incorporated into the underlying generative model, conditioning flexibly on arbitrary information in context, and updating expectations about where agents will go and how they will change their plans based on new observations about the world. In turn, we can derive inverse planning inferences, like whether the pizza place is open, based on relatively tangential information about someone's actions: knowing that an agent really likes pizza, but is seen taking a path that wouldn't efficiently lead them there. All of these inferences fall out of the same underlying generative model, which unifies these distinct observations about people and the world in language with respect to a formal model of how agents tend to behave.
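As a sketch of this inverse direction, the pizza example might be expressed as a query that conditions on a stated preference and an observed first action (the utility threshold, action symbol, and start_state are assumptions):

```scheme
;; "Gabe really likes pizza, but was seen biking East:
;;  is the pizza place open?"
(rejection-query
  (define gabes_plan (plan 'gabe start_state))
  (is_open 'pizza)                                ;; the latent query
  (and (> (restaurant_utility 'gabe 'pizza) 5)    ;; "really likes pizza"
       (equal? (first gabes_plan) 'bike_east)))   ;; the observed action
```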
Future directions: Scaling integrated world models for planning and inference. The plan function in our example implements a very simple but model-based planner: it computes actions based on an underlying, structured model of the world. In comparison to the other domains in this paper, linguistic planning and social reasoning have received perhaps the most attention in recent work, in part because complex reasoning about other agents (Ullman, 2023) and precise general planning tasks (Bubeck et al., 2023; Valmeekam et al., 2023) appear to pose outstanding challenges for even the largest current language models. Recent work has sought to interface large language models with classical planning languages and symbolic planners (e.g., Collins, Wong, Feng, Wei, & Tenenbaum, 2022; Ding, Zhang, Paxton, & Zhang, 2023; B. Liu et al., 2023; Xie et al., 2023), as well as general-purpose programming languages used to express code-based policies (G. Wang et al., 2023). All of these approaches suggest directions for scaling the simple planning implementation we show here: our goal is to show how classical planning approaches can be nested within and integrated into probabilistic generative models to support a range of complex reasoning about other agents, inferring their goals and actions from information in language.
Collectively, the broader cognitive science and AI planning literature suggests many directions for scaling up this model towards more of the nuance in human social reasoning. Some of the most important include planners and planning languages designed to express explicit and discrete goals, like wanting to be at the highest-rated pizza place within a half-mile radius or trying to get a plate of sushi for under ten dollars, rather than continuous values and utilities (G. Davidson, Gureckis, & Lake, 2022; Fikes & Nilsson, 1971; D. McDermott, 1982; D. M. McDermott, 2000; Pednault, 1989); planners that model explicit uncertainty about the world itself, like agents who don't know whether a restaurant is open or closed until they get there (C. Baker et al., 2011; Kaelbling & Lozano-Pérez, 2013; Zhi-Xuan, Mann, Silver, Tenenbaum, & Mansinghka, 2020); hierarchical planners that recursively turn goals into more specific subgoals to account for
plans over longer timescales, at differing levels of abstraction (Kaelbling & Lozano-Pérez, 2011); and recursive models of agents who are themselves thinking about other agents, such as models of two people trying to meet up at a restaurant that they think will satisfy both of
them, or where they might be most likely to find the other (C. Baker et al., 2011; Krafft, Baker, Pentland, & Tenenbaum, 2016; S. A. Wu et al., 2021). Each of these could allow this paradigm to ground richer and more nuanced descriptions of agents, and the inferences we draw from this language.
Conclusions. Together with the previous sections on vision and physics, our approach to grounding language about social agents highlights the more general computational account suggested by our framework. By translating language into probabilistic programs, language can construct, describe, and drive inferences over our internal world models. These may in turn incorporate many more specific computational engines (modeling how scenes are visualized, how physics unfolds in the world, or how agents plan towards their goals) as modular interfaces that can be called upon in a general probabilistic language of thought.
# 4 Growing and constructing world models from language
In Section 3, we illustrated how domain theories expressed in a probabilistic language-of-thought can provide flexible and powerful scaffolding for language understanding. In each domain, generative world modeling programs provided a unified substrate for defining a structured domain model and representing the meanings of sentences. But where do these world models come from? If we want our PLoT account of language understanding to scale beyond the knowledge that can be hand-coded by a programmer, we need to provide some account of how such a system might acquire new concepts and domain theories.
One of the hallmarks of human communication is our ability to teach each other fundamentally new concepts in language. We coin new words, define interrelated conceptual systems, and describe entirely new world models, explaining the abstract underlying structure of whole domains. Because language spans so many aspects of human thought, it is perhaps a uniquely powerful tool for structuring learning. In language, we can define new concepts and domains that are integrated into our inferences, relational reasoning, understanding of the visual and physical world, and goals and plans.
How do we learn new concepts and world models from language? And how can we build computational systems that can be taught in language as we teach each other? In this section, we showcase the extensibility of the framework we have proposed as a unified model for relating language to thinking. Because world models in a PPL are expressed as programs, the same core computational components can be used to extend and construct world models themselves from language. In Section 4.1, we show how we can extend an existing domain model with new lexical concepts. Then, in Section 4.2, we turn to language that communicates an entire background domain model from scratch. Through these simple explorations, we aim to point towards a near-term horizon where systems might construct rich and nuanced probabilistic models to make sense of their linguistic environments and the broader world around them.
# 4.1 Growing a world model from language
How can we enrich our world models with concepts learned from language? Let's consider, for instance, the kinship domain model used in the relational reasoning example in Section 3.1. The probabilistic program used in this example described a basic generative model over family trees, and then defined a handful of primitives, such as concepts for grandparent and sibling. But most people know and can talk about many more kinship relations than those included in that simple example, such as uncles, aunts, and cousins. What happens when we use language that invokes one of these undefined concepts?
Condition: Avery is Blake's uncle.
(condition (exists (lambda (x) (and (sibling-of? x 'avery) (parent-of? x 'blake)))))
The LLM proposes an initial translation that includes some of the important components in the concept of an "uncle." However, several key details are not quite right: an uncle should be the brother of Blake's parent, not just a generic sibling. Moreover, an uncle can come from outside the bloodline, in which case this definition would not fit. Much like a person learning English, the LLM has a partial notion of this concept, but could benefit from more explicit instruction from a knowledgeable teacher. In this section, we introduce a new define construct that does just this by prompting the LLM to generate a new definition from language.
Define: An uncle is the brother of one's father or mother, or the husband of one's aunt.
(define (uncle-of? name_a name_b)
  (or (exists (lambda (x) (and (brother-of? name_a x) (parent-of? x name_b))))
      (exists (lambda (x) (and (husband-of? name_a x) (aunt-of? x name_b))))))
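With uncle-of? now part of the world model, language that invokes the concept can be grounded in it directly. As a minimal sketch (following the same conditioning pattern used throughout Section 3), the original observation could now be translated as:

;; Sketch: conditioning directly on the newly defined concept.
(condition (uncle-of? 'avery 'blake))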
We've used define to fill in a bit of common knowledge that was missing from our conceptual system. But the mental frameworks we use to reason about the world are constantly undergoing conceptual change, both at an individual and a societal level (Carey, 1999; Posner, Strike, Hewson, & Gertzog, 1982). For instance, shifts in cultural notions of gender and identity have introduced new kinship terms into English. One of the hallmarks of language is the ease with which we can coin and communicate new concepts, like the following:
"Pibling" is a gender-neutral term for "aunt" or "uncle" that refers to the sibling of one's parent.
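Passed through the same define construct, this sentence yields the translation shown in Fig. 12, which composes concepts already present in the model:

(define (pibling-of? name_a name_b)
  (or (uncle-of? name_a name_b)
      (aunt-of? name_a name_b)))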
Finally, as we touched on in Section 3.1, kinship systems vary widely; certain cultures have kinship concepts that are more granular than those found in English. For instance:
In the language of the Northern Paiute, a group of peoples indigenous to the Great Basin region of the US, "pāan'i" refers specifically to the sister of one's father.⁵
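As Fig. 12 shows, this sentence translates into a definition built from the model's existing sister-of? and father-of? primitives:

(define (paani-of? name_a name_b)
  (exists (lambda (x) (and (sister-of? name_a x)
                           (father-of? x name_b)))))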
From this definition, we can incorporate the concept of a pāan'i into our growing set of kinship concepts. Our framework elegantly captures this ability to learn new concepts in language that we can then use productively to construct new sentences and reason about coherently against the background of our existing world knowledge. Here, we walk concretely through how the basic components of our framework are combined to grow the original kinship model with new concepts.
Linguistic meanings as program expressions. Much as we interpreted observations as program expressions that conditioned an existing world model, and questions as program expressions that queried it, a sentence like "The term 'pāan'i' refers to the sister of one's father" can be modeled as a program expression that defines a new such primitive relation, paani-of?. The examples in Fig. 12 show how the semantics of this sentence, along with the other kinship concepts introduced in the introduction to this section, can be similarly understood as expressions that define new conceptual primitives. These expressions are particularly interesting because they are defined in terms of other concepts, like sister-of? and father-of?, that make up this conceptual system. In this way, our treatment of concept learning is closely linked to the idea of a conceptual role semantics (Block, 1998; Field, 1977; Greenberg & Harman, 2005; Harman, 1982), in which concepts (including lexical concepts) derive meaning from their interrelated roles and relationships to other concepts. In these examples, interpreting these sentences as program expressions defined over the base generative model showcases the flexible role that the generative modeling program can play, in relation to language about the domain.
While our example showcases simple relational definitions over the underlying world model, it is worth noting that these are not the only kinds of functional definitions that we could learn to extend a world model from language. This general approach can be used to make meaning from sentences that grow an underlying world model in other ways, such as by defining new random variables (like phenotypic eye colors or other inherited traits) that extend the probabilistic generative model.
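As a minimal sketch of this second kind of extension (illustrative only: the trait values and inheritance probability here are assumptions, not part of the original model, though parents-of is an existing model primitive), a sentence about inherited eye color might introduce a new memoized random variable over the family tree:

;; Sketch: a new inherited random variable layered onto the generative model.
;; Assumes parents-of returns an empty list for founders of the tree.
(define eye-color-of
  (mem (lambda (person-id)
    (if (null? (parents-of person-id))
        (uniform-draw '(brown blue green))   ;; founders draw from a base distribution
        (if (flip 0.9)                       ;; children usually inherit a parent's color
            (eye-color-of (uniform-draw (parents-of person-id)))
            (uniform-draw '(brown blue green)))))))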
Translating with a language-program distribution. While the meanings of these sentences play a different role in our framework (they extend the world modeling program, rather than condition or query it), they are still program expressions. Therefore, with minor adjustments, we can use the same language-to-code LLM approach to ground these new concepts in our world model. To derive each of the translations shown in Fig. 12, we feed the LLM the same prompt as in Section 3, which includes the existing generative model and example translations. The final line of the prompt begins with Define: and contains the language describing the new concept definition. Each sentence is then translated into the new define statements which construct new conceptual kinship primitives. In sum, linguistic definitions are simply another kind of program expression we can translate into from language.
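Schematically, the prompt might be laid out as follows (an illustrative sketch of the format described above, not the verbatim prompt used in our experiments):

;; ... existing generative world model code (as in Fig. 12A) ...
;; ... example language-to-code translations from Section 3 ...
;; Define: An uncle is the brother of one's father or mother, or the
;; husband of one's aunt.
;;   -> (define (uncle-of? name_a name_b) ...)
;; Define: "Pāan'i" refers specifically to the sister of one's father.
;;   -> the LLM completes the corresponding (define (paani-of? ...) ...)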
⁵ At the time of writing, a Google search for the term "pāan'i" yielded zero results. The term itself was pulled from a non-searchable table in a century-old manuscript (Lowie, 1930). As far as real-world kinship terms go, it is comparatively unlikely, though not impossible, that "pāan'i" was part of Codex's pretraining data.
[Figure 12 appears here; panels: (A) existing generative world model, (B) defining new concepts via language-to-code translation, (C) extended world model with uncle-of?, pibling-of?, and paani-of?, (D) grounding new language in the learned concepts.]
Figure 12: Extending the kinship world model with linguistic descriptions of kinship relations drawn from contemporary English and a low-resource language (Northern Paiute). A language-to-code LLM is prompted with (A) the existing generative model code and (B) language describing novel kinship relations to produce new concept definitions in Church. The extended world model (C) now supports probabilistic reasoning from language that contains these new concepts (D).
Growing the domain model with new program expressions. Finally, by incorporating the meanings of sentences like "The term 'pāan'i' refers to the sister of one's father" back into the domain model itself, we have formalized a simple approach for enriching a world model with concepts learned from language. Each sentence shown in Fig. 12 is translated into a program expression that defines a new relational function which extends the set of conceptual primitives that comprise the extended kinship domain.
The more general principle here is not limited, of course, to kinship concepts. We could extend any of the domain models in each of our previous examples with new concepts learned from language. For example (a translation sketch for the first of these follows the list):
• In tug of war, the strongest person on a team is referred to as the "anchor".
• A "monochrome" scene is one in which every object is the same color.
• On "National Restaurant Day", all the restaurants in town are guaranteed to be open.
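For instance, the first example might translate into a definition like the following (a minimal sketch, assuming the tug-of-war model from Section 3 exposes a per-person strength function; strength and the team argument are illustrative assumptions, while member? is an existing model primitive):

;; Sketch: the strongest person on a team is its "anchor".
(define (anchor-of? person team)
  (and (member? person team)
       (all (map (lambda (p) (<= (strength p) (strength person))) team))))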
Our proposal in this section is closely related to other work which formalizes the learning of new concepts as the learning of new program components, such as program synthesis systems that bootstrap a growing library of domain-specific concepts constructed out of an initial programming language (Bowers et al., 2023; Dechter, Malmaud, Adams, & Tenenbaum, 2013; Ellis et al., 2020); work that formalizes the learning of new concepts from language as the learning of new program primitives (Shin, Brockschmidt, Allamanis, & Polozov, 2018; Sumers, Hawkins, Ho, Griffiths, & Hadfield-Menell, 2022; C. Wong, Ellis, Tenenbaum, & Andreas, 2021); and semantic parsers that bootstrap lexicons of compositional word meanings, defined in a formal logical language, for interpreting new sentences (Artzi, Das, & Petrov, 2014; Cai & Yates, 2013; Kwiatkowski, Zettlemoyer, Goldwater, & Steedman, 2011). | 2306.12672#168 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
The framing we describe here showcases the tight integration between language, meanings, and the probabilistic programs that form the formal substrate for modeling the world in our framework. Language that specifies new parts of a world model can be cleanly interpreted as program expressions, which are used to extend the generative world modeling program itself. These generative models in turn provide the basis for reasoning about new observations that build on these learned and structured bodies of conceptual knowledge. Returning to the themes of our introduction, human-like thinking, under the broader computational approach we take throughout this paper, is formalized as probabilistic programming and inference over probabilistic programs. This is how we construct models of and reason about the world. Language, then, is an especially powerful tool for constructing programs of all kinds: ones that condition and query existing world models, and ones that actually construct and extend the flexible domain models themselves that undergird linguistic meaning and thought.
# 4.2 Constructing new world models from language
So far, we have assumed that language understanding happens in the context of a particular world model appropriate for the situation at hand, containing definitions of key concepts like sibling for kinship reasoning, or strength for reasoning about playground games. We have now seen how these models can be extended with new lexical definitions on the fly, but the question remains of where these background world models come from in the first place. The full answer to this question is likely complex: people learn about the world in all sorts of ways. But in some settings, people do seem to acquire new world models largely through language: we read the rules of new games, are taught the workings of machines, and take classes on the causal structure of many other complex systems (the human body, the solar system, the government). In this section, we broaden our scope beyond language that conveys new concepts that extend an existing domain model to consider how language can define entire new domain models from scratch.
As a concrete example, let's return to the scenario from Section 2.2. Suppose your friend is telling you about a tug-of-war tournament that took place the prior weekend; only this time, you've never heard of tug-of-war before and don't know how it's played. Your friend might explain the scenario to you using language; indeed, their description might sound similar to the one our paper itself uses to convey the concepts of this particular situation:
Tug-of-war is a game played between teams of players. First, strength levels vary widely from person to person. Furthermore, each person has a percentage of the time that they are lazy. The strength of a team is the combined strength of its members, except that in any given match, each player may decide to be lazy, and thus contribute only half of their strength. Whether one team beats another just depends on which team pulls stronger that match.
Given this language, you can learn the underlying domain model necessary to reason about future observations (Even working as a team, Lio and Alex could not beat Josh) and answer questions (How strong is Josh?). In this section, we explore how the components of our framework can be used to construct an entire domain model as it is communicated in language, using the tug-of-war domain as an illustrative example.
Linguistic concepts as program expressions. Considering the vignette above, we might distinguish between two kinds of statements in your friend's description of tug-of-war:
⢠Some statements introduce new concepts solely in terms of previously introduced concepts (e.g., Whether one team beats another just depends on which team pulls stronger that match).
⢠Other statements posit the existence of new primitive concepts, like strength and laziness, that have certain properties (e.g., Strength levels vary widely from person to person). | 2306.12672#172 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
The first case is similar to the sentences we saw in Section 4.1, and we can interpret them as language-of-thought definitions. The second case, however, is genuinely new: these sentences neither define new words in terms of an existing domain theory, nor encode predicates over possible worlds. Rather, they define random variables that we expect to have different values in each possible world.6 In Church, such variables can be defined using mem: for example,
(define strength (mem (lambda (person) (normal 100 20))))
declares that expressions of the form (strength person) are well-formed and evaluate to a number in each possible world, and that our prior distribution for a new person's strength is a Gaussian centered at 100. (The mem construct memoizes the defined function, so that repeatedly evaluating (strength 'lio) in the same world will always give the same result.) It might seem strange to claim that the meaning of the sentence "Players have different strength levels" includes a specific prior over player strengths, like (normal 100 20). We do not make this claim: rather, the meaning function induces a distribution over possible definitions of strength, each of which uses a different prior. What the different possible translations have in common is that they model strength as a continuous variable assigned on a per-player basis, with some population-level variation. See Footnote 6 for further discussion of this distribution, and how it might arise from the literal meaning of the sentence being translated.
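To make the effect of mem concrete, here is a small illustrative contrast (a sketch of our own, not drawn from the paper's figures): a memoized random function returns one persistent draw per argument within a world, while an unmemoized one resamples on every call.

;; Memoized: one draw of strength per person per possible world.
(define strength (mem (lambda (person) (normal 100 20))))
;; Unmemoized: a fresh draw on every call.
(define noisy-reading (lambda (person) (normal 100 20)))

(= (strength 'lio) (strength 'lio))           ;; #t
(= (noisy-reading 'lio) (noisy-reading 'lio)) ;; almost surely #f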
Translating new concepts from language. As before, because each sentence denotes a distribution over program fragments in a probabilistic language, we can use probabilistic language-to-code translation models like Codex as models of the meaning function. In Fig. 13, we prompt Codex with an unrelated example world model in a domain about diseases and symptoms, and then ask it to translate sentences defining the tug-of-war domain.
6 An alternative perspective is that the sentences we consider in this section, both straightforward definitions and sentences introducing new primitive concepts, do still encode predicates on possible worlds. According to this viewpoint, a sentence like "The term 'uncle' refers to the brother of one's parent, or the husband of one's aunt" is an assertion that can be true or false; maybe uncle means something different in another possible world. To understand this viewpoint within our framework, we need to imagine that there is a background world model that models uncertainty about the code of a first-order world model (which definitions exist, and how they are defined). If we had such a model over world models, then sentences like "Players have different strength levels" could be interpreted as conditioning statements, observing that strength exists as a variable and that its value should vary from person to person. Conditioning on this constraint, we could then sample from the posterior over world models that satisfy this property. In this posterior, there would be some uncertainty over exactly how strength is modeled: e.g., does it vary according to a Gaussian distribution, and if so, with what parameters?
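To give a flavor of that residual uncertainty, one rough sketch (our own; the hyperprior choices are assumptions, not from the paper) places priors over the parameters of the strength prior itself, so that each sampled world model fixes a different candidate definition of strength:

;; Sketch: hypothetical hyperpriors expressing uncertainty over how strength is modeled.
(define strength-mean (normal 100 30))
(define strength-spread (uniform 5 40))
(define strength
  (mem (lambda (person) (normal strength-mean strength-spread))))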
We find this view appealing, and believe that making it practical would be an intriguing technical challenge, requiring new developments in the field of Bayesian probabilistic program synthesis (Saad, Cusumano-Towner, Schaechtle, Rinard, & Mansinghka, 2019). In this section, we take a shortcut, assuming that the meaning distribution induced by a sentence like "Players have different strength levels" directly samples model fragments consistent with the statement. That is, we ask our meaning function to amortize inference in the hierarchical model, directly proposing code defining strength, rather than first translating to a conditioning statement about strength existing, and then using a slower inference algorithm to infer its definition.
2306.12672 | 178 | A. Prompt, containing unrelated example world model 3; We define a probabilistic model in Church of the following scenario. 3; At any given time, about 1% of the population has lung cancer, 3) 20% have a cold, 10% have a stomach flu, and 0.5% have TB. (define lung-cancer (mem (lambda (person) (flip 0.01)))) (define cold (mem (lambda (person) (flip 0.2)))) (define stomach-flu (mem (lambda (person) (flip @.1)))) (define TB (mem (lambda (person) (flip @.005)))) 3; If you have a cold, there's a 50% chance you have a cough. 3} 30% of people with lung cancer have a cough, and 70% with TB. 3; There's also a small chance you have a cough even if you're otherwise healthy. (define cough (mem (lambda (person) (or (and (cold person) (flip 0.5)) (and (Llung-cancer person) (flip 0.3)) (and (TB person) (flip @.7)) (flip @.01))))) 3; Whether a person coughs during a particular visit to the doctor's office 3; depends on whether they have | 2306.12672#178 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
2306.12672 | 179 | @.7)) (flip @.01))))) 3; Whether a person coughs during a particular visit to the doctor's office 3; depends on whether they have a cough, and a bit of random chance. ;; Note that this will differ each time they go to the doctor's office, so 3; we do not use âmemâ (which memoizes the result). (define coughs-on-particular-visit (lambda (person) (and (cough person) (flip 0.7)))) B. Defining a new world model from scratch via language-to-code translation 3; Now, let's define a different probabilistic model of the following scenario. 3; It is totally unrelated to the previous model and does not reference the functions above. = (define strength (mem (lambda (person) (normal 10@ 20)))) Furthermore, each person has a percentage of the time that they are lazy. = (define laziness (mem (lambda (person) (uniform @ 1)))) The strength of a team is the combined strength of its members, except that in any given match, each player may decide to be lazy, and thus contribute only half of their strength. (define team-strength (lambda (members) (apply + (map (lambda (member) = (if | 2306.12672#179 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
Sometimes, players decide to be lazy, and thus contribute only half of their strength.

```scheme
(define team-strength
  (lambda (members)
    (apply + (map (lambda (member)
                    (if (flip (laziness member))
                        (/ (strength member) 2)
                        (strength member)))
                  members))))
```

Whether one team beats another just depends on which team pulls stronger that match.

```scheme
(define team-beats-team
  (lambda (team1 team2)
    (> (team-strength team1) (team-strength team2))))
```
Figure 13: Constructing the tug-of-war model from scratch. This can be accomplished with the same overarching language-to-code approach. (A) We provide a prompt containing one or more unrelated world models as examples. (In this case, the world model defines a medical diagnosis domain.) (B) Prompted line-by-line with language explaining the tug-of-war, Codex constructs a generative model from scratch that is semantically equivalent to the one from Section 2.2 (modulo some superficial naming and parameter choices).
Constructing the domain model from new program expressions. By translating each sentence of a domain description in sequence, we can build a domain model just as rich as the ones we hand-coded in earlier sections, starting with no definitions beyond those built into Church. In Fig. 13, although the specific priors may vary slightly, Codex recovers all the essential structure of our hand-coded tug-of-war model. Once we have a new domain model, we can immediately begin interpreting observations and queries, like those in Section 2.2, or continue to extend the domain model with new definitions.
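To make this concrete, here is a minimal sketch of such a query in Church, assembled from the generated definitions above; the priors on strength and laziness and the player names are illustrative placeholders, not the exact choices produced by Codex:

```scheme
(rejection-query
 ;; Illustrative priors -- placeholder choices, not Codex's exact output.
 (define strength (mem (lambda (player) (abs (gaussian 50 20)))))
 (define laziness (mem (lambda (player) (uniform 0 1))))
 (define team-strength
   (lambda (members)
     (apply + (map (lambda (member)
                     (if (flip (laziness member))
                         (/ (strength member) 2)
                         (strength member)))
                   members))))
 (define team-beats-team
   (lambda (team1 team2)
     (> (team-strength team1) (team-strength team2))))
 ;; Query expression: Bob's latent strength ...
 (strength 'bob)
 ;; ... conditioned on observing that Bob and Mary beat Tom and Jim.
 (team-beats-team '(bob mary) '(tom jim)))
```

Each call to `rejection-query` returns one posterior sample; wrapping it in `repeat` approximates the full posterior over Bob's strength, which should shift upward relative to the prior after observing the win.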
Putting it together: Growing and constructing world models from language. In this section, we've illustrated how the same basic building blocks used in the rest of the paper (language-to-code translation and probabilistic programs) can be used to extend and construct new world models. Hopefully, these simple sketches highlight a much deeper point: systems that can author world models in a universal programming language like Church can take advantage of the infinite expressivity of code to generalize to new kinds of language and thinking.
Nevertheless, the examples presented in Section 4 were limited to cases where there was an explicit connection between linguistic instructions and the resulting probabilistic programming expressions. In reality, this relationship is often indirect; language typically provides only clues about how to think about a situation. In still other instances, we assemble world models in the absence of language, drawing instead on prior experience of similar situations. How can we build systems that learn to build world models on the fly? How can such systems remember and expand on prior world models to understand new situations? And how can they incorporate not just language, but the full spectrum of experiences in the world? In Section 5, we consider these questions as part of a discussion of the many future research directions needed to scale our framework to a general model of cognition.
# 5 Open questions and future directions
By using neural models to translate sentences into probabilistic programs, the sections above demonstrated how LLMs could extract meaning from, and inference engines could reason about, language describing uncertain situations, relational structures, embodied situations, and goal-directed reasoning. However, these vignettes also leave open many questions about how to scale this framework to more complex language, and how to automate the process of building meaning representations for new domains. Together, these questions offer a roadmap for progress on central challenges in modeling language, reasoning, and their interaction, across many sub-fields of artificial intelligence and cognitive science.
# 5.1 Scaling models of rational meaning construction
We begin by describing several of the most important research directions necessary for scaling the framework we have articulated throughout this paper towards a more complete model of integrated cognition and language understanding.
# 5.1.1 Building new world models on the fly
A key aspect of our proposed architecture is that language is interpreted relative to a probabilistic model of a domain, capturing just enough structure to represent the situation at hand. In Section 4.2, we saw that LLMs could generate these programmatic world models, assuming the model was communicated via a sequence of natural language definitions. But people rarely need such elaborate scene-setting: we can understand language about the world even if no teacher has carefully drawn our attention to the relevant concepts beforehand. A key question is how to model this capability. How do minds craft bespoke world models on the fly, drawing in just enough of our knowledge about the world to answer the questions of interest? How does this process balance competing priorities, such as fidelity to what we know about the world, relevance to the problem at hand, and the efficiency and robustness of inference? These tradeoffs can sometimes seem to evolve over the course of a single chain of human thought.

These questions are related to the classic frame problem (McCarthy, 1980) in artificial intelligence and cognitive science, and to recent proposals for addressing it in the setting of causal, probabilistic reasoning (Icard & Goodman, 2015).
These approaches view the problem as one of retrieval: from a vast array of knowledge we have about the world, how can we select just the relevant parts for reasoning about a particular problem? It remains unclear, however, whether the sequences of bespoke models and approximate inferences produced by our minds can be understood as resource-rational approximations to coherent reasoning and planning in some larger, unifying world model, even in principle.
Most probabilistic programming languages were designed for inference in a single, unifying world model (Bingham et al., 2019; Carpenter et al., 2017; Goodman et al., 2008; Milch et al., 2007) that was written by an external mechanism, not to dynamically explore a sequence of probabilistic programs that are being synthesized, learned, and/or edited on the fly. But some progress in language-level support for dynamic world modeling has already been made. Probabilistic programs in Gen (M. F. Cusumano-Towner, Saad, Lew, & Mansinghka, 2019) have been used to synthesize and edit other probabilistic programs (Saad et al., 2019; Witty, Lew, Jensen, & Mansinghka, 2019), and to approximate globally coherent inferences by bridging across sequences of probabilistic programs describing translations among only partially-overlapping worlds (M. Cusumano-Towner, Bichsel, Gehr, Vechev, & Mansinghka, 2018; M. Cusumano-Towner, Lew, & Mansinghka, 2020; A. K. Lew, Matheos, et al., 2023; V. K. Mansinghka et al., 2018).
Analogous language-level support for dynamic abstraction for planning with symbolic world models has also been developed (Zhi-Xuan, 2022). It remains to be seen to what extent these new degrees of freedom can be exploited by language-to-code models targeting these newer probabilistic programming platforms.
How could the common-sense background knowledge needed for dynamic world model synthesis be represented, even in principle? Modern game engines may provide important clues. They can be reconfigured and scripted to simulate diverse imaginary worlds and narratives, featuring interactions between physical objects and goal-directed agents in both realistic and physically impossible environments. They routinely combine simulations of the same environment at multiple levels of detail, making computational tradeoffs that are in some ways analogous to the tradeoffs faced by human thinking. The level of scale, coherence, realism, and computational efficiency that they achieve still vastly outstrips the best multi-modal neural models.
Although some progress is already being made by synthesizing lightweight, probabilistic game-engine scripts using language-to-code models (C. E. Zhang, Wong, Grand, & Tenenbaum, 2023), many fundamental challenges remain. Game engines lack crucial affordances for robustly fitting world models to sparse data, simulating rare events, and planning under uncertainty. And despite promising progress in neurally-guided program learning (Ellis et al., 2020), showing that libraries and DSLs can be learned from sparse data, there seems to be a long way to go before we can learn game-engine-like rules that are sufficient to robustly model common sense. Flexible synthesis and learning mechanisms that can hope to scale across the vast scope of human thought thus seem to require new ideas that span and integrate probabilistic programming, cognitive architecture, and hierarchical program learning.
# 5.1.2 Scaling probabilistic inference in dynamically synthesized world models
A central challenge not addressed by this paper is how to scale probabilistic inference to begin to approach the robustness, speed, efficiency, and flexibility of human thought. Consider that the rejection sampling algorithm used in Sections 3 and 4 requires an exponentially growing number of proposal attempts as the scenario becomes less likely under the prior. Although many exact inference methods for probabilistic programs are much faster and more reliable, they are too restrictive to support many of the world models in this paper (Gehr, Misailovic, & Vechev, 2016; Gehr, Steffen, & Vechev, 2020; Holtzen, Van den Broeck, & Millstein, 2020; Saad, Rinard, & Mansinghka, 2021; Shan & Ramsey, 2017). And although there are many approaches to generic approximate inference in probabilistic programs, drawing on MCMC (Carpenter et al., 2017; Goodman et al., 2008; Wingate, Stuhlmüller, & Goodman, 2011), sequential Monte Carlo (V. Mansinghka et al., 2014; Tolpin, van de Meent, Yang, & Wood, 2016), and variational methods (Bingham et al., 2019), scaling these generic strategies to the dynamically synthesized world models considered here remains an open problem.
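The exponential cost of rejection sampling noted above can be made precise: conditioning on an event E by rejection means resampling from the prior until E happens to hold, so the number of attempts is geometrically distributed in P(E). A back-of-envelope illustration (ours, assuming k roughly independent observations with prior probabilities p_i):

```latex
\mathbb{E}[\text{attempts}] \;=\; \frac{1}{P(E)},
\qquad
P(E) \;\approx\; \prod_{i=1}^{k} p_i
\quad\Longrightarrow\quad
\mathbb{E}[\text{attempts}] \;\approx\; \prod_{i=1}^{k} \frac{1}{p_i}.
```

Each additional low-probability observation multiplies the expected cost, which is the exponential growth referred to above.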
One potential way forward is to explicitly generate models of thinking processes that augment the world models with which they are thinking, by synthesizing inference programs (M. F. Cusumano-Towner et al., 2019; V. K. Mansinghka et al., 2018) tailored to specific problems. For example, Venture's inference meta-programming language is designed to enable concise specification of sequential inference processes that combine SMC, dynamic programming, MCMC, gradient-based optimization, and variational inference to perform inference in a sequence of world models and queries that grows dynamically. Data-driven proposals for use with these thinking strategies can also be generated in real time, without any offline learning, using dynamic programming over blocks of highly coupled variables. This approach has recently outperformed machine learning methods on hard common-sense reasoning problems in databases with millions of records (A. Lew, Agrawal, Sontag, & Mansinghka, 2021). Scaling this approach will require not just synthesizing world models but automatically analyzing and decomposing them, analogously to how inference algorithm designers decompose large inference problems into sequences of more tractable subproblems.
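To give a flavor of what such an inference program might look like, here is a schematic sketch; every inference operator below (`smc-init`, `smc-step`, `mh-rejuvenate`) is a hypothetical placeholder standing in for real inference combinators, not an actual Venture or Gen API:

```scheme
;; Schematic inference program. All inference operators here are hypothetical
;; placeholders, not real Venture/Gen primitives.
(define (think model observations)
  ;; Start a particle population on the first observation.
  (define particles (smc-init model (first observations) 100))
  (for-each
   (lambda (obs)
     ;; Sequential Monte Carlo step: reweight and resample on new evidence.
     (set! particles (smc-step particles obs))
     ;; Rejuvenation: a few MCMC moves per particle to restore diversity.
     (set! particles (mh-rejuvenate particles 10)))
   (rest observations))
  particles)
```

The point of the sketch is the shape of the computation: inference is itself a program that sequences and mixes strategies (SMC steps, MCMC rejuvenation, and so on) as the world model and the evidence grow.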
Another promising approach is to train neural networks to make data-driven proposals via amortized inference, potentially using synthetic data from an open-ended simulator of world models and queries (M. Wu & Goodman, 2022). This can be seen as an alternative to inference programming, avoiding the need for explicit symbolic analysis of the process of thought. It can also be seen as a potential technique by which inference programs might eventually be synthesized, once a suitable training corpus can be generated synthetically, as well as a source of data-driven proposals that can be recombined by inference programs.
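A minimal sketch of this amortization loop, again over an assumed toy world model of our own choosing, with a one-parameter linear fit standing in for a trained neural network: forward-simulate (latent, data) pairs from the simulator, then fit a fast recognition model that maps data directly to a guess about the latent.

```python
import random

def simulate():  # world model: mu ~ N(0, 1); each x_i ~ N(mu, 1)
    mu = random.gauss(0.0, 1.0)
    data = [random.gauss(mu, 1.0) for _ in range(3)]
    return mu, sum(data) / len(data)  # latent and a summary of the data

pairs = [simulate() for _ in range(10_000)]

# Least-squares fit of mu ~ a * mean(data); for this conjugate toy model it
# recovers the Bayes-optimal shrinkage factor n / (n + 1) = 0.75, so purely
# synthetic simulation data trains a posterior-consistent recognition model.
num = sum(mu * xbar for mu, xbar in pairs)
den = sum(xbar * xbar for _, xbar in pairs)
a = num / den

def amortized_posterior_mean(data):  # fast, feed-forward "inference"
    return a * (sum(data) / len(data))

print(a, amortized_posterior_mean([1.2, 0.8, 1.5]))
```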
# 5.1.3 Resource rational amortization in meaning construction and problem solving
In some of our examples, a sentence (e.g., "Gabe is stronger than Josh") is translated to a meaning representation that looks very much like its classical formal semantics, composing the literal meanings of each word in the sentence. But in other examples (e.g., "several of the faculty are real slackers"), the translations appear to incorporate complex contextual and pragmatic judgments, judgments that might otherwise have been arrived at via probabilistic inference in a model of speakers, listeners, and their intents (Goodman & Frank, 2016). This raises the question of where to draw the line between translation and inference. Versions of this question have been extensively studied (e.g., does a word like "some" imply "not all" as part of its meaning, or does this implicature arise via after-the-fact pragmatic reasoning (Tessler, Tenenbaum, & Goodman, 2022)?), and some past work has offered a unifying view via theories of amortized pragmatics (White et al., 2020), whereby RSA-style inferences are "compiled down" into new word meanings.
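As a concrete illustration of the inference that amortized pragmatics would compile away, here is a minimal RSA-style sketch of the scalar implicature for "some". It is our own toy instantiation (a uniform prior over four worlds, speaker optimality of 1, and no utterance costs), not the exact setup of the cited models: the pragmatic listener, hearing "some", concludes "probably not all", even though the literal meaning of "some" is compatible with "all".

```python
WORLDS = [0, 1, 2, 3]  # how many of 3 objects have the property
UTTS = {"none": lambda w: w == 0, "some": lambda w: w >= 1, "all": lambda w: w == 3}

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def L0(u):  # literal listener: condition a uniform prior on the literal meaning of u
    return normalize({w: float(UTTS[u](w)) for w in WORLDS})

def S1(w):  # pragmatic speaker: prefer true utterances that are informative about w
    return normalize({u: L0(u)[w] for u in UTTS if UTTS[u](w)})

def L1(u):  # pragmatic listener: invert the speaker model
    return normalize({w: S1(w).get(u, 0.0) for w in WORLDS})

print(L1("some"))  # {0: 0.0, 1: 0.44, 2: 0.44, 3: 0.11}: "some", and probably not all
```

An amortized-pragmatics account would cache the result of this computation directly in a new lexical entry for "some".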
A key feature of our architecture is that it is largely agnostic to where exactly the boundary should lie, and as such could help to model and extend this process of amortized inference in language understanding. For example, as expanded on below, we could extend our symbolic world models to include aspects of the language understanding process itself (such as those described in symbolic derivations of semantics (Heim & Kratzer, 1998; Montague, 1970; Pollard & Sag, 1994; Steedman, 2001, 2011), and those used explicitly to compute its pragmatic interpretations (Fox, 2007; Goodman & Frank, 2016)). Symbolic inferences about meanings could then be used to train the language understanding module to directly generate the results of this symbolic inference process, for use either as a fully amortized pragmatic translator, or as a proposal distribution within a larger Monte Carlo algorithm that could score and reject inaccurate translations.
In addition to making aspects of translation symbolic, we could consider approaches to amortizing the more general probabilistic inferences required to answer queries. By supervising "translation models" directly with the final outputs of symbolic inference, across a wide variety of tasks, we could enable a pure neural inference mode for these systems that may overcome some limitations of models trained only on language and code. As described above, such supervised models could also be incorporated as proposal distributions in posterior sampling algorithms, leading to improved efficiency without sacrificing the ability to correct for learned biases that may be inapplicable when tackling novel problems.
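The following sketch shows one form this bias correction can take, using self-normalized importance sampling on a toy Gaussian model: an amortized model proposes hypotheses, and the symbolic world model re-weights them, so systematic errors in the learned proposal are corrected rather than inherited. The deliberately miscalibrated `amortized_proposal` is a hand-written stand-in for a trained recognition network.

```python
import math
import random

def log_joint(mu, data):  # symbolic world model: mu ~ N(0, 1); each x_i ~ N(mu, 1)
    return -0.5 * mu * mu - 0.5 * sum((x - mu) ** 2 for x in data)

def amortized_proposal(data):  # miscalibrated fast guess, standing in for a network
    m = 0.6 * sum(data) / len(data)
    mu = random.gauss(m, 0.7)
    logq = -0.5 * ((mu - m) / 0.7) ** 2 - math.log(0.7 * math.sqrt(2 * math.pi))
    return mu, logq

def posterior_mean(data, n=2000):  # self-normalized importance sampling
    samples, logws = [], []
    for _ in range(n):
        mu, logq = amortized_proposal(data)
        samples.append(mu)
        logws.append(log_joint(mu, data) - logq)  # symbolic score corrects the proposal
    top = max(logws)
    ws = [math.exp(lw - top) for lw in logws]
    return sum(w * s for w, s in zip(ws, samples)) / sum(ws)

print(posterior_mean([1.2, 0.8, 1.5]))  # close to the exact posterior mean, 0.875
```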
Ultimately, we envision a new kind of neurosymbolic model in which, rather than pre-assigning responsibilities to the neural or symbolic program, models may flexibly perform any part of the language understanding via explicit probabilistic inference or learned, amortized prediction, with tradeoffs in speed and accuracy for any allocation of responsibilities to modules. The research question is how to do this automatically: how do we identify pieces of a computation that can reliably be emulated by a neural model, how do we train this neural model efficiently, and how do we decide at runtime which inference mode to use? As above, these questions raise many opportunities to take inspiration from our scientific understanding of the separation of responsibilities in language and thought, and work on learning for inference in more general probabilistic models.
# 5.1.4 Language generation
The preceding discussion has focused largely on problems of language understanding: mapping from utterances to inferences about the state of the world that those utterances describe. But effective models of language use should also be able to explain generation, making it possible to translate the results of inference back to language. As with the problem of language-informed thinking that we focus on in this paper, it is useful to model language generation as two distinct processes: choosing what to say, then how to say it (Duboue & McKeown, 2003). And as with understanding, the first phase requires a model of the world, and of the speaker's goals within it. What additional work is needed to adapt our models of rational meaning construction for generation?
One possibility, alluded to in the discussion of amortization above, is to interpret the language understanding machinery described above as a model of a listener, then perform language generation by selecting utterances that cause this model listener to form correct beliefs or take appropriate actions (Fox, 2007; Goodman & Frank, 2016). This extra layer of reasoning introduces major inferential challenges: the generation model must now reason both about the set of possible utterances and the effect of each utterance on the distribution over possible worlds inferred by a listener. Here it is once again possible to leverage large-scale statistical learning: for example, using LLMs to directly translate candidate communicative intentions back to natural language strings, which may then be used as candidate utterances to be scored using a formal model of language understanding. Such a hybrid neuro-symbolic generation model (Fang et al., 2022; Langkilde & Knight, 1998) offers a path towards language generation that is expressive and fluent, but avoids the truthfulness and hallucination problems that plague all purely neural language generation models that exist today (Maynez, Narayan, Bohnet, & McDonald, 2020; Wiseman, Shieber, & Rush, 2017).
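A minimal sketch of this propose-and-score loop appears below. Here `llm_propose` is a hypothetical stand-in for sampling paraphrases from an LLM (it returns a fixed candidate set so the sketch runs end-to-end), and a small literal listener stands in for the formal model of language understanding; candidates that would mislead the listener about the intended world simply receive low scores and are discarded.

```python
WORLDS = [0, 1, 2, 3]  # how many of 3 objects have the property
MEANINGS = {"none": lambda w: w == 0, "some": lambda w: w >= 1, "all": lambda w: w == 3}

def listener(u):  # formal understanding model (a literal listener, for brevity)
    true_worlds = [w for w in WORLDS if MEANINGS[u](w)]
    return {w: 1.0 / len(true_worlds) for w in true_worlds}

def llm_propose(intent_world):  # hypothetical stand-in for an LLM sampling call
    return ["none", "some", "all"]

def generate(intent_world):
    # Keep the candidate under which the listener assigns the intended world
    # the highest posterior probability: fluent proposals, symbolically scored.
    return max(llm_propose(intent_world),
               key=lambda u: listener(u).get(intent_world, 0.0))

print(generate(3))  # -> "all" (the only utterance that pins down world 3)
print(generate(0))  # -> "none"
```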
# Implications for cognitive science
In this section, we describe several research directions for other closely related disciplines that study language and thought in natural minds, brains, and behavior, focusing on productive intersections in relation to this framework.
# 5.2.1 Connections to cognitive and formal models of linguistic structure
In all the examples described above, the process of translating utterances into formal meaning representations was performed with a black-box statistical model, while reasoning about those meaning representations leveraged an explicit symbolic inferential process. However, an enormous body of work in linguistics has argued that the process of mapping from utterances to meaning representations can itself be described (at least approximately) in terms of symbol processing operations (Montague, 1970; Pollard & Sag, 1994; Steedman, 2001, inter alia). By design, most of our "meaning representations" are designed to support efficient reasoning about domain-specific world models, and bear only a vague resemblance to formal and domain-general linguistic representational theories. But can the symbolic models of linguistic meaning posited by these theories (as opposed to the symbolic models of reasoning we already draw on) be incorporated into our framework?
As noted in Section 5.1.3, a fully realized model of rational meaning construction should be able to flexibly move computation across the statistical-symbolic boundary, "compiling" results of symbolic inference into amortized computation, or retrieving symbolic descriptions of amortized processes for explicit verification. In this view, the vignettes above treat the meaning representation process as culminating in domain-specific representations and amortized by default. But probabilistic symbolic models of meaning (e.g., Kwiatkowski, Zettlemoyer, Goldwater, & Steedman, 2010), or Bayesian and game-theoretic models of semantics (e.g., Goodman & Frank, 2016) can themselves be implemented as probabilistic programs and composed with domain-specific inferential computations, resulting in an almost purely symbolic (but amortizable) language understanding process similar to the one described by Goodman and Lassiter (2015).
Such a model would also offer an appealing path toward learning language in a more sample-efficient (and perhaps human-like) way. Today's neural sequence models require orders of magnitude more data than human learners to discover the structural regularities underlying human languages (Linzen, 2020). Explicit probabilistic symbolic models, by contrast, can discover this structure extremely sample-efficiently (Yang & Piantadosi, 2022). A model that could automatically infer symbolic meaning representation rules from data, then amortize this representation system into a statistical translation model (Liang, Daumé III, & Klein, 2008), would be capable of both efficient learning of language and efficient modeling of other domains using language. It would also offer a framework for modeling other key aspects of language acquisition, including explicit linguistic instruction (of word meanings, rules of grammar, etc.), tradeoffs between different formal representational schemes, and the relationship between linguistic competence (understood as symbol-side language processing) and linguistic performance (understood as statistical-side processing).
The semantic framework in this paper is most closely related to other cognitive semantic frameworks (e.g., Jackendoff (1985); Lakoff (1988); Pietroski (2018); Pinker (1984)) that explicitly propose that human language constructs meanings from conceptual and cognitive primitives, including those for causal reasoning, or core knowledge representations of physics and agents. Related information-theoretic proposals suggest that languages are effectively designed to be efficiently communicable externalizations of underlying thoughts: that the structure of human languages derives from underlying structure in the semantic representations we wish to communicate, and indeed may be driven by environmental and domain-specific pressures (e.g., Gibson et al. (2019); Mollica et al. (2021); Zaslavsky, Kemp, Regier, and Tishby (2018)).
Other related acquisition theories posit that these structural relationships between the representations of thought and externalizable language play an important role in language acquisition. Under these theories, humans can so efficiently learn or hypothesize the meanings of sentences because they "map cleanly" onto the cognitive structures already present in the minds of the language learner (Snedeker, 2016); language learning is bootstrapped by these predictable, structured mappings between the underlying space of meanings and the syntax of language (L. R. Gleitman et al., 2005; Hartshorne et al., 2016; Pinker & MacWhinney, 1987). In preliminary experiments, we find intriguing evidence that large language-to-code models can extract and generalize syntactic patterns between language and code, including to bootstrap hypotheses about the semantics of novel words expressed as probabilistic programs based on contextual, syntactic usage (see Syntactic Bootstrapping, Fig. 14). Future work can therefore explore whether these statistical distributional models might be used to implement cognitive models of bootstrapped language acquisition.
# 5.2.2 Modeling the mechanisms of human thought
Using tools for adaptive Bayesian inference over flexibly structured symbolic representations, including not only probabilistic programs but more generally hierarchical Bayesian models (Griffiths et al., 2010; Tenenbaum, Kemp, Griffiths, & Goodman, 2011), resource-rational modeling (S. J. Gershman et al., 2015; Lieder & Griffiths, 2020), and program induction (Lake et al., 2017; Piantadosi et al., 2012), computational cognitive scientists have built quantitatively predictive and functionally explanatory models of human behavior in almost every domain of cognition. This range spans from models of perception, concept learning and categorization, causal reasoning, decision-making and planning, to intuitive physics, theory of mind, sentence processing, and cognitive and language development (C. Baker et al., 2011; Goodman & Frank, 2016; Goodman et al., 2014; Griffiths & Tenenbaum, 2006; Ho, Saxe, & Cushman, 2022; Jara-Ettinger et al., 2020; Lake et al., 2017; Perfors et al., 2011). However, in almost every one of these cases, the models are not fully "stimulus-computable": Behavioral
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
2306.12672 | 205 | al., 2017; Perfors et al., 2011). However, in almost every one of these cases, the models are not fully âstimulus-computableâ: Behavioral experiments in cognitive psychology almost always use natural language to present participants with some situation for thinking about (in addition to perhaps perceptual stimuli); language is also almost invariably used to pose some question or goal as the end for thinking. Put another way, almost all our behavioral experimentsâlike so many instances of cognition in the wildâfollow the language-informed thinking paradigm of this paper. But our cognitive models traditionally do not; they are created by hand from the modelerâs understanding of the natural language task description, rather than synthesized automatically from the linguistic stimuli presented to participants. To what extent can the rational meaning construction framework presented here reduce the need for computational cognitive scientists to manually create Bayesian models that match the natural-language prompts given to humans in behavioral experiments? Can we build âlanguage-computableâ models of human thought, that are much easier to test and vary via large-scale online experiments? | 2306.12672#205 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
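As one concrete illustration of what a "language-computable" model could look like, the Python sketch below maps a natural-language vignette to a generative model, a conditioning statement, and a query, and then derives a behavioral prediction by ordinary Bayesian inference. The vignette, the numbers, and the translate_to_program stub are toy assumptions of ours; the stub merely stands in for the LLM-based meaning function rather than implementing it.

```python
# A minimal, illustrative sketch of a "language-computable" behavioral model.
# translate_to_program is a hypothetical stand-in for the LLM-based meaning
# function: here it is simply hard-coded for this one vignette.
import random
from collections import Counter

VIGNETTE = ("A machine makes widgets. About half of all machines are faulty. "
            "A faulty machine breaks 80% of its widgets; a working one breaks "
            "10%. You watch this machine break 3 of 4 widgets. Is it faulty?")

def translate_to_program(vignette):
    """Hypothetical meaning function: vignette -> (model, condition, query)."""
    def generative_model(rng):
        faulty = rng.random() < 0.5                 # prior over machine type
        p_break = 0.8 if faulty else 0.1            # per-widget break probability
        broken = sum(rng.random() < p_break for _ in range(4))
        return {"faulty": faulty, "broken": broken}
    condition = lambda world: world["broken"] == 3  # evidence stated in language
    query = lambda world: world["faulty"]           # question posed in language
    return generative_model, condition, query

def infer(model, condition, query, n=200_000, seed=0):
    """Rejection sampling: keep only worlds consistent with the condition."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n):
        world = model(rng)
        if condition(world):
            counts[query(world)] += 1
    total = sum(counts.values())
    return {answer: c / total for answer, c in counts.items()}

model, condition, query = translate_to_program(VIGNETTE)
print(infer(model, condition, query))  # P(faulty | evidence) comes out near 0.99
```

In a full pipeline of this kind, only the translation step would change from experiment to experiment; the inference machinery, and hence the behavioral predictions, would follow automatically from each participant-facing prompt.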
We have already begun to explore these possibilities, with promising preliminary results in several domains: modeling how language engages commonsense physical reasoning about linguistically described scenes (C. E. Zhang et al., 2023) and social reasoning about goal-directed agents (Ying et al., 2023), as well as testing the claim that the LLM-based meaning function we implement in this paper can compute amortized pragmatic judgments of scalar implicatures that accord with human interpretations (Lipkin, Wong, Grand, & Tenenbaum, 2023).
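For concreteness, the sketch below writes out the kind of scalar-implicature computation that such an amortized meaning function would approximate, in the Rational Speech Acts style (Goodman & Frank, 2016). The states, utterances, prior, and rationality parameter ALPHA are toy choices of ours, not the setup of the cited study.

```python
# Illustrative sketch: a vanilla Rational Speech Acts (RSA) model of the
# "some but not all" scalar implicature. A literal listener conditions the
# prior on truth; a pragmatic speaker soft-maximizes informativity; a
# pragmatic listener inverts the speaker.
STATES = ["none", "some-not-all", "all"]
UTTERANCES = {                      # literal (truth-conditional) semantics
    "none": {"none"},
    "some": {"some-not-all", "all"},
    "all":  {"all"},
}
PRIOR = {s: 1 / 3 for s in STATES}
ALPHA = 4.0                         # speaker rationality

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def L0(u):                          # literal listener: prior restricted to truth
    return normalize({s: PRIOR[s] * (s in UTTERANCES[u]) for s in STATES})

def S1(s):                          # pragmatic speaker over true utterances
    return normalize({u: L0(u).get(s, 0.0) ** ALPHA
                      for u in UTTERANCES if s in UTTERANCES[u]})

def L1(u):                          # pragmatic listener: inverts the speaker
    return normalize({s: PRIOR[s] * S1(s).get(u, 0.0) for s in STATES})

print("literal   'some':", L0("some"))  # splits mass over some-not-all / all
print("pragmatic 'some':", L1("some"))  # implicature: mass shifts to some-not-all
```

Running this, the pragmatic listener assigns roughly 0.94 of its belief to the "some but not all" state on hearing "some"; an amortized meaning function would need to produce such strengthened interpretations in a single forward pass, without explicitly iterating the listener-speaker recursion.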
There is also a growing body of research in computational cognitive science showing that salient dynamics of thought, including well-known departures from Bayesian norms, can be explained via Monte Carlo inference approximations that aim to rationally use limited computational resources (Chater et al., 2020; S. J. Gershman et al., 2015; Lieder & Griffiths, 2020; Lieder, Hsu, & Griffiths, 2014; Sanborn & Chater, 2017). In some cases, human inferences seem to rest on just a single, highly approximate sample (Vul, Goodman, Griffiths, & Tenenbaum, 2014), or perhaps just a few of them (Vul & Pashler, 2008). If we extend our proposed architecture for rational meaning construction to incorporate these kinds of Monte Carlo mechanisms, could we build models of language-guided thinking that can be directly compared at a more mechanistic level to human behavior? How will processes of language understanding and reasoning interact mechanistically, and can we build resource-rational approximate inference models that capture this interaction?
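To make the sample-based picture concrete, here is a minimal sketch, under a toy coin-weight model of our own choosing, of how a reasoner who answers from only k posterior samples behaves: with k = 1 the responses probability-match the posterior, and with large k they converge to the single maximizing answer. This is an illustration of the general idea, not a reimplementation of the cited models.

```python
# A minimal sketch of sample-based, resource-rational responding for a toy
# uniform-prior coin model (7 heads in 10 flips observed).
import random

def posterior_sample(rng, heads=7, flips=10):
    """One exact sample from P(weight | data), drawn by rejection:
    propose a weight from the prior, resimulate, accept on a match."""
    while True:
        w = rng.random()                              # uniform prior over weight
        simulated = sum(rng.random() < w for _ in range(flips))
        if simulated == heads:                        # likelihood-based accept
            return w

def respond(rng, k):
    """Answer 'is the coin biased toward heads?' from only k posterior samples."""
    samples = [posterior_sample(rng) for _ in range(k)]
    return sum(s > 0.5 for s in samples) / k > 0.5

rng = random.Random(0)
for k in (1, 5, 100):
    answers = [respond(rng, k) for _ in range(200)]
    print(f"k={k:3d}: fraction answering 'biased' = {sum(answers)/len(answers):.2f}")
# k=1 responses scatter around the posterior probability itself (about 0.89),
# while k=100 responses are nearly always the single maximizing answer.
```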
# 5.2.3 Language and thought in the brain
Evidence from cognitive neuroscience suggests a number of parallels between the framework we describe in this paper and how language relates to systems for general cognition in the human brain. Over decades, cognitive neuroscientists have mapped out a series of interconnected areas in the frontal and temporal lobes that are implicated in human language processing. This "language network" is activated in both linguistic comprehension (Deniz, Nunez-Elizalde, Huth, & Gallant, 2019; Fedorenko, Hsieh, Nieto-Castañón, Whitfield-Gabrieli, & Kanwisher, 2010; MacSweeney et al., 2002; Regev, Honey, Simony, & Hasson, 2013; T. L. Scott, Gallée, & Fedorenko, 2017) and production (Hu et al., 2021; Menenti, Gierhan, Segaert, & Hagoort, 2011). It is sensitive to regularities at all levels of linguistic structure—from phonology, to words, to phrases and sentences (Blank & Fedorenko, 2017; Lerner, Honey, Silbert, & Hasson, 2011; Silbert, Honey, Simony, Poeppel, & Hasson,
Patients with aphasia, for example, exhibit impaired language production and comprehension, but retain the ability to solve arithmetic and logic puzzles, reason about causality and social situations, and perform many other non-linguistic tasks (e.g., Basso & Capitani, 1985; Bek, Blades, Siegal, & Varley, 2010; Fedorenko & Varley, 2016; Klessinger, Szczerbinski, & Varley, 2007; Lecours & Joanette, 1980; Luria, Tsvetkova, & Futer, 1965; Varley, 1998). Functional neuroimaging studies provide further evidence that the language network is not activated in a variety of non-linguistic tasks, including reasoning about arithmetic, logic, actions, or events (Amalric & Dehaene, 2016, 2019; Blank, Kanwisher, & Fedorenko, 2014; Deen, Koldewyn, Kanwisher, & Saxe, 2015; Fedorenko, Behr, & Kanwisher, 2011; Monti, Osherson, Martinez, & Parsons, 2007; Monti, Parsons, & Osherson, 2012; Paunov, Blank, & Fedorenko, 2019; Paunov et al., 2022; Shain, Paunov, Chen, Lipkin, & Fedorenko, 2022).
In tandem, a broader line of cognitive neuroscience work has located non-linguistic networks that are activated in processing many of the core cognitive domains we model throughout this paper, including logic and mathematical reasoning (e.g., Amalric & Dehaene, 2019; Monti et al., 2007), social reasoning and planning (Adolphs, 2009; Saxe, Moran, Scholz, & Gabrieli, 2006; Saxe & Powell, 2006), and physical reasoning and simulation (Pramod, Cohen, Tenenbaum, & Kanwisher, 2022; Schwettmann, Tenenbaum, & Kanwisher, 2019). More recent work suggests the existence of an "amodal semantics network" (Ivanova, 2022; Ivanova et al., 2021): a network that appears proximal to the language regions activated in processing linguistic structures, interfaces between the language network and the more general multiple demand network involved in complex non-linguistic cognition, and appears to be activated specifically in processing semantically meaningful sentences (as opposed to scrambled tokens, or to syntactically correct but semantically incoherent strings).