From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought
Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum
arXiv:2306.12672 [cs.CL, cs.AI, cs.SC] (submitted 22 June 2023; updated 23 June 2023). http://arxiv.org/pdf/2306.12672

Abstract: How does language inform our downstream thinking? In particular, how do humans make meaning from language--and how can we leverage a theory of linguistic meaning to build machines that think in more human-like ways? In this paper, we propose rational meaning construction, a computational framework for language-informed thinking that combines neural language models with probabilistic models for rational inference. We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought (PLoT)--a general-purpose symbolic substrate for generative world modeling. Our architecture integrates two computational tools that have not previously come together: we model thinking with probabilistic programs, an expressive representation for commonsense reasoning; and we model meaning construction with large language models (LLMs), which support broad-coverage translation from natural language utterances to code expressions in a probabilistic programming language. We illustrate our framework through examples covering four core domains from cognitive science: probabilistic reasoning, logical and relational reasoning, visual and physical reasoning, and social reasoning. In each, we show that LLMs can generate context-sensitive translations that capture pragmatically-appropriate linguistic meanings, while Bayesian inference with the generated programs supports coherent and robust commonsense reasoning. We extend our framework to integrate cognitively-motivated symbolic modules (physics simulators, graphics engines, and planning algorithms) to provide a unified commonsense thinking interface from language. Finally, we explore how language can drive the construction of world models themselves. We hope this work will provide a roadmap towards cognitive models and AI systems that synthesize the insights of both modern and classical computational perspectives.

Recently, neuroscientists who study language cognition have begun to draw explicit parallels between the language network and LLMs (see Mahowald et al., 2023, for a review). Several recent studies have observed that smaller LLMs trained specifically on the distributional statistics of language (generally focusing on the GPT-2 model) can predict brain activity in humans processing sentence input (Caucheteux & King, 2022; Goldstein et al., 2022; Schrimpf et al., 2021) and may share representational characteristics of the human language network (Fedorenko et al., 2020; Shain, Blank, van Schijndel, Schuler, & Fedorenko, 2020). These accounts, however, align LLMs with the modular role we propose for neural models in our framework: not as end-to-end models of language and reasoning, but as robust, context-aware mappings between language and meanings. As a ground for future work, our framework can inform evaluations of LLMs with respect to human language understanding. For instance, our proposal suggests that code-trained LLMs might better capture latent semantic and syntactic structure than language-only LLMs. Ideas from neuroscience, in turn, can help determine which kinds of computations can be neurally amortized and where our model's boundary between language and thought should lie.

# Implications for AI
Growing awareness of the limitations of LLM-based reasoning has motivated several recent proposals for interfacing language models with external symbolic plug-ins or toolkits (Karpas et al., 2022; OpenAI, 2023c; Schick et al., 2023; Wolfram, 2023). At face value, one perspective is to view rational meaning construction as an argument to add probabilistic programs to the growing "Swiss Army knife" of LLM plug-ins. However, we see this notion as inverted: thought should not simply be a plug-in on top of language models. Rather, we believe that future AI systems should be architected around thought: general-purpose computing systems that provide a principled framework for expressing world models, conditioning them on observations from sources including language and perceptual input, and drawing principled inferences and decisions with respect to the goals of an intelligent system.[7] As we show throughout this paper, many core domains of cognition can be expressed as forms of probabilistic inference. A probabilistic language of thought, in turn, provides a unifying language for world modeling that can nest calls to other cognitively-motivated modules.
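As a concrete, deliberately toy illustration of this architecture, the sketch below treats a tiny world model as a generative probabilistic program and conditions it on a fact that a language front-end might extract from an utterance. The scenario, names, and probabilities are our own invented example, not code from the paper:

```python
import random

def world_model(rng):
    # Prior over a latent world state: is the sidewalk icy?
    icy = rng.random() < 0.2
    # Generative model: slipping is much more likely on ice.
    p_slip = 0.8 if icy else 0.1
    slipped = rng.random() < p_slip
    return {"icy": icy, "slipped": slipped}

def posterior(condition, query, n=20000, seed=0):
    """Estimate P(query | condition) by rejection sampling."""
    rng = random.Random(seed)
    hits = total = 0
    while total < n:
        world = world_model(rng)
        if not condition(world):
            continue  # discard worlds inconsistent with the observation
        total += 1
        hits += query(world)
    return hits / total

# Conditioning on the meaning of "she slipped" raises P(icy)
# from the prior 0.2 toward the exact posterior 2/3.
prior = posterior(lambda w: True, lambda w: w["icy"])
post = posterior(lambda w: w["slipped"], lambda w: w["icy"])
assert post > prior
```

The point of the sketch is the division of labor: the language model's only job would be to produce the `condition` (the meaning of the utterance), while coherent inference happens in the symbolic world model.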
In this sense, all of these plug-ins and modules would become plug-ins to the substrate of thought, including graphics engines, physics simulators, planning algorithms, and, in fact, language models themselves. As we discuss in the future directions of each section, scaling any of our toy implementations towards robust, human-like reasoning and language-understanding systems will almost certainly require more sophisticated implementations of each
reasoning module. We therefore hope this general probabilistic framework suggests a symbolic substrate that might in turn incorporate many of the specific modules and plug-ins in this recent work.

[7] A similar argument has been expressed by Stephen Wolfram in a compelling series of writings on integrating ChatGPT with the Wolfram Language and its suite of symbolic computational tools (Wolfram, 2023).
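To make the "modules as plug-ins to a substrate of thought" idea concrete, here is a minimal sketch of our own (hypothetical, not the paper's implementation): a deterministic one-dimensional "physics simulator" nested inside a probabilistic program, with importance sampling inferring a latent drop height from an observed fall time:

```python
import math
import random

def simulate_fall(height_m, dt=0.01, g=9.8):
    # Deterministic plug-in module: Euler-integrated free fall.
    t, y, v = 0.0, height_m, 0.0
    while y > 0:
        v += g * dt
        y -= v * dt
        t += dt
    return t

def posterior_height(observed_t, n=5000, noise=0.05, seed=0):
    """Importance sampling over heights; the simulator sits inside the model."""
    rng = random.Random(seed)
    total_w = total_hw = 0.0
    for _ in range(n):
        h = rng.uniform(0.5, 10.0)                        # prior over height
        t = simulate_fall(h)                              # nested simulator call
        w = math.exp(-(((t - observed_t) / noise) ** 2))  # soft likelihood
        total_w += w
        total_hw += w * h
    return total_hw / total_w

# A 1 s fall corresponds to h = g * t**2 / 2 = 4.9 m, so the posterior
# mean should land near 4.9.
h_hat = posterior_height(observed_t=1.0)
assert 4.0 < h_hat < 6.0
```

Inference flows straight through the simulator call: the probabilistic program does not need the simulator's internals, only the ability to run it forward, which is what lets graphics engines, planners, or even LLMs slot in as interchangeable modules.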
To this end, another important near-term AI research direction will involve building probabilistic programming frameworks that natively incorporate LLMs. Important steps in this direction are already being taken through work leveraging LLMs to approximate prior probabilities over strings (A. K. Lew, Tessler, Mansinghka, & Tenenbaum, 2020) and amortize complex posterior inferences (M. Wu & Goodman, 2022). Indeed, many popular LLM techniques, such as scratchpads (Nye et al., 2021), chain-of-thought prompting (Wei et al., 2022), selection-inference (Creswell, Shanahan, & Higgins, 2022), STaR (Zelikman, Wu, Mu, & Goodman, 2022), and others can be viewed as implementations of probabilistic programs over string-valued random variables (Dohan et al., 2022). A maturing theoretical understanding of LLMs as probabilistic entities will afford powerful ways of harnessing and controlling generations.
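The "probabilistic programs over string-valued random variables" framing can be sketched with a stand-in model (entirely hypothetical; no real LLM is called): a latent reasoning string is sampled, an answer is read off it deterministically, and marginalizing over reasonings yields a distribution over answers:

```python
import random
from collections import Counter

def sample_reasoning(rng):
    # Stand-in for an LLM sampling a scratchpad / chain-of-thought string.
    return rng.choices(
        ["2 + 2 = 4, so the answer is 4",
         "2 + 2 = 5, so the answer is 5"],  # an occasional faulty derivation
        weights=[0.9, 0.1],
    )[0]

def sample_answer(reasoning):
    # Deterministic read-out: the answer is the final token of the string.
    return reasoning.split()[-1]

def marginal_answer(n=1000, seed=0):
    """Marginalize the latent reasoning string out of P(answer | question)."""
    rng = random.Random(seed)
    counts = Counter(sample_answer(sample_reasoning(rng)) for _ in range(n))
    return counts.most_common(1)[0][0]

assert marginal_answer() == "4"
```

Under this view, scratchpads and self-consistency voting are not ad hoc prompting tricks but approximate marginalization over a string-valued latent variable.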
For instance, the sequential Monte Carlo (SMC) steering technique introduced under the LLaMPPL framework (A. K. Lew, Zhi-Xuan, Grand, & Mansinghka, 2023) enables concise and tractable specification of infilling, prompt intersection, and other constrained LLM generation tasks as language model probabilistic programs. Many of these hybrid models can be viewed as instantiations of rational meaning construction that make resource-motivated tradeoffs between inference in the unstructured space of strings (words) and more structured hypothesis spaces (worlds).
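In the same spirit, a minimal SMC steering loop can be sketched with a toy stand-in model (our illustration, not the actual LLaMPPL API): particles propose tokens from a stand-in "LM", and resampling against a constraint keeps only sequences that satisfy it:

```python
import random

def lm_probs(prefix):
    # Stand-in autoregressive "LM": a fixed next-token distribution.
    return {"a": 0.5, "b": 0.3, "<eos>": 0.2}

def constraint(prefix):
    # Steering target: generated sequences must never contain "b".
    return "b" not in prefix

def smc_steer(n_particles=50, max_len=5, seed=0):
    rng = random.Random(seed)
    particles = [[] for _ in range(n_particles)]
    for _ in range(max_len):
        proposals, weights = [], []
        for p in particles:
            if p and p[-1] == "<eos>":  # finished sequence: carry unchanged
                proposals.append(p)
                weights.append(1.0)
                continue
            probs = lm_probs(p)
            tok = rng.choices(list(probs), weights=list(probs.values()))[0]
            q = p + [tok]
            proposals.append(q)
            weights.append(1.0 if constraint(q) else 0.0)
        # Resample particles in proportion to the constraint weights.
        particles = [list(p) for p in
                     rng.choices(proposals, weights=weights, k=n_particles)]
    return particles

samples = smc_steer()
assert all("b" not in p for p in samples)
```

The resampling step is what distinguishes SMC steering from naive rejection: constraint violations are pruned token by token rather than after a full generation is wasted.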
2306.12672 | 220 | # 5.3.2 Robustness and trustworthiness in language understanding
Recent, high-profile attempts to deploy LLMs in production highlight the fundamental robustness challenges of using these models as the backbone of usable AI systems (Brereton, 2023; Sorkin, Warner, Kessler, Hirsch, & Livni, 2023), even with automated filters and supervised finetuning to human preferences. While LLMs may reasonably appear to condition on input language or answer queries under some circumstances, it is precisely this combination of linguistic fluency and underlying unpredictability that makes them problematic in situations where verifiable, systematic behavior is paramount. LLMs easily produce syntactically convincing but inaccurate âhallucinationsâ that fabricate facts and inferences (Dziri, Milton, Yu, Zaiane, & Reddy, 2022; Ji et al., 2022), fail to consistently condition on rules and constraints described in natural language, including rules intended to ensure user safety (Edwards, 2023; Zhuo, Huang, Chen, & Xing, 2023), and can generally degrade into nonsensical or highly undesirable language in the vast, easily accessible âlong tailâ of situations that deviate from their training distribution (Bender, Gebru, McMillan-Major, & Shmitchell, 2021; Roose, 2023; Tangermann, 2023). | 2306.12672#220 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623
The unevenness of today's LLMs recalls a classic critique of even older neural architectures (Fodor & Pylyshyn, 1988): that neural models trained on predictive objectives do not produce systematic, logical outputs by design. Similarly, while current or future LLMs may in principle be able to recover the latent representations and algorithms necessary to reason over language, or even successfully approximate them in many settings, they do not need to produce systematic results by construction. Rather, they often approximate them with unexpected, undesirable outputs, particularly in out-of-distribution settings.
Even if future LLMs do appear to improve with scale without an external reasoning substrate, engineers may find it desirable to distinguish modularly between external symbolic reasoning engines and language-specific systems to enable separate supervision and verification of each. The framework we present here offers one roadmap for language understanding architectures whose robustness guarantees derive from explicit inference over a structured, editable, and formally constrainable programming language. Inferences themselves, and other formalizable reasoning computations including planning and physical simulation, take place in modules constructed explicitly to perform these calculations.
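One way to picture this modular split is a minimal sketch, assuming a toy coin-flipping domain, in which a stubbed translator stands in for the LLM while inference runs in a separate, independently verifiable engine. All names here are hypothetical, and plain rejection sampling stands in for a real probabilistic programming backend:

```python
import random

def translate(utterance):
    # Stand-in for the neural translation step; a real system would have an
    # LLM emit probabilistic-program code, which could be checked before use.
    meanings = {
        "more heads than tails came up":
            lambda w: w["heads"] > w["flips"] - w["heads"],
    }
    return meanings[utterance]

def world_model():
    # Explicit generative world model: a coin of unknown weight, flipped 10 times.
    weight = random.random()
    flips = 10
    heads = sum(random.random() < weight for _ in range(flips))
    return {"weight": weight, "flips": flips, "heads": heads}

def infer(condition, query, samples=20000):
    # Separate symbolic reasoning module: rejection sampling under a condition.
    kept = [query(w) for _ in range(samples) if condition(w := world_model())]
    return sum(kept) / len(kept)

random.seed(0)
prior = infer(lambda w: True, lambda w: w["weight"])
posterior = infer(translate("more heads than tails came up"),
                  lambda w: w["weight"])
# Conditioning on the translated utterance raises the inferred coin weight.
```

Because the translator and the inference engine are distinct components, each can be supervised and verified on its own: the generated meaning is an inspectable program fragment, and the sampler's behavior is fixed independently of the language model.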
# Interpreting models that use language
As with verifiability and robustness, the framework we propose here is an architecture for language understanding systems that are also inherently interpretable, or interpretable by design (Rudin, 2019; Rudin et al., 2022): it constructs visible, editable, and constrainable world models and meanings that serve as the formal basis for inference, rather than post-hoc explanations decoded from or produced over hidden internal computations.
However, a fundamental part of our hypothesis is that any system that reasons effectively over language should need to represent and implement, explicitly or implicitly, the kinds of computations we formalize throughout this paper. Implementations of this framework might therefore also be useful for model-guided
hypotheses and experiments intended to explain other less transparent language processing systems, both biological (as we suggest in Section 5.2.3) and artificial. This framework might be incorporated productively into the growing body of work using explicit world models and symbolic languages to formally model the internal computations of deep neural models (Biggio, Bendinelli, Neitz, Lucchi, & Parascandolo, 2021; Mu & Andreas, 2020) and LLMs specifically (B. Z. Li, Nye, & Andreas, 2021), much as structured probabilistic models and reasoning engines have been used to interpret human neural activity during social reasoning, physical understanding, and other general inference tasks (Ho et al., 2022; Schwettmann, Fischer, Tenenbaum, & Kanwisher, 2018; Watters, Tenenbaum, & Jazayeri, 2021). Explaining how LLMs represent the meanings of language, and perform computations with them, is a pressing open question whose scientific interest only increases if LLMs do appear to become more coherent and robust with scale.
In light of this, inspired by our proposed architecture, it may be interesting to probe, or trace, whether end-to-end LLMs construct context-specific world models (B. Z. Li et al., 2021), maintain belief distributions over uncertain world states (Hase et al., 2021), and implement reasoning algorithms like probabilistic inference, physical simulation, or planning over these representations.
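As a fully synthetic sketch of this probing idea (no real model is involved, and all data below is fabricated), one can manufacture "hidden states" that linearly encode a scalar belief variable and check that a closed-form linear probe reads the variable back out:

```python
import random
import statistics

# Fabricate a latent belief variable and "hidden states" that encode it.
random.seed(0)
belief = [random.uniform(0, 1) for _ in range(400)]
hidden = [2.0 * b + random.gauss(0, 0.05) for b in belief]

# Fit a 1-D least-squares probe on the first half of the data.
xs, ys = hidden[:200], belief[:200]
mx = statistics.fmean(xs)
my = statistics.fmean(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

# Evaluate on the held-out half: a small error means the belief variable
# is linearly decodable from these (synthetic) hidden states.
pred = [slope * x + intercept for x in hidden[200:]]
mean_abs_error = statistics.fmean(abs(p - y) for p, y in zip(pred, belief[200:]))
```

Probing a real LLM would replace the fabricated `hidden` vectors with actual layer activations and the scalar `belief` with an annotated world-state variable; the probe logic itself stays this simple.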
# 5.3.4 Learning from human-scale data
Large language models must be trained with many orders of magnitude more language data than any human learner encounters over a lifetime. How can we engineer systems that not only understand language as we do, but also learn from human-scale language data?
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
2306.12672 | 226 | Instead, our framework suggests several alternative directions for improving data efficiency. First, perhaps the most direct consequence of this framework is the suggestion that neural models need only play a much tighter, focused role in language understanding systemsâas translation models that parse from language into structured symbolic programs for reasoning. Training a translation model focused on parsing from language into probabilistic programs almost certainly requires much less data for effective performance than required to solve the general token prediction problem. | 2306.12672#226 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
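To make vivid how much narrower the translation task is than general language modeling, here is a deliberately tiny illustration (not the paper's actual parser; the utterances and Church-style program strings are invented for the example) in which a handful of paired examples and naive word-overlap scoring already select among program templates:

```python
from collections import defaultdict

# Hand-written utterance/program pairs standing in for a small training set.
PAIRS = [
    ("the coin looks fair", "(condition (approx weight 0.5))"),
    ("the coin is heavily weighted towards heads", "(condition (> weight 0.8))"),
    ("she flipped the coin ten times", "(observe (= flips 10))"),
]

# "Training": record which words co-occur with each program template.
template_words = defaultdict(set)
for utterance, program in PAIRS:
    template_words[program].update(utterance.split())

def translate(utterance):
    # Score each program template by word overlap with the new utterance.
    words = set(utterance.split())
    return max(template_words, key=lambda p: len(words & template_words[p]))

result = translate("the coin seems heavily weighted towards heads")
```

Even this crude scorer generalizes to the paraphrase above; a real translation model would of course be a trained neural network, but the point is that its target distribution (utterances to programs) is far more constrained than open-ended token prediction.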
Further, several ideas we discuss in Section 5.1.1 and Section 5.1.3 might also be relevant for training simpler translation models, and using them to bootstrap larger and more complex neural language models. First, as we discuss in Section 5.1.3, we might consider a progressively amortized avenue for training even complex translation models like the one in our concrete implementation, which appears to contextually amortize certain pragmatic inferences (such as those that adjust vague quantifiers to the context of a particular world model) that could be explicitly computed from a more literal initial semantic parse. One possibility, then, would be to train a more limited, literal semantic parser from language to probabilistic programs, but seek to train neural models that progressively amortize more of these inferences by supervising on its outputs. Other ideas from human language acquisition might offer more avenues for more radically data-efficient learning. Human language learners progress through several phases of language mastery (R. Brown, 1973; Saffran et al., 2001; Tomasello, 2009), appearing to learn initial but highly imperfect grammars and meaning functions that they refine progressively over time, but much more quickly and with much less data than a comparable LLM trained directly on the distribution of language. Framed as a problem of learning a translation model, however, a more data-efficient training regime might also draw inspiration from other methods for learning more flexible translation and semantic parsing distributions. Multiple approaches
52 | 2306.12672#227 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
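The progressively amortized training regime described in the chunk above could look like the following minimal sketch, assuming a toy literal parser and an explicit pragmatic adjustment step. The function names, the quantifier threshold, and the tuple-based program representation are all illustrative assumptions, not the paper's implementation:

```python
# Illustrative sketch (not the paper's code): a literal semantic parser
# produces an initial parse, an explicit pragmatic step adjusts a vague
# quantifier threshold to the world model's context, and the resulting
# (utterance, adjusted parse) pairs supervise an amortized parser that
# learns to emit the contextual parse in one shot.

def literal_parse(utterance):
    """Toy literal parser: 'most' maps to a fixed majority threshold."""
    if "most" in utterance:
        return ("greater_than", 0.5)
    return ("greater_than", 0.0)

def pragmatic_adjust(parse, context_scale):
    """Explicit inference: raise the vague threshold to the domain's scale."""
    op, threshold = parse
    return (op, max(threshold, context_scale))

def amortization_data(utterances, context_scale):
    """Supervision pairs for a model that amortizes both steps at once."""
    return [(u, pragmatic_adjust(literal_parse(u), context_scale))
            for u in utterances]

pairs = amortization_data(["most students passed"], context_scale=0.8)
```

A neural model trained on `pairs` would then produce the context-adjusted program directly at inference time, without the separate pragmatic step.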
2306.12672 | 228 | have used simpler models to bootstrap more complex ones, either by using simpler models trained on more constrained translation objectives to directly initialize the parameters of more complex ones (P. F. Brown, Della Pietra, Della Pietra, Mercer, et al., 1993; Dong & Lapata, 2018; Petrov, Haghighi, & Klein, 2008), or using simpler grammars as generative data sources to train more complex models, as in general wake-sleep training methods that learn predictive models to amortize the outputs of a generative distribution (Andreas, 2019; Hinton, Dayan, Frey, & Neal, 1995; Jia & Liang, 2016).
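The second strategy above, using a simple grammar as a generative data source in a wake-sleep-style loop, can be sketched as follows. The tiny grammar and the dictionary-based "recognition model" are stand-ins for a real probabilistic grammar and a neural translation model; all names here are illustrative assumptions:

```python
# Illustrative wake-sleep-style bootstrapping: a generative grammar pairs
# programs with language renderings, and a recognition model is fit on the
# "dreamed" data to invert the mapping (utterance -> program).

GRAMMAR = {
    "(flip 0.5)": "a fair coin flip",
    "(uniform 0 10)": "a number between zero and ten",
}

def dream_dataset(n_rounds):
    """'Sleep' phase: render each program in the grammar as an utterance."""
    return [(program, utterance)
            for _ in range(n_rounds)
            for program, utterance in GRAMMAR.items()]

def train_recognition_model(samples):
    """Fit the inverse mapping (utterance -> program) on dreamed pairs."""
    return {utterance: program for program, utterance in samples}

parser = train_recognition_model(dream_dataset(n_rounds=3))
```

The appeal of this design is that the recognition model needs no human-labeled data to get started: the grammar supplies unlimited (program, utterance) supervision.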
Both of these approaches rely, importantly, on the language of thought hypothesis we advance here, which separates the computational problem of learning a translation distribution from the problem of learning the representations and algorithms necessary for general intelligence. This drastically reduces the latent structure and computational complexity we seek to learn from distributional supervision--to learn as efficiently as people, we propose a framework that begins with a substrate for thinking and then suggests avenues for amortizing its outputs or refining translation into this substrate, rather than seeking to learn an effective language of thought itself from natural language data.
# 6 Conclusion | 2306.12672#228 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | null | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [] |
2306.12672 | 229 | # 6 Conclusion
Language is central to our cognition. A theory of meaning in human language should explain how language relates to our thoughts--how it connects to all our faculties for reasoning, and how it can shift our beliefs across nearly every domain of what we know, change how we act or respond across a broad range of situations, and even construct new knowledge that we might later marshal towards yet unspoken questions and goals. This vision lies at the heart of a human theory of language and meaning, but the most expansive visions of AI have also long been ones in which computers share our language, able to meaningfully understand us as we expect to be understood by other people. Today's large language models have made striking advances towards building this reality in many important regards. For the first time, we have built computer systems that can speak fluently back to us, using many more of our own words than ever before. | 2306.12672#229 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | null | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [] |
2306.12672 | 230 | Still, much more is needed to capture our own relationship to language. We do not learn language like a large language model does. We think first, and learn from far less input how language maps into our thoughts. Our own world models and beliefs are not the fragile byproduct of what we can glean from language--they are the basis and core of our cognition, constructed and maintained purposefully towards our intentions and desires. We, of course, are the ones who created the language on which today's machine learning models are now trained. That language is the product and reflection of our own goals and questions, and of conceptual systems of our own invention. We continue to think completely new thoughts, and we continue in turn to produce entirely new language, coining new words and even constructing wholly new languages so that we can build their meaning in the minds of other humans. A cognitive theory of human language must capture and explain these aspects of our language and thought. It might in turn form the basis for AI models that reliably and predictably understand us, and that work in ways that we can interpret, explain, and control. This white paper is simply a sketch towards these ends: an outline of the computational components that could | 2306.12672#230 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | null | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [] |
2306.12672 | 231 | ways that we can interpret, explain, and control. This white paper is simply a sketch towards these ends: an outline of the computational components that could relate human language and a substrate for cognition, and one proposal for how this approach might also incorporate today's language models without requiring them to learn to reliably model the world, draw inferences, or make decisions. We hope it can offer one step towards cognitive and AI models that share the meaning we make from language, and that bridge from language into the vast expanse of our thoughts. | 2306.12672#231 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | null | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [] |
2306.12672 | 232 | # Acknowledgements
We have many people to thank whose comments, critiques, and feedback have influenced this manuscript and shaped it for the better. Among others, we are grateful to Steve Piantadosi, Jesse Snedeker, Kate Davidson, Ellie Pavlick, Paul Pietroski, Thomas Icard, Luca Bonatti, and Susan Carey for their insightful comments on an early version of this manuscript that was presented at the July 2022 McDonnell Network Workshop; as well as for innumerable helpful comments and feedback on developing versions of this manuscript from Joshua Hartshorne, Judy Fan, Robert Hawkins, Katherine Collins, Anna Ivanova, Cedegao Zhang, Hayley Ross, Benjamin Lipkin, Megan Wei, Jiahai Feng, Xuan Tan, Lance Ying, William McCarthy, Laura Schulz, and Tyler Brooke-Wilson. Language from all of these collaborators has invaluably and profoundly informed our thoughts. | 2306.12672#232 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | null | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [] |
2306.12672 | 233 | The authors gratefully acknowledge support from support from the MIT Quest for Intelligence, AFOSR Grant No. FA9550-19-1-0269, the MIT-IBM Watson AI Lab, the DARPA Machine Common Sense Program, the ONR Science of AI Program, and Siegel Family Endowment. This material is based on work supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1745302 and No. 2141064. Additionally, GG was supported by the MIT Presidential Fellowship, and JDA was supported by NSF Grant IIS-2212310.
# References
Abend, O., Kwiatkowski, T., Smith, N. J., Goldwater, S., & Steedman, M. (2017). Bootstrapping language
acquisition. Cognition, 164 , 116â143.
Adolphs, R. (2009). The social brain: neural basis of social knowledge. Annual review of psychology, 60 , 693â716.
Ahn, M., Brohan, A., Brown, N., Chebotar, Y., Cortes, O., David, B., . . . others (2022). Do as I can, not as I say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691 . | 2306.12672#233 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
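The abstract above describes a pipeline in which natural language is translated into a probabilistic program and reasoning proceeds by Bayesian inference over that program. As a minimal sketch of the inference step only: a toy generative world model conditioned on evidence by rejection sampling. This example and its priors are ours for illustration, not code from the paper.

```python
import random

def world_model():
    # Latent world state: is the player strong? (prior: 50/50)
    strong = random.random() < 0.5
    # Observation model: a strong player wins with probability 0.8,
    # a weak one with probability 0.3.
    win = random.random() < (0.8 if strong else 0.3)
    return strong, win

def p_strong_given_win(n_samples=200_000):
    # Condition on the evidence "the player won" by rejection sampling:
    # keep only sampled worlds consistent with the observation.
    kept = [strong for strong, win in (world_model() for _ in range(n_samples)) if win]
    return sum(kept) / len(kept)

print(p_strong_given_win())  # roughly 0.73 (analytically 0.4 / 0.55)
```

Rejection sampling is the simplest conditioning strategy; probabilistic programming systems cited in the reference list below (e.g., Pyro) provide the same semantics with far more efficient inference algorithms.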
2306.12672 | 234 | Allen, K. R., Smith, K. A., & Tenenbaum, J. B. (2020). Rapid trial-and-error learning with simulation supports flexible tool use and physical reasoning. Proceedings of the National Academy of Sciences, 117 (47), 29302–29310.
Alon, U., Xu, F. F., He, J., Sengupta, S., Roth, D., & Neubig, G. (2022). Neuro-symbolic language modeling with automaton-augmented retrieval. In International conference on machine learning.
Amalric, M., & Dehaene, S. (2016, May). Origins of the brain networks for advanced mathematics in expert mathematicians. Proceedings of the National Academy of Sciences of the United States of America, 113 (18), 4909–4917. doi: 10.1073/pnas.1603205113
Amalric, M., & Dehaene, S. (2019, April). A distinct cortical network for mathematical knowledge in the human brain. NeuroImage, 189, 19–31. Retrieved 2019-07-26, from https://linkinghub.elsevier.com/retrieve/pii/S1053811919300011 doi: 10.1016/j.neuroimage.2019.01.001
2306.12672 | 235 | Anderson, J. R. (1990). The adaptive character of thought. Psychology Press.
Andreas, J. (2019). Good-enough compositional data augmentation. arXiv preprint arXiv:1904.09545.
Armeni, I., He, Z.-Y., Gwak, J., Zamir, A. R., Fischer, M., Malik, J., & Savarese, S. (2019). 3d scene graph: A structure for unified semantics, 3d space, and camera. In Proceedings of the ieee/cvf international conference on computer vision (pp. 5664–5673).
Artzi, Y., Das, D., & Petrov, S. (2014). Learning compact lexicons for ccg semantic parsing.
Artzi, Y., Lee, K., & Zettlemoyer, L. (2015, September). Broad-coverage ccg semantic parsing with amr. In Proceedings of the conference on empirical methods in natural language processing (pp. 1699–1710). Lisbon, Portugal: Association for Computational Linguistics. Retrieved from http://aclweb.org/anthology/D15-1198
Artzi, Y., & Zettlemoyer, L. (2013). Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics, 1 (1), 49–62.
2306.12672 | 236 | Bai, J., Zhou, L., Blanco, A., Liu, S., Wei, F., Zhou, M., & Li, Z. (2021). Jointly learning to repair code and generate commit message. ArXiv, abs/2109.12296.
Baillargeon, R. (2004). Infants' physical world. Current directions in psychological science, 13 (3), 89–94.
Baker, C., Saxe, R., & Tenenbaum, J. (2011). Bayesian theory of mind: Modeling joint belief-desire attribution. In Proceedings of the annual meeting of the cognitive science society (Vol. 33).
Baker, C. L., Saxe, R., & Tenenbaum, J. B. (2009). Action understanding as inverse planning. Cognition, 113 (3), 329–349.
Baker, C. L., Tenenbaum, J. B., & Saxe, R. R. (2007). Goal inference as inverse planning. In Proceedings of the annual meeting of the cognitive science society (Vol. 29).
2306.12672 | 237 | Bar-Zeev, A. (2003). Scenegraphs: Past, present and future. Último acesso em, 13.
Basso, A., & Capitani, E. (1985, May). Spared musical abilities in a conductor with global aphasia and ideomotor apraxia. Journal of Neurology, Neurosurgery, and Psychiatry, 48 (5), 407–412. Retrieved 2020-08-03, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1028326/
Battaglia, P. W., Hamrick, J. B., & Tenenbaum, J. B. (2013). Simulation as an engine of physical scene understanding. Proceedings of the National Academy of Sciences, 110 (45), 18327–18332.
Bek, J., Blades, M., Siegal, M., & Varley, R. A. (2010, May). Language and spatial reorientation: evidence from severe aphasia. Journal of Experimental Psychology. Learning, Memory, and Cognition, 36 (3), 646–658. doi: 10.1037/a0018281
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 acm conference on fairness, accountability, and transparency (pp. 610–623).
Biernaskie, J. M., Walker, S. C., & Gegear, R. J. (2009). Bumblebees learn to forage like bayesians. The American Naturalist, 174 (3), 413–423.
Biggio, L., Bendinelli, T., Neitz, A., Lucchi, A., & Parascandolo, G. (2021). Neural symbolic regression that scales. In International conference on machine learning (pp. 936–945).
Bingham, E., Chen, J. P., Jankowiak, M., Obermeyer, F., Pradhan, N., Karaletsos, T., . . . Goodman, N. D. (2019). Pyro: Deep universal probabilistic programming. J. Mach. Learn. Res., 20, 28:1–28:6. Retrieved from http://jmlr.org/papers/v20/18-403.html
Blank, I. A., & Fedorenko, E. (2017, October). Domain-general brain regions do not track linguistic input as closely as language-selective regions. Journal of Neuroscience, 37 (41), 9999–10011. Retrieved 2019-11-06, from https://www.jneurosci.org/content/37/41/9999 doi: 10.1523/JNEUROSCI.3642-16.2017
Blank, I. A., Kanwisher, N., & Fedorenko, E. (2014, September). A functional dissociation between language and multiple-demand systems revealed in patterns of BOLD signal fluctuations. Journal of Neurophysiology, 112 (5), 1105–1118. doi: 10.1152/jn.00884.2013
Block, N. (1998). Conceptual role semantics.
Bloom, P. (2002). How children learn the meanings of words. MIT press.
Bolton, A. D., Haesemeyer, M., Jordi, J., Schaechtle, U., Saad, F. A., Mansinghka, V. K., . . . Engert, F. (2019). Elements of a stochastic 3d prediction engine in larval zebrafish prey capture. ELife, 8, e51975.
Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., . . . others (2021). On the
opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258 . | 2306.12672#240 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
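The abstract above describes a pipeline in which a language model translates natural-language utterances into expressions in a probabilistic programming language, and Bayesian inference over the resulting program supports reasoning. Below is a minimal illustrative sketch of that idea, not the paper's implementation: the world model, the two-flip scenario, and all names are invented for illustration, and a hand-written lookup table stands in for the LLM translation step.

```python
import random

# Illustrative sketch only: a toy "rational meaning construction" pipeline.
# A real system would use an LLM to translate utterances into probabilistic
# program code; here a hand-written lookup table stands in for that step.

def world_model():
    """Generative world model: two flips of a coin with unknown bias."""
    bias = random.choice([0.3, 0.5, 0.9])            # prior over coin bias
    flips = [random.random() < bias for _ in range(2)]
    return {"bias": bias, "flips": flips}

# Stand-in for LLM translation: utterance -> condition on the world model.
translations = {
    "both flips came up heads": lambda w: all(w["flips"]),
    "at least one flip was heads": lambda w: any(w["flips"]),
}

def posterior_mean_bias(utterance, n=20000):
    """Rejection sampling: condition the world model on the utterance's meaning."""
    condition = translations[utterance]
    accepted = [w["bias"] for w in (world_model() for _ in range(n)) if condition(w)]
    return sum(accepted) / len(accepted)

# Observing "both heads" shifts the posterior toward the high-bias coin,
# well above the prior mean bias of about 0.57.
print(posterior_mean_bias("both flips came up heads"))
```

The point of the sketch is the division of labor the abstract names: translation produces a symbolic condition, and a generic inference procedure (here, rejection sampling) does the reasoning.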
2306.12672 | 241 | opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258.
Borgeaud, S., Mensch, A., Hoffmann, J., Cai, T., Rutherford, E., Millican, K., . . . Sifre, L. (2022, February). Improving language models by retrieving from trillions of tokens (No. arXiv:2112.04426). arXiv.
Bowers, M., Olausson, T. X., Wong, L., Grand, G., Tenenbaum, J. B., Ellis, K., & Solar-Lezama, A. (2023, January). Top-down synthesis for library learning. Proc. ACM Program. Lang., 7(POPL). Retrieved from https://doi.org/10.1145/3571234 doi: 10.1145/3571234
2306.12672 | 242 | Branwen, G. (2022). The scaling hypothesis. Gwern.net.
Brereton, D. (2023). Bing AI can't be trusted. https://dkb.blog/p/bing-ai-cant-be-trusted.
Brooke-Wilson, T. (2023). Why is seeing fast and thinking slow? In prep.
Brown, P. F., Della Pietra, S. A., Della Pietra, V. J., Mercer, R. L., et al. (1993). The mathematics of statistical machine translation: Parameter estimation.
Brown, R. (1973). A first language: The early stages.
Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., . . . Amodei, D. (2020, July). Language models are few-shot learners. arXiv:2005.14165 [cs]. Retrieved 2020-08-09, from http://arxiv.org/abs/2005.14165
2306.12672 | 243 | Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., . . . others (2023). Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712.
Bybee, J. L. (1985). Morphology. Typological studies in language.
Cai, Q., & Yates, A. (2013). Large-scale semantic parsing via schema matching and lexicon extension. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 423–433).
Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183–186. Retrieved from https://www.science.org/doi/abs/10.1126/science.aal4230 doi: 10.1126/science.aal4230
2306.12672 | 244 | Carey, S. (1999). Sources of conceptual change. Conceptual development: Piaget's legacy, 293–326.
Carey, S. (2009). The origin of concepts. New York: Oxford University Press.
Carpenter, B., Gelman, A., Hoffman, M. D., Lee, D., Goodrich, B., Betancourt, M., . . . Riddell, A. (2017). Stan: A probabilistic programming language. Journal of Statistical Software, 76(1).
Caucheteux, C., & King, J.-R. (2022, February). Brains and algorithms partially converge in natural language processing. Communications Biology, 5(1), 1–10. Retrieved 2022-07-05, from https://www.nature.com/articles/s42003-022-03036-1 doi: 10.1038/s42003-022-03036-1
Chakraborty, S., Ding, Y., Allamanis, M., & Ray, B. (2022). CODIT: Code editing with tree-based neural models. IEEE Transactions on Software Engineering, 48, 1385–1399.
2306.12672 | 245 | Chakraborty, S., & Ray, B. (2021). On multi-modal learning of editing source code. 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE), 443–455.
Chater, N., & Manning, C. D. (2006). Probabilistic models of language processing and acquisition. Trends in Cognitive Sciences, 10(7), 335–344.
Chater, N., & Oaksford, M. (1999). Ten years of the rational analysis of cognition. Trends in Cognitive Sciences, 3(2), 57–65.
Chater, N., Zhu, J.-Q., Spicer, J., Sundh, J., León-Villagrá, P., & Sanborn, A. (2020). Probabilistic biases meet the Bayesian brain. Current Directions in Psychological Science, 29(5), 506–512.
Chen, M., Tworek, J., Jun, H., Yuan, Q., Pinto, H. P. d. O., Kaplan, J., . . . others (2021). Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374.
2306.12672 | 247 | Collins, K. M., Wong, C., Feng, J., Wei, M., & Tenenbaum, J. B. (2022, May). Structured, flexible, and robust: Benchmarking and improving large language models towards more human-like behavior in out-of-distribution reasoning tasks (No. arXiv:2205.05718). arXiv. doi: 10.48550/arXiv.2205.05718
Colmerauer, A., Kanoui, H., Pasero, R., & Roussel, P. (1972). Un systeme de communication en français [A communication system in French]. Rapport préliminaire de fin de contrat IRIA, Groupe Intelligence Artificielle, Faculté des Sciences de Luminy, Université d'Aix-Marseille II.
In History of programming languages—II (pp. 331–367). New York, NY, USA: Association for Computing Machinery. Retrieved from https://doi.org/10.1145/234286.1057820
Conwell, C., & Ullman, T. D. (2022). Testing relational understanding in text-guided image generation. arXiv preprint arXiv:2208.00005.
2306.12672 | 248 | Coumans, E., & Bai, Y. (2016). Pybullet, a python module for physics simulation for games, robotics and machine learning.
Craik, K. J. W. (1967). The nature of explanation (Vol. 445). CUP Archive. Creswell, A., Shanahan, M., & Higgins, I. (2022, May). Selection-Inference: Exploiting Large Language Models
for Interpretable Logical Reasoning (No. arXiv:2205.09712). arXiv. doi: 10.48550/arXiv.2205.09712
Csibra, G. (2008). Goal attribution to inanimate agents by 6.5-month-old infants. Cognition, 107 (2), 705â717.
Csibra, G., BÃró, S., Koós, O., & Gergely, G. (2003). One-year-old infants use teleological representations of actions productively. Cognitive Science, 27 (1), 111â133.
Cusumano-Towner, M., Bichsel, B., Gehr, T., Vechev, M., & Mansinghka, V. K. (2018). Incremental inference for probabilistic programs. In Proceedings of the 39th acm sigplan conference on programming language design and implementation (pp. 571â585). | 2306.12672#248 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
Cusumano-Towner, M., Lew, A. K., & Mansinghka, V. K. (2020). Automating involutive MCMC using probabilistic and differentiable programming. arXiv preprint arXiv:2007.09871.
REFERENCES
Cusumano-Towner, M. F., Radul, A., Wingate, D., & Mansinghka, V. K. (2017). Probabilistic programs for inferring the goals of autonomous agents. arXiv preprint arXiv:1704.04977.
Cusumano-Towner, M. F., Saad, F. A., Lew, A. K., & Mansinghka, V. K. (2019). Gen: a general-purpose probabilistic programming system with programmable inference. In Proceedings of the 40th ACM SIGPLAN conference on programming language design and implementation (pp. 221–236).
Dalvi, B., Tafjord, O., & Clark, P. (2022). Towards teachable reasoning systems: Using a dynamic memory of user feedback for continual system improvement. In Proceedings of the 2022 conference on empirical methods in natural language processing (pp. 9465–9480).
Dasgupta, I., & Gershman, S. J. (2021). Memory as a computational resource. Trends in Cognitive Sciences, 25(3), 240–251.
Davidson, D., & Rescher, N. (1967). The logical form of action sentences. 1967, 105–122.
Davidson, G., Gureckis, T. M., & Lake, B. (2022). Creativity, compositionality, and common sense in human goal generation. In Proceedings of the annual meeting of the cognitive science society (Vol. 44).
de Avila Belbute-Peres, F., Smith, K., Allen, K., Tenenbaum, J., & Kolter, J. Z. (2018). End-to-end differentiable physics for learning and control. Advances in neural information processing systems, 31.
Dechter, E., Malmaud, J., Adams, R. P., & Tenenbaum, J. B. (2013). Bootstrap learning via modular concept discovery. In Proceedings of the international joint conference on artificial intelligence.
Deen, B., Koldewyn, K., Kanwisher, N., & Saxe, R. (2015, November). Functional Organization of Social Perception and Cognition in the Superior Temporal Sulcus. Cerebral Cortex, 25(11), 4596–4609. Retrieved 2022-07-05, from https://doi.org/10.1093/cercor/bhv111 doi: 10.1093/cercor/bhv111
Deng, F., Zhi, Z., Lee, D., & Ahn, S. (2021). Generative scene graph networks. In International conference on learning representations.
Deniz, F., Nunez-Elizalde, A. O., Huth, A. G., & Gallant, J. L. (2019, September). The Representation of Semantic Information Across Human Cerebral Cortex During Listening Versus Reading Is Invariant to Stimulus Modality. Journal of Neuroscience, 39(39), 7722–7736. Retrieved 2020-03-11, from https://www.jneurosci.org/content/39/39/7722 (Publisher: Society for Neuroscience; Section: Research Articles) doi: 10.1523/JNEUROSCI.0675-19.2019
Dennett, D. C. (2017). From bacteria to Bach and back: The evolution of minds. WW Norton & Company.
De Raedt, L., Kimmig, A., & Toivonen, H. (2007). ProbLog: A probabilistic Prolog and its application in link discovery. In Proceedings of the 20th international joint conference on artificial intelligence (pp. 2468–2473). San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Dickson, B. (2020). The GPT-3 economy. TechTalks.
Ding, Y., Zhang, X., Paxton, C., & Zhang, S. (2023). Task and motion planning with large language models for object rearrangement. arXiv preprint arXiv:2303.06247.
Dohan, D., Xu, W., Lewkowycz, A., Austin, J., Bieber, D., Lopes, R. G., . . . others (2022). Language model cascades. arXiv preprint arXiv:2207.10342.
Dong, L., & Lapata, M. (2018). Coarse-to-fine decoding for neural semantic parsing. arXiv preprint arXiv:1805.04793.
(2017). Solving probability problems in natural language. In Proceedings of the 26th international joint conference on artificial intelligence (pp. 3981–3987). AAAI Press.
Duboue, P. A., & McKeown, K. (2003). Statistical acquisition of content selection rules for natural language
generation.
Dumais, S. T., et al. (2004). Latent semantic analysis. Annu. Rev. Inf. Sci. Technol., 38(1), 188–230.
Dziri, N., Milton, S., Yu, M., Zaiane, O., & Reddy, S. (2022). On the origin of hallucinations in conversational models: Is it the datasets or the models? arXiv preprint arXiv:2204.07931.
Edgington, D. (1992). Validity, uncertainty and vagueness. Analysis, 52(4), 193–204.
Edgington, D. (1997). Vagueness by degrees.
Edwards, B. (2023). AI-powered Bing chat spills its secrets via prompt injection attack. Ars Technica.
Elkind, D. (1962). Children's conceptions of brother and sister: Piaget replication study V. The Journal of Genetic Psychology, 100(1), 129–136.
Ellis, K., Wong, C., Nye, M., Sable-Meyer, M., Cary, L., Morales, L., . . . Tenenbaum, J. B. (2020). DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning. arXiv preprint arXiv:2006.08381.
English, G., Nejad, N. G., Sommerfelt, M., Yanik, M. F., & von der Behrens, W. (2023). Bayesian surprise shapes neural responses in somatosensory cortical circuits. Cell Reports, 42(2).
Erez, T., Tassa, Y., & Todorov, E. (2015). Simulation tools for model-based robotics: Comparison of Bullet, Havok, MuJoCo, ODE and PhysX. In 2015 IEEE international conference on robotics and automation (ICRA) (pp. 4397–4404).
Fang, H., Balakrishnan, A., Jhamtani, H., Bufe, J., Crawford, J., Krishnamurthy, J., . . . Klein, D. (2022). The whole truth and nothing but the truth: Faithful and controllable dialogue response generation with dataflow transduction and constrained decoding. arXiv preprint arXiv:2209.07800.
Fedorenko, E., Behr, M. K., & Kanwisher, N. (2011, September). Functional specificity for high-level linguistic processing in the human brain. Proceedings of the National Academy of Sciences, 108(39), 16428–16433. Retrieved 2020-02-27, from https://www.pnas.org/content/108/39/16428 doi: 10.1073/pnas.1112937108
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
2306.12672 | 256 | Fedorenko, E., Blank, I., Siegelman, M., & Mineroff, Z. (2020, February). Lack of selectivity for syntax relative to word meanings throughout the language network. bioRxiv, 477851. Retrieved 2020-03-13, from https://www.biorxiv.org/content/10.1101/477851v2 (Publisher: Cold Spring Harbor Laboratory Section: New Results) doi: 10.1101/477851
Fedorenko, E., Hsieh, P.-J., Nieto-Castañón, A., Whitfield-Gabrieli, S., & Kanwisher, N. (2010, August). New method for fMRI investigations of language: defining ROIs functionally in individual subjects. Journal of Neurophysiology, 104 (2), 1177–1194. doi: 10.1152/jn.00032.2010
Fedorenko, E., & Varley, R. A. (2016, April). Language and thought are not the same thing: evidence from neuroimaging and neurological patients: Language versus thought. Annals of the New York Academy of Sciences, 1369 (1), 132–153. Retrieved 2019-07-27, from http://doi.wiley.com/10.1111/nyas.13046 doi: 10.1111/nyas.13046
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
2306.12672 | 257 | Field, H. H. (1977). Logic, meaning, and conceptual role. The Journal of Philosophy, 74 (7), 379–409.
Fikes, R. E., & Nilsson, N. J. (1971). STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2 (3-4), 189–208.
Firth, J. (1957). A synopsis of linguistic theory, 1930-1955. Studies in Linguistic Analysis, 10–32.
Fodor, J. A. (1975). The language of thought. Cambridge, MA: Harvard University Press.
Fodor, J. A. (1983). The modularity of mind. MIT Press.
Fodor, J. A., & Pylyshyn, Z. W. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28 (1-2), 3–71.
Fox, D. (2007). Free choice and the theory of scalar implicatures. Presupposition and implicature in compositional semantics, 71–120.
Frank, M. C., Goodman, N. D., & Tenenbaum, J. B. (2009). Using speakers' referential intentions to model early cross-situational word learning. Psychological Science, 20 (5), 578–585.
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
2306.12672 | 258 | Frege, G. (1892). Über Sinn und Bedeutung. Wittgenstein Studien, 1 (1).
Fried, D., Aghajanyan, A., Lin, J., Wang, S. I., Wallace, E., Shi, F., . . . Lewis, M. (2022). InCoder: A generative model for code infilling and synthesis. arXiv, abs/2204.05999.
Gauthier, J., Levy, R., & Tenenbaum, J. B. (2018). Word learning and the acquisition of syntactic–semantic overhypotheses. arXiv preprint arXiv:1805.04988.
Gehr, T., Misailovic, S., & Vechev, M. (2016). PSI: Exact symbolic inference for probabilistic programs. In Computer Aided Verification: 28th International Conference, CAV 2016, Toronto, ON, Canada, July 17-23, 2016, Proceedings, Part I (pp. 62–83).
Gehr, T., Steffen, S., & Vechev, M. (2020). λPSI: Exact inference for higher-order probabilistic programs. In Proceedings of the 41st ACM SIGPLAN Conference on Programming Language Design and Implementation (pp. 883–897).
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
2306.12672 | 259 | Gentner, D., & Goldin-Meadow, S. (2003). Whither Whorf. Language in mind: Advances in the study of language and thought, 3–14.
Gentner, D., & Stevens, A. L. (2014). Mental models. Psychology Press.
Gershman, S., & Goodman, N. (2014). Amortized inference in probabilistic reasoning. In Proceedings of the Annual Meeting of the Cognitive Science Society (Vol. 36).
Gershman, S. J., Horvitz, E. J., & Tenenbaum, J. B. (2015). Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science, 349 (6245), 273–278.
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
2306.12672 | 260 | paradigm for intelligence in brains, minds, and machines. Science, 349 (6245), 273–278.
Gerstenberg, T., & Goodman, N. (2012). Ping pong in church: Productive use of concepts in human probabilistic inference. In Proceedings of the Annual Meeting of the Cognitive Science Society (Vol. 34).
Gibson, E. (2014). Language for communication: Language as rational inference. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers (pp. 781–782).
Gibson, E., Futrell, R., Piantadosi, S. T., Dautriche, I., Mahowald, K., Bergen, L., & Levy, R. (2019). How efficiency shapes human language. Trends in Cognitive Sciences, 23 (5), 389–407.
Ginsberg, M. L. (1987). Readings in nonmonotonic reasoning.
Gleitman, L. (1990). The structural sources of verb meanings. Language Acquisition, 1 (1), 3–55.
Gleitman, L. R., Cassidy, K., Nappa, R., Papafragou, A., & Trueswell, J. C. (2005). Hard words. Language Learning and Development, 1 (1), 23–64.
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
2306.12672 | 261 | learning and development, 1 (1), 23–64.
Goldin-Meadow, S. (2012). 26. Homesign: gesture to language. In Sign language (pp. 601–625). De Gruyter Mouton.
Goldstein, A., Zada, Z., Buchnik, E., Schain, M., Price, A., Aubrey, B., . . . Hasson, U. (2022, March). Shared computational principles for language processing in humans and deep language models. Nature Neuroscience, 25 (3), 369–380. Retrieved 2022-10-31, from https://www.nature.com/articles/s41593-022-01026-4 (Number: 3 Publisher: Nature Publishing Group) doi: 10.1038/s41593-022-01026-4
Goldwater, S., Griffiths, T. L., & Johnson, M. (2009). A Bayesian framework for word segmentation: Exploring the effects of context. Cognition, 112 (1), 21–54.
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
2306.12672 | 262 | the effects of context. Cognition, 112 (1), 21–54.
Golovneva, O., Chen, M., Poff, S., Corredor, M., Zettlemoyer, L., Fazel-Zarandi, M., & Celikyilmaz, A. (2022). Roscoe: A suite of metrics for scoring step-by-step reasoning. arXiv preprint arXiv:2212.07919 . Goodman, N. D., & Frank, M. C. (2016). Pragmatic language interpretation as probabilistic inference. Trends
in cognitive sciences, 20 (11), 818â829.
Goodman, N. D., & Lassiter, D. (2015). Probabilistic semantics and pragmatics: Uncertainty in language and thought. The handbook of contemporary semantic theory, 2nd edition. Wiley-Blackwell. | 2306.12672#262 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 |
2306.12672 | 263 | Goodman, N. D., Mansinghka, V. K., Roy, D. M., Bonawitz, K. A., & Tenenbaum, J. B. (2008). Church: a language for generative models. In D. A. McAllester & P. Myllymäki (Eds.), UAI 2008, proceedings of the 24th conference in uncertainty in artificial intelligence, Helsinki, Finland, July 9-12, 2008 (pp. 220–229). AUAI Press. Retrieved from https://dslpitt.org/uai/displayArticleDetails.jsp?mmnu=1&smnu=2&article_id=1346&proceeding_id=24
Goodman, N. D., Tenenbaum, J. B., & Gerstenberg, T. (2014). Concepts in a probabilistic language of | 2306.12672#263 |
2306.12672 | 264 | Goodman, N. D., Tenenbaum, J. B., & Gerstenberg, T. (2014). Concepts in a probabilistic language of
thought (Tech. Rep.). Center for Brains, Minds and Machines (CBMM). Gopnik, A. (1996). The scientist as child. Philosophy of science, 63 (4), 485–514. Gothoskar, N., Cusumano-Towner, M., Zinberg, B., Ghavamizadeh, M., Pollok, F., Garrett, A., . . . Mansinghka, V. (2021). 3dp3: 3d scene perception via probabilistic programming. Advances in Neural Information Processing Systems, 34, 9600–9612.
Graff, D. (2000). Shifting sands: An interest-relative theory of vagueness. Philosophical topics, 28 (1), 45–81. Grand, G., Blank, I. A., Pereira, F., & Fedorenko, E. (2022). Semantic projection recovers rich human knowledge of multiple object features from word embeddings. Nature Human Behaviour, 1–13. | 2306.12672#264 |
2306.12672 | 265 | Greenberg, M., & Harman, G. (2005). Conceptual role semantics. Griffiths, T. L., Chater, N., Kemp, C., Perfors, A., & Tenenbaum, J. B. (2010). Probabilistic models of cognition: Exploring representations and inductive biases. Trends in cognitive sciences, 14 (8), 357–364. Griffiths, T. L., Steyvers, M., & Tenenbaum, J. B. (2007). Topics in semantic representation. Psychological
review, 114(2), 211–44.
Griffiths, T. L., & Tenenbaum, J. B. (2006). Optimal predictions in everyday cognition. Psychological science, 17 (9), 767–773.
Grimshaw, J. (1981). Form, function, and the language acquisition device. The logical problem of language acquisition, 165, 178.
Harman, G. (1982). Conceptual role semantics. Notre Dame Journal of Formal Logic, 23 (2), 242–256. Harris, Z. S. (1954). Distributional structure. Word. Retrieved from http://psycnet.apa.org/psycinfo/
1956-02807-001 | 2306.12672#265 |
2306.12672 | 266 | 1956-02807-001
Hartshorne, J. K., O'Donnell, T. J., Sudo, Y., Uruwashi, M., Lee, M., & Snedeker, J. (2016). Psych verbs,
the linking problem, and the acquisition of language. Cognition, 157, 268–288.
Hase, P., Diab, M., Celikyilmaz, A., Li, X., Kozareva, Z., Stoyanov, V., . . . Iyer, S. (2021). Do language models have beliefs? Methods for detecting, updating, and visualizing model beliefs. arXiv preprint arXiv:2111.13654.
Heim, I., & Kratzer, A. (1998). Semantics in generative grammar (Vol. 1185). Blackwell Oxford. Hespos, S. J., & Baillargeon, R. (2008). Young infants' actions reveal their developing knowledge of support variables: Converging evidence for violation-of-expectation findings. Cognition, 107 (1), 304–316. Hinton, G. E., Dayan, P., Frey, B. J., & Neal, R. M. (1995). The "wake-sleep" algorithm for unsupervised
neural networks. Science, 268 (5214), 1158–1161. | 2306.12672#266 |
2306.12672 | 267 | neural networks. Science, 268 (5214), 1158–1161.
Ho, M. K., Saxe, R., & Cushman, F. (2022). Planning with theory of mind. Trends in Cognitive Sciences. Hoffman, M. D., Blei, D. M., Wang, C., & Paisley, J. (2013). Stochastic variational inference. Journal of
Machine Learning Research.
Holtzen, S., Van den Broeck, G., & Millstein, T. (2020). Scaling exact inference for discrete probabilistic programs. Proceedings of the ACM on Programming Languages, 4 (OOPSLA), 1–31.
Hu, J., Small, H., Kean, H., Takahashi, A., Zekelman, L., Kleinman, D., . . . Fedorenko, E. (2021, September). The language network supports both lexical access and sentence generation during language production (Tech. Rep.). Retrieved 2021-09-13, from https://www.biorxiv.org/content/10.1101/2021.09.10.459596v1 (Company: Cold Spring Harbor Laboratory Distributor: Cold Spring Harbor Laboratory Label: Cold Spring Harbor Laboratory Section: New Results Type: article) doi: 10.1101/2021.09.10.459596 | 2306.12672#267 |
2306.12672 | 268 | Hughes, N., Chang, Y., & Carlone, L. (2022). Hydra: A real-time spatial perception engine for 3d scene graph construction and optimization. arXiv preprint arXiv:2201.13360.
Icard, T., & Goodman, N. D. (2015). A resource-rational approach to the causal frame problem. In Cogsci. Isomura, T., Parr, T., & Friston, K. (2019). Bayesian filtering with multiple internal models: toward a theory
of social intelligence. Neural computation, 31 (12), 2390–2431.
Ivanova, A. A. (2022). The role of language in broader human cognition: evidence from neuroscience (Unpublished doctoral dissertation). Massachusetts Institute of Technology.
Ivanova, A. A., Mineroff, Z., Zimmerer, V., Kanwisher, N., Varley, R., & Fedorenko, E. (2021). The language network is recruited but not required for nonverbal event semantics. Neurobiology of Language, 2 (2), 176–201. | 2306.12672#268 |
Izacard, G., Lewis, P., Lomeli, M., Hosseini, L., Petroni, F., Schick, T., . . . Grave, E. (2022). Few-shot Learning with Retrieval Augmented Language Models. arXiv preprint arXiv:2208.03299. doi: 10.48550/arXiv.2208.03299
Jackendoff, R. S. (1985). Semantics and cognition (Vol. 8). MIT Press.
Jara-Ettinger, J., Schulz, L. E., & Tenenbaum, J. B. (2020). The naive utility calculus as a unified, quantitative framework for action understanding. Cognitive Psychology, 123, 101334.
Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., . . . Fung, P. (2022). Survey of hallucination in natural language generation. ACM Computing Surveys.
Jia, R., & Liang, P. (2016). Data recombination for neural semantic parsing. arXiv preprint arXiv:1606.03622.
Johnson, J., Hariharan, B., Van Der Maaten, L., Hoffman, J., Fei-Fei, L., Lawrence Zitnick, C., & Girshick, R. (2017). Inferring and executing programs for visual reasoning. In Proceedings of the IEEE international conference on computer vision (pp. 2989–2998).
Johnson, J., Krishna, R., Stark, M., Li, L.-J., Shamma, D., Bernstein, M., & Fei-Fei, L. (2015). Image retrieval using scene graphs. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3668–3678).
Johnson, R. E., Linderman, S., Panier, T., Wee, C. L., Song, E., Herrera, K. J., . . . Engert, F. (2020). Probabilistic models of larval zebrafish behavior reveal structure on many scales. Current Biology, 30(1), 70–82.
Johnson-Laird, P. N. (1980). Mental models in cognitive science. Cognitive Science, 4(1), 71–115.
Johnson-Laird, P. N. (1989). Mental models.
Jones, D. (2010, October). Human kinship, from conceptual structure to grammar. Behavioral and Brain Sciences, 33(5), 367–381. Retrieved 2022-08-09, from https://www.cambridge.org/core/product/identifier/S0140525X10000890/type/journal_article doi: 10.1017/S0140525X10000890
Kaelbling, L. P., & Lozano-Pérez, T. (2011). Hierarchical task and motion planning in the now. In 2011 IEEE international conference on robotics and automation (pp. 1470–1477).
Kaelbling, L. P., & Lozano-Pérez, T. (2013). Integrated task and motion planning in belief space. The International Journal of Robotics Research, 32(9-10), 1194–1227.
Kandpal, N., Deng, H., Roberts, A., Wallace, E., & Raffel, C. (2022). Large language models struggle to
learn long-tail knowledge. arXiv preprint arXiv:2211.08411.
Karpas, E., Abend, O., Belinkov, Y., Lenz, B., Lieber, O., Ratner, N., . . . Tenenholtz, M. (2022, May). MRKL systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning (No. arXiv:2205.00445). arXiv.
Katz, Y., Goodman, N. D., Kersting, K., Kemp, C., & Tenenbaum, J. B. (2008). Modeling semantic cognition as logical dimensionality reduction. Proceedings of the Annual Meeting of the Cognitive Science Society, 30(30), 6.
Kemp, C., & Regier, T. (2012, May). Kinship categories across languages reflect general communicative principles. Science, 336(6084), 1049–1054. Retrieved 2022-08-09, from https://doi.org/10.1126/science.1218811 doi: 10.1126/science.1218811
Kersten, D., Mamassian, P., & Yuille, A. (2004). Object perception as Bayesian inference. Annu. Rev. Psychol., 55, 271–304.
Kersten, D., & Yuille, A. (1996). Introduction: A Bayesian formulation of visual perception. Perception as Bayesian Inference, 1–21.
Khalvati, K., Kiani, R., & Rao, R. P. (2021). Bayesian inference with incomplete knowledge explains perceptual confidence and its deviations from accuracy. Nature Communications, 12(1), 5704.
Klein, D., & Manning, C. D. (2003). Accurate unlexicalized parsing. In Proceedings of the 41st annual meeting of the Association for Computational Linguistics (pp. 423–430).
Klessinger, N., Szczerbinski, M., & Varley, R. A. (2007, January). Algebra in a man with severe aphasia. Neuropsychologia, 45(8), 1642–1648. Retrieved 2022-06-15, from https://www.sciencedirect.com/science/article/pii/S0028393207000280 doi: 10.1016/j.neuropsychologia.2007.01.005
Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., & Iwasawa, Y. (2022). Large language models are zero-shot reasoners. arXiv preprint arXiv:2205.11916.
Krafft, P., Baker, C., Pentland, A., & Tenenbaum, J. (2016). Modeling human ad hoc coordination. In Proceedings of the AAAI conference on artificial intelligence (Vol. 30).
Kulkarni, T. D., Kohli, P., Tenenbaum, J. B., & Mansinghka, V. (2015). Picture: A probabilistic programming language for scene perception. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4390–4399).
Kwiatkowski, T., Zettlemoyer, L., Goldwater, S., & Steedman, M. (2010). Inducing probabilistic CCG grammars from logical form with higher-order unification. In Proceedings of the 2010 conference on empirical methods in natural language processing (pp. 1223–1233).
Kwiatkowski, T., Zettlemoyer, L., Goldwater, S., & Steedman, M. (2011). Lexical generalization in CCG grammar induction for semantic parsing. In Proceedings of the 2011 conference on empirical methods in natural language processing (pp. 1512–1523).
Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2017). Building machines that learn and think like people. Behavioral and Brain Sciences, 40.
Lakoff, G. (1988). Cognitive semantics.
Landauer, T. K., & Dumais, S. T. (1997). A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104(2), 211.
Langkilde, I., & Knight, K. (1998). Generation that exploits corpus-based statistical knowledge. In COLING 1998 Volume 1: The 17th international conference on computational linguistics.
Lassiter, D., & Goodman, N. D. (2017). Adjectival vagueness in a Bayesian model of interpretation. Synthese, 194(10), 3801–3836.
2306.12672 | 276 | Le, T. A., Baydin, A. G., & Wood, F. (2017). Inference compilation and universal probabilistic programming. In Artificial intelligence and statistics (pp. 1338–1348).
Lecours, A. R., & Joanette, Y. (1980, May). Linguistic and other psychological aspects of paroxysmal aphasia. Brain and Language, 10 (1), 1–23. doi: 10.1016/0093-934x(80)90034-6
Lee, T. S., & Mumford, D. (2003). Hierarchical Bayesian inference in the visual cortex. JOSA A, 20 (7), 1434–1448.
Lerner, Y., Honey, C. J., Silbert, L. J., & Hasson, U. (2011, February). Topographic Mapping of a Hierarchy of Temporal Receptive Windows Using a Narrated Story. The Journal of Neuroscience, 31 (8), 2906–2915. Retrieved 2019-12-28, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3089381/ doi:
REFERENCES
10.1523/JNEUROSCI.3684-10.2011 | 2306.12672#276 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
] |
2306.12672 | 277 |
Levin, B. (1993). English verb classes and alternations: A preliminary investigation. University of Chicago Press.
Lew, A., Agrawal, M., Sontag, D., & Mansinghka, V. (2021). PClean: Bayesian data cleaning at scale with domain-specific probabilistic programming. In International conference on artificial intelligence and statistics (pp. 1927–1935).
Lew, A. K., Matheos, G., Zhi-Xuan, T., Ghavamizadeh, M., Gothoskar, N., Russell, S., & Mansinghka, V. K. (2023). SMCP3: Sequential Monte Carlo with probabilistic program proposals. In International conference on artificial intelligence and statistics (pp. 7061–7088).
Lew, A. K., Tessler, M. H., Mansinghka, V. K., & Tenenbaum, J. B. (2020). Leveraging unstructured statistical knowledge in a probabilistic language of thought. In Proceedings of the annual conference of the cognitive science society. | 2306.12672#277 |
2306.12672 | 278 | Lew, A. K., Zhi-Xuan, T., Grand, G., & Mansinghka, V. K. (2023). Sequential Monte Carlo steering of large language models using probabilistic programs. arXiv preprint arXiv:2306.03081.
Lewis, D. (1976). General semantics. In Montague grammar (pp. 1–50). Elsevier.
Li, B. Z., Nye, M., & Andreas, J. (2021). Implicit representations of meaning in neural language models. arXiv preprint arXiv:2106.00737.
Li, Y., Wang, S., & Nguyen, T. N. (2020). DLFix: Context-based code transformation learning for automated program repair. 2020 IEEE/ACM 42nd International Conference on Software Engineering (ICSE), 602–614.
Liang, P. (2016). Learning executable semantic parsers for natural language understanding. Communications of the ACM, 59 (9), 68–76.
Liang, P., Daumé III, H., & Klein, D. (2008). Structure compilation: trading structure for features. In Proceedings of the 25th international conference on machine learning (pp. 592–599).
Lieder, F., & Griffiths, T. L. (2019). Resource-rational analysis: Understanding human cognition as the | 2306.12672#278 |
2306.12672 | 279 | Lieder, F., & Griffiths, T. L. (2019). Resource-rational analysis: Understanding human cognition as the
optimal use of limited computational resources. Behavioral and Brain Sciences, 43.
Lieder, F., & Griffiths, T. L. (2020). Resource-rational analysis: Understanding human cognition as the optimal use of limited computational resources. Behavioral and Brain Sciences, 43, e1.
Lieder, F., Hsu, M., & Griffiths, T. L. (2014). The high availability of extreme events serves resource-rational decision-making. In Proceedings of the annual meeting of the cognitive science society (Vol. 36).
Linzen, T. (2020). How can we accelerate progress towards human-like linguistic generalization? arXiv preprint arXiv:2005.00955.
Lipkin, B., Wong, L., Grand, G., & Tenenbaum, J. B. (2023). Evaluating statistical language models as pragmatic reasoners. arXiv preprint arXiv:2305.01020.
Liu, B., Jiang, Y., Zhang, X., Liu, Q., Zhang, S., Biswas, J., & Stone, P. (2023). LLM+P: Empowering large language models with optimal planning proficiency. arXiv preprint arXiv:2304.11477. | 2306.12672#279 |
2306.12672 | 280 | Liu, H., Ning, R., Teng, Z., Liu, J., Zhou, Q., & Zhang, Y. (2023). Evaluating the logical reasoning ability of ChatGPT and GPT-4. arXiv preprint arXiv:2304.03439.
Liu, R., Wei, J., Gu, S. S., Wu, T.-Y., Vosoughi, S., Cui, C., . . . Dai, A. M. (2022). Mind's eye: Grounded language model reasoning through simulation. arXiv preprint arXiv:2210.05359.
Lowie, R. H. (1930). The kinship terminology of the Bannock Indians. American Anthropologist, 32 (2), 294–299.
Luria, A. R., Tsvetkova, L. S., & Futer, D. S. (1965, June). Aphasia in a composer (V. G. Shebalin). Journal of the Neurological Sciences, 2 (3), 288–292. doi: 10.1016/0022-510x(65)90113-9 | 2306.12672#280 |
2306.12672 | 281 | Lyu, Q., Havaldar, S., Stein, A., Zhang, L., Rao, D., Wong, E., . . . Callison-Burch, C. (2023). Faithful chain-of-thought reasoning. arXiv preprint arXiv:2301.13379.
MacSweeney, M., Woll, B., Campbell, R., McGuire, P. K., David, A. S., Williams, S. C. R., . . . Brammer, M. J. (2002, July). Neural systems underlying British Sign Language and audio-visual English processing in native users. Brain, 125 (7), 1583–1593. Retrieved 2021-01-05, from https://doi.org/10.1093/brain/awf153 doi: 10.1093/brain/awf153
Mahowald, K., Ivanova, A. A., Blank, I. A., Kanwisher, N., Tenenbaum, J. B., & Fedorenko, E. (2023). Dissociating language and thought in large language models: a cognitive perspective. arXiv preprint arXiv:2301.06627.
Mansinghka, V., Selsam, D., & Perov, Y. (2014). Venture: a higher-order probabilistic programming platform
| 2306.12672#281 |
2306.12672 | 282 | Mansinghka, V., Selsam, D., & Perov, Y. (2014). Venture: a higher-order probabilistic programming platform
with programmable inference. arXiv preprint arXiv:1404.0099.
Mansinghka, V. K., Kulkarni, T. D., Perov, Y. N., & Tenenbaum, J. (2013). Approximate Bayesian image interpretation using generative probabilistic graphics programs. Advances in Neural Information Processing Systems, 26.
Mansinghka, V. K., Schaechtle, U., Handa, S., Radul, A., Chen, Y., & Rinard, M. (2018). Probabilistic programming with programmable inference. In Proceedings of the 39th ACM SIGPLAN conference on programming language design and implementation (pp. 603–616).
Marcus, G., Davis, E., & Aaronson, S. (2022). A very preliminary analysis of DALL-E 2. arXiv preprint arXiv:2204.13807.
Maynez, J., Narayan, S., Bohnet, B., & McDonald, R. (2020). On faithfulness and factuality in abstractive
summarization. arXiv preprint arXiv:2005.00661. | 2306.12672#282 |
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
2306.12672 | 283 | summarization. arXiv preprint arXiv:2005.00661 .
McCarthy, J. (1980). Circumscription – a form of non-monotonic reasoning. Artificial intelligence, 13 (1-2), 27–39.
McDermott, D. (1982). A temporal logic for reasoning about processes and plans. Cognitive science, 6 (2), 101–155.
McDermott, D. M. (2000). The 1998 AI planning systems competition. AI magazine, 21 (2), 35–35. Menenti, L., Gierhan, S. M. E., Segaert, K., & Hagoort, P. (2011, September). Shared language: overlap and segregation of the neuronal infrastructure for speaking and listening revealed by functional MRI. Psychological Science, 22 (9), 1173–1182. doi: 10.1177/0956797611418347
Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., & Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems (pp. 3111–3119). | 2306.12672#283 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
2306.12672 | 284 | Milch, B., Marthi, B., Russell, S., Sontag, D., Ong, D. L., & Kolobov, A. (2007). BLOG: Probabilistic models with unknown objects. Statistical relational learning, 373.
Mitchell, A., & Jordan, F. M. (2021, June). The Ontogeny of Kinship Categorization. Journal of Cognition and Culture, 21 (1-2), 152–177. Retrieved 2022-08-09, from https://brill.com/view/journals/jocc/21/1-2/article-p152_8.xml (Publisher: Brill) doi: 10.1163/15685373-12340101
Mollica, F., Bacon, G., Zaslavsky, N., Xu, Y., Regier, T., & Kemp, C. (2021). The forms and meanings of grammatical markers support efficient communication. Proceedings of the National Academy of Sciences, 118 (49), e2025993118. | 2306.12672#284 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
2306.12672 | 285 | Mollica, F., & Piantadosi, S. T. (2022, June). Logical word learning: The case of kinship. Psychonomic Bulletin & Review, 29 (3), 766–799. Retrieved 2022-08-09, from https://doi.org/10.3758/s13423-021-02017-5 doi: 10.3758/s13423-021-02017-5
Montague, R. (1970). English as a formal language. Monti, M. M., Osherson, D. N., Martinez, M. J., & Parsons, L. M.
(2007, September). Functional neuroanatomy of deductive inference: A language-independent distributed network. NeuroImage, 37 (3), 1005–1016. Retrieved 2020-04-16, from http://www.sciencedirect.com/science/article/pii/S1053811907003436 doi: 10.1016/j.neuroimage.2007.04.069
Monti, M. M., Parsons, L. M., & Osherson, D. N. (2012, August). Thought beyond language: neural dissociation of algebra and natural language. Psychological Science, 23 (8), 914–922. doi: 10.1177/0956797612437427 | 2306.12672#285 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
2306.12672 | 286 | Morgan, M. S. (1999). Learning from models. Ideas in Context, 52, 347–388. Mu, J., & Andreas, J. (2020). Compositional explanations of neurons. Advances in Neural Information Processing Systems, 33, 17153–17163.
Nersessian, N. J., et al. (2010). Mental modeling in conceptual change. International Journal on Humanistic Ideology, 3 (01), 11–48.
(2021). Show your work: Scratchpads for intermediate computation with language models. arXiv preprint arXiv:2112.00114 .
Oaksford, M., & Chater, N. (2007). Bayesian rationality: The probabilistic approach to human reasoning. Oxford University Press.
OpenAI. (2023a). Chatgpt: Optimizing language models for dialogue. OpenAI Blog. OpenAI. (2023b). Chatgpt plugins. OpenAI Blog. OpenAI. (2023c). Gpt-4 technical report.
| 2306.12672#286 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
2306.12672 | 287 |
Osgood, C. E. (1952). The nature and measurement of meaning. Psychological bulletin, 49 (3), 197. Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., . . . Lowe, R. (2022). Training language models to follow instructions with human feedback. arXiv. Retrieved from https://arxiv.org/abs/2203.02155 doi: 10.48550/ARXIV.2203.02155
Pan, L., Albalak, A., Wang, X., & Wang, W. Y. (2023). Logic-lm: Empowering large language models with
symbolic solvers for faithful logical reasoning. arXiv preprint arXiv:2305.12295 .
Panthaplackel, S., Nie, P., Gligoric, M., Li, J. J., & Mooney, R. J. (2020). Learning to update natural language comments based on code changes. arXiv preprint arXiv:2004.12169. | 2306.12672#287 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
2306.12672 | 288 | language comments based on code changes. arXiv preprint arXiv:2004.12169. Parsons, T. (1990). Events in the semantics of english: A study in subatomic semantics. Paunov, A. M., Blank, I. A., & Fedorenko, E. (2019, April). Functionally distinct language and Theory of Mind networks are synchronized at rest and during language comprehension. Journal of Neurophysiology, 121 (4), 1244–1265. Retrieved 2019-07-10, from https://www.physiology.org/doi/10.1152/jn.00619.2018 doi: 10.1152/jn.00619.2018
(2022, June). Differential Tracking of Linguistic vs. Mental State Content in Naturalistic Stimuli by Language and Theory of Mind (ToM) Brain Networks. Neurobiology of Language, 1–29. Retrieved 2022-07-05, from https://doi.org/10.1162/nol_a_00071 doi: 10.1162/nol_a_00071
Pearl, J. (1988). Probabilistic reasoning in intelligent systems: networks of plausible inference. Morgan kaufmann. | 2306.12672#288 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
2306.12672 | 289 | Pearl, J. (1988). Probabilistic reasoning in intelligent systems: networks of plausible inference. Morgan kaufmann.
Pearl, J., et al. (2000). Models, reasoning and inference. Cambridge, UK: Cambridge University Press, 19 (2). Pednault, E. P. (1989). ADL: exploring the middle ground between STRIPS and the situation calculus. KR, 89, 324–332.
Pereira, F. C., & Shieber, S. M. (2002). Prolog and natural-language analysis. Microtome Publishing. Perfors, A., Tenenbaum, J. B., & Regier, T. (2011). The learnability of abstract syntactic principles.
Cognition, 118 (3), 306–338.
Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., & Zettlemoyer, L. (2018). Deep contextualized word representations. CoRR, abs/1802.05365. arXiv preprint arXiv:1802.05365.
Petrov, S., Haghighi, A., & Klein, D. (2008). Coarse-to-fine syntactic machine translation using language projections. In Proceedings of the 2008 conference on empirical methods in natural language processing (pp. 108–116). | 2306.12672#289 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
2306.12672 | 290 | Philippe, R. (1972). Définition et traitement de l'égalité formelle en démonstration automatique (Unpublished doctoral dissertation). Thèse de 3ième cycle, Groupe Intelligence Artificielle, Faculté des Sciences . . . .
Piaget, J. (1951). Judgement and reasoning in the child. London: Routledge and Kegan Paul. Piantadosi, S. T. (2023). Modern language models refute Chomsky's approach to language. Lingbuzz Preprint,
lingbuzz, 7180.
Piantadosi, S. T., Tenenbaum, J. B., & Goodman, N. D. (2012). Bootstrapping in a language of thought: A formal model of numerical concept learning. Cognition, 123 (2), 199–217.
Pietroski, P. M. (2018). Conjoining meanings: Semantics without truth values. Oxford University Press. Pinker, S. (1984). Language learnability and language development. Pinker, S. (1998). Words and rules. Lingua, 106 (1-4), 219–242. Pinker, S., & MacWhinney, B. (1987). The bootstrapping problem in language acquisition. Mechanisms of
language acquisition, 399–441.
2306.12672 | 291 | language acquisition, 399–441.
Pollard, C., & Sag, I. A. (1994). Head-driven phrase structure grammar. University of Chicago Press. Posner, G. J., Strike, K. A., Hewson, P. W., & Gertzog, W. (1982). Toward a theory of conceptual change.
Science education, 66 (2), 211–227.
Pramod, R., Cohen, M. A., Tenenbaum, J. B., & Kanwisher, N. (2022). Invariant representation of physical
stability in the human brain. Elife, 11, e71736.
Pyers, J. E., Shusterman, A., Senghas, A., Spelke, E. S., & Emmorey, K. (2010). Evidence from an emerging sign language reveals that language supports spatial cognition. Proceedings of the National Academy of Sciences, 107 (27), 12116–12120.
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are
unsupervised multitask learners. OpenAI Blog, 1 (8).
2306.12672 | 292 | unsupervised multitask learners. OpenAI Blog, 1 (8).
Rae, J. W., Borgeaud, S., Cai, T., Millican, K., Hoffmann, J., Song, F., . . . others (2021). Scaling language models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446.
Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., & Chen, M. (2022). Hierarchical text-conditional image
generation with CLIP latents. arXiv preprint arXiv:2204.06125.
Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., . . . Sutskever, I. (2021). Zero-shot
text-to-image generation. In International conference on machine learning (pp. 8821–8831).
Ranganath, R., Gerrish, S., & Blei, D. (2014). Black box variational inference. In Artificial intelligence and
statistics (pp. 814–822).
2306.12672 | 293 | statistics (pp. 814–822).
Reed, S., Zolna, K., Parisotto, E., Colmenarejo, S. G., Novikov, A., Barth-Maron, G., . . . others (2022). A generalist agent. arXiv preprint arXiv:2205.06175.
Regev, M., Honey, C. J., Simony, E., & Hasson, U. (2013, October). Selective and Invariant Neural Responses to Spoken and Written Narratives. Journal of Neuroscience, 33 (40), 15978–15988. Retrieved 2020-10-02, from https://www.jneurosci.org/content/33/40/15978 (Publisher: Society for Neuroscience Section: Articles) doi: 10.1523/JNEUROSCI.1580-13.2013
Reid, M., & Neubig, G. (2022). Learning to model editing processes. ArXiv, abs/2205.12374. Ribeiro, D., Wang, S., Ma, X., Zhu, H., Dong, R., Kong, D., . . . others (2023). Street: A multi-task structured
reasoning and explanation benchmark. arXiv preprint arXiv:2302.06729.
2306.12672 | 294 | reasoning and explanation benchmark. arXiv preprint arXiv:2302.06729.
Rips, L. J., & Hespos, S. J. (2015). Divisions of the physical world: Concepts of objects and substances. Psychological bulletin, 141 (4), 786.
Roose, K. (2023). Bing's A.I. chat: "I want to be alive." The New York Times. Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use
interpretable models instead. Nature machine intelligence, 1 (5), 206–215.
Rudin, C., Chen, C., Chen, Z., Huang, H., Semenova, L., & Zhong, C. (2022). Interpretable machine learning: Fundamental principles and 10 grand challenges. Statistic Surveys, 16, 1–85.
Russell, S., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Hoboken, NJ: Pearson.
(2019). Bayesian synthesis of probabilistic programs for automatic data modeling. Proceedings of the ACM on Programming Languages, 3 (POPL), 1–32.
2306.12672 | 295 | (2019). Bayesian synthesis of probabilistic programs for automatic data modeling. Proceedings of the ACM on Programming Languages, 3 (POPL), 1–32.
Saad, F. A., Rinard, M. C., & Mansinghka, V. K. (2021). SPPL: Probabilistic programming with fast exact symbolic inference. In Proceedings of the 42nd ACM SIGPLAN international conference on programming language design and implementation (pp. 804–819).
Saffran, J. R., Senghas, A., & Trueswell, J. C. (2001). The acquisition of language by children. Proceedings of the National Academy of Sciences, 98 (23), 12874–12875.
2306.12672 | 296 | Sahlgren, M. (2008). The distributional hypothesis. Italian Journal of Linguistics, 20, 33–53. Sanborn, A. N., & Chater, N. (2017). The sampling brain. Trends in Cognitive Sciences, 21 (7), 492–493. Sapir, E. (1929). The status of linguistics as a science. Language, 207–214. Saxe, R., Moran, J. M., Scholz, J., & Gabrieli, J. (2006). Overlapping and non-overlapping brain regions for theory of mind and self reflection in individual subjects. Social cognitive and affective neuroscience, 1 (3), 229–234.
Saxe, R., & Powell, L. J. (2006). It's the thought that counts: Specific brain regions for one component of theory of mind. Psychological Science, 17(8), 692–699. | 2306.12672#296 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
Schick, T., Dwivedi-Yu, J., Dessì, R., Raileanu, R., Lomeli, M., Zettlemoyer, L., ... Scialom, T. (2023). Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761.
Schrimpf, M., Blank, I. A., Tuckute, G., Kauf, C., Hosseini, E. A., Kanwisher, N., ... Fedorenko, E. (2021, November). The neural architecture of language: Integrative modeling converges on predictive processing. Proceedings of the National Academy of Sciences, 118(45). Retrieved 2021-12-12, from https://www.pnas.org/content/118/45/e2105646118 (Publisher: National Academy of Sciences; Section: Biological Sciences) doi: 10.1073/pnas.2105646118
Schuler, K. K. (2005). VerbNet: A broad-coverage, comprehensive verb lexicon [PhD thesis]. University of Pennsylvania. Retrieved from http://verbs.colorado.edu/~kipper/Papers/dissertation.pdf (ISBN: 0-542-20049-X)
Schwettmann, S., Fischer, J., Tenenbaum, J., & Kanwisher, N. (2018). Evidence for an intuitive physics engine in the human brain. In CogSci.
Schwettmann, S., Tenenbaum, J. B., & Kanwisher, N. (2019). Invariant representations of mass in the human brain. eLife, 8, e46619.
REFERENCES
Scott, R. M., & Baillargeon, R. (2013). Do infants really expect agents to act efficiently? A critical test of the rationality principle. Psychological Science, 24(4), 466–474.
Scott, T. L., Gallée, J., & Fedorenko, E. (2017). A new fun and robust version of an fMRI localizer for the frontotemporal language system. Cognitive Neuroscience, 8(3), 167–176. doi: 10.1080/17588928.2016.1201466
Seaman, I. R., van de Meent, J.-W., & Wingate, D. (2018). Nested reasoning about autonomous agents using probabilistic programs. arXiv preprint arXiv:1812.01569.
Senghas, A., Kita, S., & Ozyurek, A. (2004). Children creating core properties of language: Evidence from an emerging sign language in Nicaragua. Science, 305(5691), 1779–1782.
Shain, C., Blank, I. A., van Schijndel, M., Schuler, W., & Fedorenko, E. (2020). fMRI reveals language-specific predictive coding during naturalistic sentence comprehension. Neuropsychologia, 138, 107307. doi: 10.1016/j.neuropsychologia.2019.107307
Shain, C., Paunov, A. M., Chen, X., Lipkin, B., & Fedorenko, E. (2022, July). No evidence of theory of mind reasoning in the human language network. bioRxiv. Retrieved 2022-07-20, from https://www.biorxiv.org/content/10.1101/2022.07.18.500516v1 (Section: New Results) doi: 10.1101/2022.07.18.500516
Shan, C.-c., & Ramsey, N. (2017). Exact Bayesian inference by symbolic disintegration. In Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages (pp. 130–144).
Shin, R., Brockschmidt, M., Allamanis, M., & Polozov, O. (2018). Program synthesis with learned code idioms.
Silbert, L. J., Honey, C. J., Simony, E., Poeppel, D., & Hasson, U. (2014, October). Coupled neural systems underlie the production and comprehension of naturalistic narrative speech. Proceedings of the National Academy of Sciences, 111(43), E4687–E4696. Retrieved 2021-09-06, from https://www.pnas.org/content/111/43/E4687 (Publisher: National Academy of Sciences; Section: PNAS Plus) doi: 10.1073/pnas.1323812111
Smith, K., Frank, S., Rolando, S., Kirby, S., & Loy, J. E. (2020). Simple kinship systems are more learnable. Proceedings of the Annual Meeting of the Cognitive Science Society, 7.
Smith, L., & Yu, C. (2008). Infants rapidly learn word-referent mappings via cross-situational statistics. Cognition, 106(3), 1558–1568.
Snedeker, J. (2016). Clean mapping: A sketchy story about how conceptual structure could shape language acquisition and some evidence suggesting that it just might be true.
Sorkin, A. R., Warner, B., Kessler, S., Hirsch, L., & Livni, E. (2023). Revenge of the chatbots. The New York Times.
Spelke, E. S. (1990). Principles of object perception. Cognitive Science, 14(1), 29–56.
Spelke, E. S. (2022). What babies know: Core knowledge and composition, Volume 1 (Vol. 1). Oxford University Press.
Spelke, E. S., Gutheil, G., & Van de Walle, G. (1995). The development of object perception. Visual cognition: An invitation to cognitive science, 2, 297–330.
Spelke, E. S., & Kinzler, K. D. (2007). Core knowledge. Developmental Science, 10(1), 89–96.
Steedman, M. (2001). The syntactic process. MIT Press.
Steedman, M. (2011). Combinatory categorial grammar.
Sumers, T. R., Hawkins, R. D., Ho, M. K., Griffiths, T. L., & Hadfield-Menell, D. (2022). How to talk so your robot will learn: Instructions, descriptions, and pragmatics. arXiv preprint arXiv:2206.07870.
Suster, S., Fivez, P., Totis, P., Kimmig, A., Davis, J., De Raedt, L., & Daelemans, W. (2021). Mapping probability word problems to executable representations. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (pp. 3627–3640).
Talmy, L. (1988). Force dynamics in language and cognition. Cognitive Science, 12(1), 49–100.
Tangermann, V. (2023). Microsoft's Bing AI is leaking maniac alternate personalities named Venom and Fury. Futurism.
2306.12672 | 303 | Futurism.
Téglás, E., Vul, E., Girotto, V., Gonzalez, M., Tenenbaum, J. B., & Bonatti, L. L. (2011). Pure Reasoning in 12-Month-Old Infants as Probabilistic Inference. Science, 27 (332), 1054–1059.
Tellex, S., Kollar, T., Dickerson, S., Walter, M., Banerjee, A., Teller, S., & Roy, N. (2011). Understanding natural language commands for robotic navigation and mobile manipulation. In Proceedings of the AAAI conference on artificial intelligence (Vol. 25, pp. 1507–1514).
REFERENCES
Tenenbaum, J. B., Kemp, C., Griffiths, T. L., & Goodman, N. D. (2011). How to grow a mind: Statistics, structure, and abstraction. Science, 331 (6022), 1279–1285.
Tenney, I., Das, D., & Pavlick, E. (2019). BERT rediscovers the classical NLP pipeline. arXiv preprint arXiv:1905.05950.
Tessler, M. H., Tenenbaum, J. B., & Goodman, N. D. (2022). Logic, probability, and pragmatics in syllogistic | 2306.12672#303 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
2306.12672 | 304 | Tessler, M. H., Tenenbaum, J. B., & Goodman, N. D. (2022). Logic, probability, and pragmatics in syllogistic
reasoning. Topics in Cognitive Science, 14 (3), 574–601.
Thoppilan, R., De Freitas, D., Hall, J., Shazeer, N., Kulshreshtha, A., Cheng, H.-T., . . . Le, Q. (2022, February). LaMDA: Language Models for Dialog Applications (No. arXiv:2201.08239). arXiv.
Todorov, E., Erez, T., & Tassa, Y. (2012). MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 5026–5033).
Tolpin, D., van de Meent, J.-W., Yang, H., & Wood, F. (2016). Design and implementation of probabilistic programming language Anglican. In Proceedings of the 28th symposium on the implementation and application of functional programming languages (pp. 1–12).
Tomasello, M. (2009). The usage-based theory of language acquisition. In The cambridge handbook of child
language (pp. 69–87). Cambridge Univ. Press. | 2306.12672#304 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
2306.12672 | 305 | Tomasello, M. (2009). The usage-based theory of language acquisition. In The cambridge handbook of child
language (pp. 69–87). Cambridge Univ. Press.
Tomasello, M. (2022). The evolution of agency: Behavioral organization from lizards to humans. MIT Press.
Ullman, T. D. (2023). Large language models fail on trivial alterations to theory-of-mind tasks. arXiv
preprint arXiv:2302.08399 .
Ullman, T. D., Spelke, E., Battaglia, P., & Tenenbaum, J. B. (2017). Mind games: Game engines as an architecture for intuitive physics. Trends in cognitive sciences, 21 (9), 649–665.
Valmeekam, K., Sreedharan, S., Marquez, M., Olmo, A., & Kambhampati, S. (2023). On the planning abilities of large language models (a critical investigation with a proposed benchmark). arXiv preprint arXiv:2302.06706 . | 2306.12672#305 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
2306.12672 | 306 | Varley, R. A. (1998). Aphasic language, aphasic thought: an investigation of propositional thinking in an a-propositional aphasic. In P. Carruthers & J. Boucher (Eds.), Language and Thought: Interdisciplinary Themes (pp. 128–145). Cambridge University Press. doi: 10.1017/CBO9780511597909.009
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., . . . Polosukhin, I. (2017).
Attention is all you need. Advances in neural information processing systems, 30 .
Vul, E., Goodman, N., Griffiths, T. L., & Tenenbaum, J. B. (2014). One and done? Optimal decisions from very few samples. Cognitive Science, 38 (4), 599–637.
Vul, E., & Pashler, H. (2008). Measuring the crowd within: Probabilistic representations within individuals. Psychological Science, 19 (7), 645–647.
Wang, G., Xie, Y., Jiang, Y., Mandlekar, A., Xiao, C., Zhu, Y., . . . Anandkumar, A. (2023). Voyager: An | 2306.12672#306 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
2306.12672 | 307 | open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291 .
Wang, R. F., & Spelke, E. S. (2002). Human spatial representation: Insights from animals. Trends in cognitive sciences, 6 (9), 376–382.
Watters, N., Tenenbaum, J., & Jazayeri, M. (2021). Modular object-oriented games: a task framework for reinforcement learning, psychology, and neuroscience. arXiv preprint arXiv:2102.12616 .
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Chi, E., Le, Q., & Zhou, D. (2022). Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903 .
Weir, N., & Van Durme, B. (2022, September). Dynamic Generation of Interpretable Inference Rules in a Neuro-Symbolic Expert System (No. arXiv:2209.07662). arXiv.
Wellman, H. M., & Gelman, S. A. (1992). Cognitive development: Foundational theories of core domains. Annual review of psychology, 43 (1), 337â375. | 2306.12672#307 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
2306.12672 | 308 | White, J., Mu, J., & Goodman, N. D. (2020). Learning to refer informatively by amortizing pragmatic
reasoning. arXiv preprint arXiv:2006.00418 .
Whorf, B. (1956). Language, thought, and reality: Selected writings.
Wilson, S. M., Molnar-Szakacs, I., & Iacoboni, M. (2008, January). Beyond Superior Temporal Cortex: Intersubject Correlations in Narrative Speech Comprehension. Cerebral Cortex, 18 (1), 230–242. Retrieved 2022-06-19, from https://doi.org/10.1093/cercor/bhm049 doi: 10.1093/cercor/bhm049
Wingate, D., Stuhlmüller, A., & Goodman, N. D. (2011). Lightweight implementations of probabilistic programming languages via transformational compilation. In Proceedings of the fourteenth international conference on artificial intelligence and statistics (pp. 770–778).
Wiseman, S., Shieber, S. M., & Rush, A. M. (2017). Challenges in data-to-document generation. arXiv preprint arXiv:1707.08052.
Witty, S., Lew, A., Jensen, D., & Mansinghka, V. (2019). Bayesian causal inference via probabilistic program | 2306.12672#308 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
2306.12672 | 309 | Witty, S., Lew, A., Jensen, D., & Mansinghka, V. (2019). Bayesian causal inference via probabilistic program
synthesis. arXiv preprint arXiv:1910.14124 .
Wolfram, S. (2023). ChatGPT gets its "Wolfram Superpowers". Retrieved from https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/
Wong, C., Ellis, K. M., Tenenbaum, J., & Andreas, J. (2021). Leveraging language to learn program abstractions and search heuristics. In International conference on machine learning (pp. 11193–11204).
Wong, Y. W., & Mooney, R. (2007). Learning synchronous grammars for semantic parsing with lambda calculus. In Proceedings of the 45th annual meeting of the association of computational linguistics (pp. 960–967).
Wu, J., Tenenbaum, J. B., & Kohli, P. (2017). Neural scene de-rendering. In Proceedings of the ieee conference on computer vision and pattern recognition (pp. 699â707). | 2306.12672#309 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | [
{
"id": "1810.04805"
},
{
"id": "2302.04761"
},
{
"id": "2108.07258"
},
{
"id": "2201.13360"
},
{
"id": "1802.05365"
},
{
"id": "1707.08052"
},
{
"id": "2205.09712"
},
{
"id": "2304.03439"
},
{
"id": "1910.01442"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2007.09871"
},
{
"id": "2005.00955"
},
{
"id": "2302.05128"
},
{
"id": "1812.01569"
},
{
"id": "2305.12295"
},
{
"id": "2208.00005"
},
{
"id": "2304.11477"
},
{
"id": "2107.03374"
},
{
"id": "2301.13379"
},
{
"id": "1904.09545"
},
{
"id": "2004.12169"
},
{
"id": "2301.12867"
},
{
"id": "2209.07800"
},
{
"id": "2303.06247"
},
{
"id": "2205.05718"
},
{
"id": "2112.11446"
},
{
"id": "2207.10342"
},
{
"id": "2212.07919"
},
{
"id": "1910.14124"
},
{
"id": "2102.12616"
},
{
"id": "2110.14168"
},
{
"id": "1805.04988"
},
{
"id": "2206.07870"
},
{
"id": "2305.16291"
},
{
"id": "1704.04977"
},
{
"id": "2005.14165"
},
{
"id": "2306.03081"
},
{
"id": "2204.13807"
},
{
"id": "2204.07931"
},
{
"id": "2305.01020"
},
{
"id": "1606.03622"
},
{
"id": "2211.08411"
},
{
"id": "2205.06175"
},
{
"id": "2006.00418"
},
{
"id": "2205.00445"
},
{
"id": "2006.08381"
},
{
"id": "2301.06627"
},
{
"id": "1810.02338"
},
{
"id": "2106.00737"
},
{
"id": "2204.06125"
},
{
"id": "2302.06706"
},
{
"id": "2210.05359"
},
{
"id": "2205.11916"
},
{
"id": "2201.08239"
},
{
"id": "1905.05950"
},
{
"id": "2111.13654"
},
{
"id": "2204.01691"
},
{
"id": "1805.04793"
},
{
"id": "2303.12712"
},
{
"id": "2112.00114"
},
{
"id": "2209.07662"
},
{
"id": "2302.06729"
},
{
"id": "2112.04426"
},
{
"id": "2205.09735"
},
{
"id": "2005.00661"
}
] |
2306.12672 | 310 | Wu, J., Yildirim, I., Lim, J. J., Freeman, B., & Tenenbaum, J. (2015a). Galileo: Perceiving physical object properties by integrating a physics engine with deep learning. Advances in Neural Information Processing Systems, 28.
Wu, J., Yildirim, I., Lim, J. J., Freeman, B., & Tenenbaum, J. (2015b). Galileo: Perceiving physical object properties by integrating a physics engine with deep learning. In Advances in Neural Information Processing Systems (Vol. 28). Curran Associates, Inc.
Wu, M., & Goodman, N. (2022). Foundation posteriors for approximate probabilistic inference. arXiv preprint arXiv:2205.09735.
Wu, S. A., Wang, R. E., Evans, J. A., Tenenbaum, J. B., Parkes, D. C., & Kleiman-Weiner, M. (2021). Too many cooks: Bayesian inference for coordinating multi-agent collaboration. Topics in Cognitive Science, 13(2), 414–432. | 2306.12672#310 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought |
2306.12672 | 311 | Xie, Y., Yu, C., Zhu, T., Bai, J., Gong, Z., & Soh, H. (2023). Translating natural language to planning goals with large-language models. arXiv preprint arXiv:2302.05128.
Xu, K., Srivastava, A., Gutfreund, D., Sosa, F., Ullman, T. D., Tenenbaum, J., & Sutton, C. (2021). A Bayesian-symbolic approach to reasoning and learning in intuitive physics. Advances in Neural Information Processing Systems, 34, 2478–2490.
Yang, Y., & Piantadosi, S. T. (2022). One model for the learning of language. Proceedings of the National Academy of Sciences, 119(5), e2021865119.
Yasunaga, M., & Liang, P. (2020). Graph-based, self-supervised program repair from diagnostic feedback. CoRR, abs/2005.10636. Retrieved from https://arxiv.org/abs/2005.10636 | 2306.12672#311 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought |
2306.12672 | 312 | Yi, K., Gan, C., Li, Y., Kohli, P., Wu, J., Torralba, A., & Tenenbaum, J. B. (2019). CLEVRER: Collision events for video representation and reasoning. arXiv preprint arXiv:1910.01442.
(2018). Neural-symbolic VQA: Disentangling reasoning from vision and language understanding. arXiv preprint arXiv:1810.02338, 31, 1031–1042.
Yildirim, I., Belledonne, M., Freiwald, W., & Tenenbaum, J. (n.d.). Efficient inverse graphics in biological face processing, 77.
Ying, L., Collins, K., Wei, M., Zhang, C., Tan, Z.-X., Weller, A., … Wong, L. (2023). The neuro-symbolic inverse planning engine (NIPE): Modeling probabilistic social inferences from linguistic inputs. ICML ToM Workshop 2023.
Yuille, A., & Kersten, D. (2006). Vision as Bayesian inference: Analysis by synthesis? Trends in Cognitive Sciences, 10(7), 301–308. | 2306.12672#312 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought |
2306.12672 | 313 | Yuille, A., & Kersten, D. (2006). Vision as Bayesian inference: Analysis by synthesis? Trends in Cognitive Sciences, 10(7), 301–308.
Zaslavsky, N., Kemp, C., Regier, T., & Tishby, N. (2018). Efficient compression in color naming and its evolution. Proceedings of the National Academy of Sciences, 115(31), 7937–7942.
Zelikman, E., Wu, Y., Mu, J., & Goodman, N. (2022). STaR: Bootstrapping reasoning with reasoning. Advances in Neural Information Processing Systems, 35, 15476–15488.
Zhang, C. E., Wong, L., Grand, G., & Tenenbaum, J. B. (2023). Grounded physical language understanding with probabilistic programs and simulated worlds. In Proceedings of the Annual Conference of the Cognitive Science Society (to appear).
Zhang, J., Panthaplackel, S., Nie, P., Li, J. J., & Gligorić, M. (2022). CoditT5: Pretraining for source code and natural language editing. arXiv preprint arXiv:2208.05446.
Zhi-Xuan, T. (2022). PDDL.jl: An extensible interpreter and compiler interface for fast and flexible AI planning | 2306.12672#313 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought |
2306.12672 | 314 | Zhi-Xuan, T. (2022). PDDL.jl: An extensible interpreter and compiler interface for fast and flexible AI planning (Unpublished doctoral dissertation). Massachusetts Institute of Technology.
Zhi-Xuan, T., Mann, J., Silver, T., Tenenbaum, J., & Mansinghka, V. (2020). Online Bayesian goal inference for boundedly rational planning agents. Advances in Neural Information Processing Systems, 33, 19238–19250.
Zhuo, T. Y., Huang, Y., Chen, C., & Xing, Z. (2023). Exploring AI ethics of ChatGPT: A diagnostic analysis. arXiv preprint arXiv:2301.12867.
Zinberg, B., Cusumano-Towner, M., & Vikash, K. M. (2019). Structured differentiable models of 3D scenes via generative scene graphs. In Workshop on Perception as Generative Reasoning, NeurIPS, submitted September.
Appendices | 2306.12672#314 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought |