From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought

Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum

http://arxiv.org/pdf/2306.12672

How does language inform our downstream thinking? In particular, how do humans make meaning from language--and how can we leverage a theory of linguistic meaning to build machines that think in more human-like ways? In this paper, we propose rational meaning construction, a computational framework for language-informed thinking that combines neural language models with probabilistic models for rational inference. We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought (PLoT)--a general-purpose symbolic substrate for generative world modeling. Our architecture integrates two computational tools that have not previously come together: we model thinking with probabilistic programs, an expressive representation for commonsense reasoning; and we model meaning construction with large language models (LLMs), which support broad-coverage translation from natural language utterances to code expressions in a probabilistic programming language. We illustrate our framework through examples covering four core domains from cognitive science: probabilistic reasoning, logical and relational reasoning, visual and physical reasoning, and social reasoning. In each, we show that LLMs can generate context-sensitive translations that capture pragmatically-appropriate linguistic meanings, while Bayesian inference with the generated programs supports coherent and robust commonsense reasoning. We extend our framework to integrate cognitively-motivated symbolic modules (physics simulators, graphics engines, and planning algorithms) to provide a unified commonsense thinking interface from language. Finally, we explore how language can drive the construction of world models themselves. We hope this work will provide a roadmap towards cognitive models and AI systems that synthesize the insights of both modern and classical computational perspectives.

# Appendices

We include code for reference below to help better interpret the examples in the paper. This code is included (with human-readable comments) for completeness and for reference, but is not guaranteed to be the most up-to-date version of these examples. Please refer to the GitHub repository for the most complete, corrected, and up-to-date code for all examples in this paper, as well as instructions for execution and reproducibility: github.com/gabegrand/world-models.

# A Language and world models

# A.1 Probabilistic reasoning
# A.1.1 Generative world model for tug-of-war

```scheme
;; This Church program models a tug-of-war game between teams of players.

;; Each player has a strength, with strength value 50 being about average.
(define strength (mem (lambda (player) (gaussian 50 20))))

;; Each player has an intrinsic laziness frequency.
(define laziness (mem (lambda (player) (uniform 0 1))))

;; The team's strength is the sum of the players' strengths.
;; When a player is lazy in a match, they pull with half their strength.
(define (team-strength team)
  (sum
   (map
    (lambda (player)
      (if (flip (laziness player))
          (/ (strength player) 2)
          (strength player)))
    team)))

;; The winner of the match is the stronger team.
;; Returns true if team-1 won against team-2, else false.
(define (won-against team-1 team-2)
  (> (team-strength team-1) (team-strength team-2)))
```

Code Block 1: Generative domain theory for the Bayesian tug-of-war.
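For orientation, here is a minimal usage sketch, not part of the paper's appendix: it forward-simulates the model above to estimate a prior win probability. It assumes the Church builtins `repeat` and `mean` alongside the definitions in Code Block 1.

```scheme
;; Hypothetical sketch (not from the paper): estimate the prior probability
;; that Alice and Bob beat Tom and Sue, by forward simulation. Note that
;; `strength` is memoized, so player strengths stay fixed across simulated
;; matches; only the per-match laziness flips vary.
(define (simulate-match) (won-against '(alice bob) '(tom sue)))
(mean (map (lambda (won) (if won 1 0)) (repeat 1000 simulate-match)))
```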
# A.1.2 Translation examples for tug-of-war

```scheme
;; Now, let's translate some user-defined statements.
;; Each statement begins with either `Condition` or `Query`.
;; `Condition` statements provide facts about the scenario.
;; `Query` statements are questions that evaluate quantities of interest.

;; Condition: Alice won against Bob.
(condition (won-against '(alice) '(bob)))

;; Condition: John and Mary won against Tom and Sue.
(condition (won-against '(john mary) '(tom sue)))

;; Query: If Mary played against Tom, who would win?
(query (won-against '(mary) '(tom)))

;; Certain statements are underspecified and require some interpretation. For example:
;; Condition: Sue is very strong.
(condition (> (strength 'sue) 75))

;; We can `define` new constructs that are useful for translation. For example:
;; Condition: Bob is stronger than John.
(define (stronger-than? player-1 player-2)
  (> (strength player-1) (strength player-2)))
(condition (stronger-than? 'bob 'john))

;; Query: Is Sue stronger than Mary?
(query (stronger-than? 'sue 'mary))

;; Condition: A couple of the players are stronger than John.
(condition (>= (count (map (lambda (player) (stronger-than? player 'john)) players)) 2))
```

Code Block 2: Prompt examples
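In the same spirit, here is a hypothetical pair of translations, not from the paper's prompt, that uses only primitives already defined in Code Block 1:

```scheme
;; Hypothetical translations (not from the paper's prompt).
;; Condition: Tom didn't feel like trying today.
(condition (> (laziness 'tom) 0.8))

;; Query: Is Tom stronger than an average player?
(query (> (strength 'tom) 50))
```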
# A.2 Relational reasoning

# A.2.1 Generative world model for kinship

```scheme
;; -- KINSHIP GENERATIVE DOMAIN THEORY --

;; All the names that can be used in the conversational context.
(define ALL-NAMES '(avery blake charlie dana))

;; Generates unique person ids of the format (person-0, person-1, ...)
(define PERSON-PREFIX "person-")
(define new-person-id (make-gensym PERSON-PREFIX))
(define (id->idx person-id)
  (string->number (string-slice (stringify person-id) (string-length PERSON-PREFIX))))

;; Randomly assign a gender
(define person->gender (mem (lambda (person-id)
  (uniform-draw '(male female)))))

;; Randomly-ordered list of person names
(define NAMES (shuffle-unique ALL-NAMES))
(define person->name (mem (lambda (person-id) (list-ref NAMES (id->idx person-id)))))

;; Person node in tree
(define (person person-id parent-1-id parent-2-id)
  (list
   (pair 'person-id person-id)
   (pair 'name person-id)
   (pair 'gender (person->gender person-id))
   (pair 'parent-1-id parent-1-id)
   (pair 'parent-2-id parent-2-id)))

;; Generate the full tree
;; Max tree size is 1 + (sum_{n=0}^{n=MAX-DEPTH} 2 * MAX-WIDTH^n)
(define MAX-WIDTH 3)
(define MAX-DEPTH 2)
(define PARTNER-PROBABILITY 0.5)
(define (generate-tree root-primary-id root-secondary-id depth)
  (let* (
         ;; Create the primary parent
         (parent-1-id (new-person-id))
         (parent-1 (person parent-1-id root-primary-id root-secondary-id)))
    (if (flip PARTNER-PROBABILITY)
        ;; Case: parent-1 has partner
        (let* (
               ;; Create the secondary parent
               (parent-2-id (new-person-id))
               (parent-2 (person parent-2-id () ()))

               ;; Link the parents with a partner relation
               (parent-1 (append parent-1 (list (pair 'partner-id parent-2-id))))
               (parent-2 (append parent-2 (list (pair 'partner-id parent-1-id))))

               ;; Generate children
               (n-children (if (>= depth MAX-DEPTH) 0 (bounded-geometric 0.5 0 MAX-WIDTH)))
               (child-trees (repeat n-children (lambda () (generate-tree parent-1-id parent-2-id (+ depth 1)))))

               ;; Update the parents to point to the children
               (child-ids (map (lambda (t) (lookup (first t) 'person-id)) child-trees))
               (parent-1 (append parent-1 (list (pair 'child-ids child-ids))))
               (parent-2 (append parent-2 (list (pair 'child-ids child-ids)))))
          (append (list parent-1) (list parent-2) (shallow-flatten child-trees)))
        ;; Case: parent-1 has no partner
        (list parent-1))))

;; Generate the global tree.
(define T (generate-tree () () 0))

;; Assign names randomly to (some of) the people in the tree.
(define (add-names-to-tree tree names)
  (if (null? tree)
      ()
      (let* (
             ;; Probability of adding a name to the first person
             (p (min 1.0 (/ (length names) (length tree))))
             (person (first tree)))
        (if (flip p)
            ;; Name the person
            (let ((named-person (update-list person 1 (pair 'name (first names)))))
              (cons named-person (add-names-to-tree (rest tree) (rest names))))
            ;; Don't name the person
            (cons person (add-names-to-tree (rest tree) names))))))

;; Update the tree with the name information.
(define T (add-names-to-tree T NAMES))
```

Code Block 3: Generative domain theory for family trees.
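As a quick sanity check, one might inspect a sampled world directly. The following sketch is not from the paper and assumes only the definitions in Code Block 3:

```scheme
;; Hypothetical sketch: list the (possibly unnamed) people in one sampled
;; tree, along with their genders. Each person is an association list,
;; so `lookup` recovers individual fields.
(map (lambda (p) (list (lookup p 'name) (lookup p 'gender))) T)
```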
# A.2.2 Kinship tree utilities

```scheme
;; -- KINSHIP TREE UTILITIES --

;; Returns all instances of person with property `key` equal to `value`
(define filter-by-property (mem (lambda (key value)
  (filter (lambda (p) (equal? (lookup p key) value)) T))))

;; Returns the unique instance of person with name.
(define get-person-by-name (mem (lambda (name)
  (let ((results (filter-by-property 'name name)))
    (if (null? results) () (first results))))))

;; People without a name can be referenced directly by person-id.
(define get-person-by-id
  (mem (lambda (person-id)
    (if (null? person-id)
        ()
        (let ((idx (id->idx person-id)))
          (if (>= idx (length T)) () (list-ref T idx)))))))

;; Get a person object either by name or person-id.
(define get-person
  (mem (lambda (person-ref)
    (cond
     ((null? person-ref) ())
     ((member? person-ref NAMES) (get-person-by-name person-ref))
     (else (get-person-by-id person-ref))))))

;; Get a property of a person.
(define get-property
  (mem (lambda (name key)
    (lookup (get-person name) key))))

;; -- TREE OPERATORS --
;; predicate :: name -> boolean

(define (map-tree predicate)
  (map (lambda (x) (predicate (lookup x 'name))) T))

(define (filter-tree predicate)
  (filter (lambda (x) (predicate (lookup x 'name))) T))

(define (exists predicate)
  (some (map-tree predicate)))
```

Code Block 4: Utility functions for kinship trees.
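A brief usage sketch for these utilities (hypothetical, not from the paper): assuming the tree `T` from Code Block 3, `get-property` and `exists` can be composed to ask simple questions by name.

```scheme
;; Hypothetical usage of the utilities above.
(get-property 'avery 'gender)  ;; e.g. 'female

;; Does anyone in the sampled tree have a male gender?
(exists (lambda (name) (equal? (get-property name 'gender) 'male)))
```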
# A.2.3 Kinship conceptual system

```scheme
;; -- KINSHIP CONCEPTUAL SYSTEM --

;; Gets the partner of a person.
(define (partner-of name)
  (get-property (get-property name 'partner-id) 'name))

;; Gets the parents of a person.
(define (parents-of name)
  (let* ((parent-1-id (get-property name 'parent-1-id))
         (parent-1-name (get-property parent-1-id 'name))
         (parent-2-id (get-property name 'parent-2-id))
         (parent-2-name (get-property parent-2-id 'name)))
    (list parent-1-name parent-2-name)))

;; Gets the grandparents of a person.
(define (grandparents-of name)
  (let ((parent-1 (first (parents-of name))))
    (parents-of parent-1)))

;; Gets the children of a person.
(define (children-of name)
  (let ((child-ids (get-property name 'child-ids)))
    (map (lambda (child-id) (get-property child-id 'name)) child-ids)))

;; Gets the siblings of a person.
(define (siblings-of name)
  (let* ((parent-1-id (get-property name 'parent-1-id))
         (child-ids (get-property parent-1-id 'child-ids))
         (child-names (map (lambda (child-id) (get-property child-id 'name)) child-ids)))
    (filter (lambda (child-name) (not (equal? child-name name))) child-names)))

;; -- BOOLEAN RELATIONS --
(define (partner-of? name_a name_b)
  (equal? name_a (partner-of name_b)))

(define (parent-of? name_a name_b)
  (member? name_a (parents-of name_b)))

(define (father-of? name_a name_b)
  (and (equal? (get-property name_a 'gender) 'male)
       (parent-of? name_a name_b)))

(define (mother-of? name_a name_b)
  (and (equal? (get-property name_a 'gender) 'female)
       (parent-of? name_a name_b)))

(define (grandparent-of? name_a name_b)
  (member? name_a (grandparents-of name_b)))

(define (grandfather-of? name_a name_b)
  (and (equal? (get-property name_a 'gender) 'male)
       (grandparent-of? name_a name_b)))

(define (grandmother-of? name_a name_b)
  (and (equal? (get-property name_a 'gender) 'female)
       (grandparent-of? name_a name_b)))
```
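To connect this conceptual system back to the condition/query interface of Section A.1.2, here is a hypothetical sketch (not from the paper) of kinship statements translated in the same style, using only the relations defined above and names from `ALL-NAMES`:

```scheme
;; Hypothetical translations over the kinship world model.
;; Condition: Blake is Avery's father.
(condition (father-of? 'blake 'avery))

;; Query: Is Charlie one of Avery's grandparents?
(query (grandparent-of? 'charlie 'avery))
```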
(define (child-of? name_a name_b)
  (member? name_a (children-of name_b)))

(define (son-of? name_a name_b)
  (and (equal? (get-property name_a 'gender) 'male)
       (child-of? name_a name_b)))

(define (daughter-of? name_a name_b)
  (and (equal? (get-property name_a 'gender) 'female)
       (child-of? name_a name_b)))

(define (sibling-of? name_a name_b)
  (member? name_a (siblings-of name_b)))

(define (brother-of? name_a name_b)
  (and (equal? (get-property name_a 'gender) 'male)
       (sibling-of? name_a name_b)))

(define (sister-of? name_a name_b)
  (and (equal? (get-property name_a 'gender) 'female)
       (sibling-of? name_a name_b)))

Code Block 5: Conceptual system and derived predicates for kinship trees.
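These predicates compose freely. As a brief sketch in the same style (an illustrative assumption, not one of the paper's own definitions), an aunt relation could be derived from the same primitives:

;; Sketch (assumed): an aunt is a sister of one of name_b's parents.
;; Uses the library's parent-of? and sister-of?, plus the exists helper
;; that appears in the translation examples below.
(define (aunt-of? name_a name_b)
  (exists (lambda (p)
    (and (parent-of? p name_b)
         (sister-of? name_a p)))))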
# A.2.4 Translation examples for kinship

;; -- CONDITION AND QUERY STATEMENTS --
;; Now, let's translate some user-defined statements.
;; Each statement begins with either `Condition` or `Query`.
;; `Condition` statements provide facts about the scenario.
;; `Query` statements are questions that evaluate quantities of interest.
;; Condition: Ryan's partner is Taylor.
(condition (partner-of? 'ryan 'taylor))

;; Condition: Taylor is the mother of Sam.
(condition (mother-of? 'taylor 'sam))

;; Condition: Sam's father is Ryan.
(condition (father-of? 'ryan 'sam))

;; Condition: Sam has two siblings.
(condition (= (length (siblings-of 'sam)) 2))

;; Condition: Sam has a brother.
(condition
  (exists (lambda (x)
    (brother-of? x 'sam))))

;; Condition: Payton's partner has a brother named Kyle.
(condition
  (exists (lambda (x) (and
    (partner-of? x 'payton)
    (brother-of? 'kyle x)))))

;; Condition: Payton's partner has a sister who has a son named Sam.
(condition
  (exists (lambda (x) (and
    (partner-of? x 'payton)
    (exists (lambda (y) (and
      (sister-of? y x)
      (son-of? 'sam y))))))))

;; Query: Who are Sam's parents?
(query (parents-of 'sam))

;; Query: How many children does Kyle have?
(query (length (children-of 'kyle)))
;; Query: Does Taylor have a sister?
(query (exists (lambda (x) (sister-of? x 'taylor))))

Code Block 6: Translation examples for kinship trees.

# A.2.5 Why not Prolog?

Readers who are familiar with the world of logic programming may wonder why we have chosen to model the kinship domain in Church instead of a more standard logic programming language, such as Prolog. Indeed, kinship is often one of the introductory examples in Prolog textbooks (Pereira & Shieber, 2002) and online tutorials (e.g., https://swish.swi-prolog.org/p/prolog-family-tree.pl), from which we drew inspiration when writing this section. Moreover, there are many structural parallels between our framework and the style of declarative programming embodied by Prolog: condition statements in Church are similar to facts in Prolog; derived concepts like father-of? in our Church kinship model are analogous to Prolog rules; and query performs similar functions in both languages (though the algorithms that underlie these queries differ in important ways). And, as discussed in the introduction to Section 3.1, Prolog was originally developed as a model of natural language (Colmerauer et al., 1972) and has deep ties to computational linguistics. So: why not use Prolog?
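To make these parallels concrete, the minimal sketch below pairs each Church construct with its Prolog analog (shown in comments); the specific names are purely illustrative:

;; Prolog fact:    sister_of(blake, avery).
(condition (sister-of? 'blake 'avery))

;; Prolog rule:    mother_of(X,Y) :- female(X), parent_of(X,Y).
;; Church analog:  the derived predicate mother-of? defined in Code Block 5.

;; Prolog query:   ?- parent_of(X, avery).
(query (parents-of 'avery))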
In short, there is nothing about our approach to semantic parsing that precludes swapping out Church for another formal target, such as Prolog, an SMT solver, or even a general-purpose language like Python. In fact, with the right prompting, Codex readily translates natural language utterances like "Avery has a sister named Blake" into sister_of(blake, avery) in Prolog. On the parsing side, we did not encounter any technical limitations to using LLMs to translate natural language into Prolog. However, because Prolog is based on definite (Horn) clauses, there are limits on the kinds of utterances we can express and the kinds of inferences we can make when working in Prolog. For instance, a typical Prolog kinship model might have a rule defining the concept of a "grandfather" as follows:

grandfather_of(X,Y) :- male(X), parent_of(X,Z), parent_of(Z,Y).
Now, if we learn that Charlie is the grandfather of Dana, we might be inclined to translate this into Prolog as a fact: grandfather_of(charlie, dana). Given this information, we can make various deductive inferences: e.g., that Charlie is male, and that there exists some person in the family tree who is both the child of Charlie and the parent of Dana. Indeed, this is exactly how the grandfather_of(X,Y) rule is defined in the first place.
For this reason, it is especially counterintuitive that these kinds of inferences are not at all straightforward in Prolog. Because logical implication in definite clauses is unidirectional, anyone satisfying the right-hand side of the grandfather_of(X,Y) rule is considered a grandfather; however, the rule says nothing about what being a grandfather entails. Moreover, our translation of the fact grandfather_of(charlie, dana) is actually quite facile: it simply extends grandfather_of(X,Y) so that queries now return true for anyone satisfying the original definition, or for the special case where X=charlie and Y=dana. These are all limitations on the kinds of deductive inferences we can model with Prolog. Additionally, many kinds of inductive inferences are not well captured by Prolog; e.g., because Charlie has at least one child, he is more likely to have multiple children, and more likely to be married.
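A minimal sketch of the contrast, assuming the kinship model above (where grandparents-of is derived from parent links in the generative domain theory):

;; Conditioning on the derived predicate propagates information backwards
;; through the generative model, unlike asserting the Prolog fact.
(condition (grandfather-of? 'charlie 'dana))

;; Deductively, these hold in every posterior sample:
(query (equal? (get-property 'charlie 'gender) 'male))
(query (exists (lambda (z) (and (parent-of? 'charlie z) (parent-of? z 'dana)))))

;; Inductively, related facts become more probable than under the prior:
(query (exists (lambda (x) (partner-of? x 'charlie))))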
In sum, to get the kinds of mixed deductive and inductive inferences that we would like to see in an expressive language-of-thought, we need ways of incorporating and trading off uncertainty in our world model. ProbLog (De Raedt et al., 2007; Dries et al., 2017; Suster et al., 2021), a probabilistic extension of Prolog in which deduction rules can be annotated with probabilities, offers one way of integrating uncertainty with deductive reasoning. Church goes a step further by specifying a generative domain theory in addition to probabilistic inference rules. We believe that this interplay between probabilistic priors and likelihoods, which is central to Bayesian inference, is also at the heart of human cognition.

# A.3 Perceptual and physical reasoning

Static visual scenes

# A.3.1 Generative world model for static visual scenes

;; Objects have a shape attribute, which is a choice of mug, can, or bowl shape categories.
(define choose-shape
  (mem (lambda (obj-id)
    (pair 'shape (uniform-draw '(mug can bowl))))))
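;; Objects also have a color attribute. (The sampler below is an assumed
;; sketch, not the paper's exact definition: it draws one of the named
;; color prototypes defined later in this listing.)
(define choose-color
  (mem (lambda (obj-id)
    (pair 'color (uniform-draw (list red blue green yellow))))))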
;; An object is an object ID, and the object's attribute types and their values.
(define object (mem (lambda (obj-id) (list
  (pair 'object-id obj-id)
  (choose-shape obj-id)
  (choose-color obj-id)))))

;; Scenes can have a maximum of 12 objects.
(define max-objects 12)
;; The number of objects in a scene tends to be not too large, and is capped at the maximum number of objects.
(define choose-num-objects
  (mem (lambda (scene-id) (floor (min max-objects (* max-objects (exponential 1)))))))

;; Then, for each object we intend to generate, generate an object indexical, and associate it with a choice of attributes.
(define obj-id-gensym (make-gensym "obj-"))
(define (generate-n-objects scene-id total-objects)
  (if (= total-objects 0)
      (list (object (obj-id-gensym)))
      (cons (object (obj-id-gensym))
            (generate-n-objects scene-id (- total-objects 1)))))
(define objects-in-scene (mem (lambda (scene-id)
  (generate-n-objects scene-id (choose-num-objects scene-id)))))
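;; Usage sketch (assumed scene id, not from the paper): because objects-in-scene
;; is memoized, repeated references to the same scene id return the same object
;; list within a run, e.g. (length (objects-in-scene 'scene-1)) gives the
;; sampled object count for that scene.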
;; An object is red if it is of this continuous color value.
(define red (list 255 0 0))
;; An object is blue if it is of this continuous color value.
(define blue (list 0 0 255))
;; An object is green if it is of this continuous color value.
(define green (list 0 255 0))
;; An object is yellow if it is of this continuous color value.
(define yellow (list 255 255 0))

;; Check if an object is of a given shape.
(define is-shape? (lambda (shape) (lambda (object) (equal? (cdr (assoc 'shape object)) shape))))
;; Check if an object is of a given named color.
(define is-color? (lambda (color) (lambda (object) (equal? (cdr (assoc 'color object)) color))))

;; Select only objects from the scene of a given color.
(define filter-color (lambda (color) (lambda (object-list) (filter (is-color? color) object-list))))
;; Select only objects from the scene of a given shape.
(define filter-shape (lambda (shape) (lambda (object-list) (filter (is-shape? shape) object-list))))
Code Block 7: Generative domain theory for tabletop scenes. Generates scenes containing a set of objects that vary in shape and color. These scene states are rendered to images by a separately generated render function. Shown here with natural-language comments, which are not included in the LLM prompt.
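Because each filter is curried, shape and color filters compose by nesting; a usage sketch, with an assumed scene id:

;; All red mugs in a scene (assumed scene id 'scene-1).
((filter-color red) ((filter-shape 'mug) (objects-in-scene 'scene-1)))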
# A.3.2 Translation examples for static visual scenes

;; There's a blue thing.
(condition (> (length ((filter-color blue) (objects-in-scene 'this-scene))) 0))

;; There's at least two blue plates.
(condition
  (>= (length ((filter-color blue) ((filter-shape 'plate) (objects-in-scene 'scene))))
      2))

;; There's many blue plates.
(condition
  (>= (length ((filter-color blue) ((filter-shape 'plate) (objects-in-scene 'scene))))
      5))

;; There's exactly two plates and there's also a yellow thing.
(condition
  (and (= (length ((filter-shape 'plate) (objects-in-scene 'scene))) 2)
       (> (length ((filter-color yellow) (objects-in-scene 'scene))) 0)))

;; Is there a mug?
(query (> (length ((filter-shape 'mug) (objects-in-scene 'this-scene))) 0))

Code Block 8: Translation examples for the visual domain. These examples are concatenated with the visual scenes generative model to produce the prompt used to generate new translations.
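The same pattern extends to negation; a sketch with an assumed utterance, not one from the paper's prompt:

;; There aren't any green things.
(condition (= (length ((filter-color green) (objects-in-scene 'scene))) 0))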
Dynamic physical scenes

# A.3.3 Generative world model for physical scenes

(define (get_attribute obj key)
  (if (assoc key obj) (rest (assoc key obj)) ()))

(define (member? a b)
  (if (member a b) true false))

(define concatenate
  (lambda (list-1 list-2)
    (if (null? list-1)
        list-2
        (cons (car list-1) (concatenate (cdr list-1) list-2)))))

(define (pairs x l)
  (define (aux accu x l)
    (if (null? l)
        accu
        (let ((y (car l))
              (tail (cdr l)))
          (aux (cons (cons x y) accu) x tail))))
  (aux '() x l))

(define (cartesian_product l m)
  (define (aux accu l)
    (if (null? l)
        accu
        (let ((x (car l))
              (tail (cdr l)))
          (aux (append (pairs x m) accu) tail))))
  (aux '() l))
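;; A quick worked example (illustrative, not from the paper) of the two
;; list helpers above:
;;   (pairs 'a '(1 2))                 => ((a . 2) (a . 1))
;;   (cartesian_product '(a b) '(1 2)) => ((b . 2) (b . 1) (a . 2) (a . 1))
;; The reversed order reflects the accumulator-based recursion.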
;; Generative domain theory: dynamic scenes. Collision detection.

(define get_num_objects 2)
(define OBJECT_DEFAULT_RADIUS 1)
(define GRAVITY 9.8)
(define DELTA_T 0.5)

(define get_initial_color
  (lambda (obj_id)
    (if (eq? obj_id 'obj-0)
        (list 255 0 0)
        (list 0 0 255))))

(define choose_mass
  (mem (lambda (obj_id)
    (abs (gaussian 5 3)))))

(define choose_shapes
  (mem (lambda (scene-id) (uniform-draw (list 'sphere 'block)))))

(define min_x -3)
(define max_x 3)
(define mid_x (+ (/ (- max_x min_x) 2) min_x))
(define get_initial_x
  (lambda (obj_id)
    (if (eq? obj_id 'obj-0)
        min_x
        mid_x)))
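;; Note on `mem` (Church's stochastic memoization): repeated calls with the
;; same argument return the same sample, so each object's mass and each
;; scene's shape is a single persistent random draw. Illustrative values:
;;   (choose_mass 'obj-0) => 6.3
;;   (choose_mass 'obj-0) => 6.3 again (memoized); an un-memoized
;;   (abs (gaussian 5 3)) would resample on every call.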
(define min_force 0)
(define max_force 10)
(define mid_force (+ (/ (- max_force min_force) 2) min_force))

(define choose_initial_force
  (mem (lambda (obj_id)
    (if (eq? obj_id 'obj-0)
        (abs (gaussian mid_force 3))
        0))))

(define static_friction_constant
  (lambda (shape)
    (if (eq? shape 'sphere)
        0.02
        0.05)))

(define kinetic_friction_constant
  (lambda (shape)
    (if (eq? shape 'sphere)
        0.01
        0.02)))
(define normal_force (lambda (m) (* m GRAVITY)))

(define force_after_friction
  (lambda (f v shape m)
    (if (> (abs v) 0)
        (- f (* (kinetic_friction_constant shape) (normal_force m)))
        (if (< f (* (static_friction_constant shape) (normal_force m)))
            0
            (- f (* (kinetic_friction_constant shape) (normal_force m)))))))
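;; In standard notation (my transcription of the code above), with
;; N = m * GRAVITY the normal force and mu_s, mu_k the shape-dependent
;; static and kinetic friction coefficients:
;;   F_net = F - mu_k * N   if |v| > 0
;;   F_net = 0              if v = 0 and F < mu_s * N
;;   F_net = F - mu_k * N   otherwise (the push overcomes static friction)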
(define newtons_second (lambda (f m) (/ f m)))

(define v_next
  (lambda (v_prev a_prev delta_t)
    (let ((v_temp (+ v_prev (* a_prev delta_t))))
      (if (>= (* v_prev v_temp) 0) v_temp 0))))

(define x_next (lambda (x_prev v_prev delta_t) (+ x_prev (* v_prev delta_t))))
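;; The dynamics are explicit Euler with a stop-at-zero clamp on velocity,
;; so friction can halt an object within a step but never reverse it:
;;   a_t     = F_t / m
;;   v_{t+1} = v_t + a_t * DELTA_T, or 0 if the update would flip its sign
;;   x_{t+1} = x_t + v_t * DELTA_T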
(define initial_object_state
  (mem (lambda (obj_id scene_id)
    (let ((obj_shape (choose_shapes scene_id)))
      (let ((obj_mass (choose_mass obj_id)))
        (let ((obj_color (get_initial_color obj_id)))
          (let ((initial_x (get_initial_x obj_id)))
            (let ((initial_push_force (choose_initial_force obj_id)))
              (let ((initial_force (force_after_friction initial_push_force 0 obj_shape obj_mass)))
                (list (pair 'object_id obj_id)
                      (pair 'object_radius OBJECT_DEFAULT_RADIUS)
                      (pair 'shape obj_shape)
                      (pair 'mass obj_mass)
                      (pair 'color obj_color)
                      (pair 'x initial_x)
                      (pair 'initial_push_force initial_push_force)
                      (pair 'f initial_force)
                      (pair 't 0)
                      (pair 'a_prev (newtons_second initial_force obj_mass))
                      (pair 'a (newtons_second initial_force obj_mass))
                      (pair 'v_0 0)
                      (pair 'v (v_next 0 (newtons_second initial_force obj_mass) DELTA_T)))))))))))))

(define obj_id_gensym (make_gensym "obj-"))

(define generate_initial_state
  (mem (lambda (scene_id total_objects)
    (if (= total_objects 1)
        (list (initial_object_state (obj_id_gensym) scene_id))
        (cons (initial_object_state (obj_id_gensym) scene_id)
              (generate_initial_state scene_id (- total_objects 1)))))))

(define generate_initial_scene_event_state (mem
  ...
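;; Usage sketch for generate_initial_state (illustrative, not from the
;; paper): draw the initial configuration of a two-object scene. Given the
;; definitions above, obj-0 starts at x = -3 with a sampled push force of
;; roughly |N(5, 3)|, and the second object starts at rest at x = 0.
;;   (generate_initial_state 'scene-1 get_num_objects)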
(define event_id_gensym (make_gensym "event-"))

(define circle_intersect?
  (lambda (subject_x subject_radius object_x object_radius)
    (let ((square_circle_distance (expt (- subject_x object_x) 2)))
      (let ((square_radii (expt (+ subject_radius object_radius) 2)))
        (leq square_circle_distance square_radii)))))

(define elastic_collision_subject_v
  (lambda (subject_m subject_v object_m object_v)
    (/ (+ (* 2 (* object_m object_v)) (* subject_v (- subject_m object_m)))
       (+ subject_m object_m))))
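;; In standard notation, circle_intersect? tests (x_1 - x_2)^2 <= (r_1 + r_2)^2,
;; and elastic_collision_subject_v is the textbook one-dimensional
;; elastic-collision result for the subject's post-collision velocity:
;;   v_1' = ((m_1 - m_2) * v_1 + 2 * m_2 * v_2) / (m_1 + m_2)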
(define get_collision_events
  (lambda (time scene_event_state_for_time)
    (let ((scene_event_state (get_attribute scene_event_state_for_time time)))
      (let ((scene_state (get_attribute scene_event_state 'scene_states)))
        (if (= (length scene_state) 1)
            ()
            (fold (lambda (event events) (if (equal? event ()) events (cons event events)))
                  ()
                  (let ((paired_object_states (cartesian_product scene_state scene_state)))
                    (map (lambda (paired_objects)
                           (let ((event_subject (get_attribute (first paired_objects) 'object_id)))
                             (let ((event_object (get_attribute (cdr paired_objects) 'object_id)))
                               (if (eq? event_subject event_object)
                                   ()
                                   (let ((subject_v (get_attribute (first paired_objects) 'v)))
                                     (let ((subject_x (get_attribute (first paired_objects) 'x)))
                                       (let ((subject_m (get_attribute (first paired_objects) 'mass)))
                                         (let ((subject_radius (get_attribute (first paired_objects) 'object_radius)))
                                           (let ((object_v (get_attribute (cdr paired_objects) 'v)))
                                             (let ((object_x (get_attribute (cdr paired_objects) 'x)))
                                               (let ((object_m (get_attribute (cdr paired_objects) 'mass)))
                                                 (let ((object_radius (get_attribute (cdr paired_objects) 'object_radius)))
                                                   (if (circle_intersect? subject_x subject_radius object_x object_radius)
                                                       (list
                                                        (pair 'event-id (event_id_gensym))
                                                        (pair 'event_time time)
                                                        (pair 'event_predicates (list 'is_colliding))
                                                        (pair 'event_subject event_subject)
                                                        (pair 'event_object event_object)
                                                        (pair 'subject_initial_v subject_v)
                                                        (pair 'subject_final_v (elastic_collision_subject_v subject_m subject_v object_m object_v))
                                                        (pair 'object_initial_v object_v))
                                                       ())))))))))))))
                         paired_object_states))))))))
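;; An example event record produced above (values illustrative):
;;   ((event-id . event-3) (event_time . 4) (event_predicates is_colliding)
;;    (event_subject . obj-0) (event_object . obj-1)
;;    (subject_initial_v . 2.1) (subject_final_v . 0.4) (object_initial_v . 0))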
(define generate_next_object_state
  (lambda (current_time event_state)
    (lambda (prev_object_state)
      (let ((obj_id (cdr (assoc 'object_id prev_object_state))))
        (let ((collision_events
               (fold (lambda (event events)
                       (if (equal? (get_attribute event 'event_subject) obj_id)
                           (cons event events)
                           events))
                     ()
                     event_state)))
          (if (> (length collision_events) 0)
              (generate_collision_event_state current_time obj_id prev_object_state (car collision_events))
              (generate_no_collision_event_state current_time obj_id prev_object_state)))))))
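;; The curried shape suggests a per-timestep driver along these lines
;; (a hypothetical sketch; the top-level step function is not shown in
;; this excerpt, and `collision_events_at_t` / `scene_state_at_t` are
;; placeholder names):
;;   (map (generate_next_object_state t collision_events_at_t) scene_state_at_t)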
(define generate_collision_event_state
  (lambda (current_time obj_id prev_object_state collision_event)
    (let ((obj_radius (cdr (assoc 'object_radius prev_object_state))))
      (let ((obj_mass (cdr (assoc 'mass prev_object_state))))
        (let ((obj_color (cdr (assoc 'color prev_object_state))))
          (let ((obj_shape (cdr (assoc 'shape prev_object_state))))
            (let ((v_prev (cdr (assoc 'v prev_object_state))))
              (let ((a_prev (cdr (assoc 'a_prev prev_object_state))))
                (let ((x_prev (cdr (assoc 'x prev_object_state))))
                  (let ((v (get_attribute collision_event 'subject_final_v)))
                    (let ((x (x_next x_prev v 1)))
                      (list
                       (pair 'object_id obj_id)
                       (pair 'object_radius obj_radius)
                       (pair 'shape obj_shape)
                       (pair 'color obj_color)
                       (pair 'mass obj_mass)
                       (pair 'x x)
                       (pair 'f 0)
                       (pair 't (* current_time DELTA_T))
                       (pair 'a_prev 0)
                       (pair 'a 0)
                       (pair 'v_0 0)
                       (pair 'v v))))))))))))))

(define generate_no_collision_event_state
  (lambda (current_time obj_id prev_object_state)
    (let ((obj_radius (cdr (assoc 'object_radius prev_object_state))))
      (let ((obj_mass (cdr (assoc 'mass prev_object_state))))
        (let ((obj_color (cdr (assoc 'color prev_object_state))))
          (let ((obj_shape (cdr (assoc 'shape prev_object_state))))
            (let ((v_prev (cdr (assoc 'v prev_object_state))))
(let ((a_prev_no_friction (cdr (assoc 'a_prev prev_object_state))))
  (let ((a_prev (newtons_second (force_after_friction 0 v_prev obj_shape obj_mass) obj_mass)))
    (let ((x_prev (cdr (assoc 'x prev_object_state))))
      (let ((v (v_next v_prev a_prev DELTA_T)))
        (let ((x (x_next x_prev v_prev DELTA_T)))
          (list (pair 'object_id obj_id)
                (pair 'object_radius obj_radius)
                (pair 'shape obj_shape)
                (pair 'color obj_color)
                (pair 'mass obj_mass)
                (pair 'x x)
                (pair 'f (force_after_friction 0 v_prev obj_shape obj_mass))
                (pair 't (* current_time DELTA_T))
                (pair 'a_prev a_prev)
                (pair 'a 0)
                (pair 'v_0 0)
                (pair 'v v)))))))
  ;; Closing parentheses for the enclosing bindings of this object-update
  ;; definition, which begins in the preceding excerpt.
  ))) ))))
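The update above relies on physics helpers defined earlier in the full listing (`newtons_second`, `force_after_friction`, `v_next`, `x_next`, and the `DELTA_T` time step); see the repository for the actual definitions. A minimal sketch consistent with how they are called here, with the time step and friction coefficients as explicit assumptions:

;; Sketch only -- the real definitions live in the repository listing.
(define DELTA_T 0.1)                                        ;; assumed time step
(define (newtons_second f m) (/ f m))                       ;; a = F / m
(define (v_next v_prev a_prev dt) (+ v_prev (* a_prev dt))) ;; Euler velocity update
(define (x_next x_prev v_prev dt) (+ x_prev (* v_prev dt))) ;; Euler position update
;; Applied force adjusted by kinetic friction opposing the current motion.
(define (force_after_friction f_applied v shape mass)
  (let ((mu (if (equal? shape 'cube) 0.3 0.1)))             ;; assumed coefficients
    (if (> (abs v) 0)
        (- f_applied (* (if (> v 0) 1 -1) mu mass))
        f_applied)))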
(define generate_next_scene_state
  (lambda (prev_scene_state event_state next_time)
    (map (generate_next_object_state next_time event_state) prev_scene_state)))

(define generate_next_scene_event_state_time
  (lambda (next_time scene_event_state_for_times)
    (let ((prev_scene_event_state (get_attribute scene_event_state_for_times (- next_time 1))))
      (let ((prev_scene_state (get_attribute prev_scene_event_state 'scene_states)))
        (let ((event_state (get_collision_events (- next_time 1) scene_event_state_for_times)))
          (pair next_time
                (list (pair 'scene_states (generate_next_scene_state prev_scene_state event_state next_time))
                      (pair 'event_states event_state))))))))
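`get_collision_events`, referenced above, is defined elsewhere in the full listing. The geometric contact test it relies on is presumably of the following form; this is a hedged sketch using this listing's attribute names, and `touching?` is a hypothetical helper name:

;; Sketch: two objects are in contact when the gap between their centers
;; is no larger than the sum of their radii (1-D scenes, as in this model).
(define (touching? o1 o2)
  (<= (abs (- (get_attribute o1 'x) (get_attribute o2 'x)))
      (+ (get_attribute o1 'object_radius) (get_attribute o2 'object_radius))))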
(define generate_next_scene_event_states
  (lambda (current_time prev_scene_event_states_for_times)
    (cons (generate_next_scene_event_state_time current_time prev_scene_event_states_for_times)
          prev_scene_event_states_for_times)))

(define generate_scene_event_states_for_times
  (mem (lambda (scene_id total_objects total_time)
    (if (= total_time 0)
        (list (generate_initial_scene_event_state scene_id total_objects))
        (let ((prev_scene_event_states
               (generate_scene_event_states_for_times scene_id total_objects (- total_time 1))))
          (generate_next_scene_event_states total_time prev_scene_event_states))))))
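As a usage sketch (the scene id, object count, and horizon below are illustrative), the memoized recursion yields scene and event states keyed by time index:

;; Illustrative only: a two-object scene simulated for three time steps.
(define example_states (generate_scene_event_states_for_times 'demo_scene 2 3))
;; Each entry pairs a time index with its state; e.g., the state at t = 3:
(define state_at_3 (get_attribute example_states 3))
(define scene_at_3 (get_attribute state_at_3 'scene_states))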
(define max_time 9)

(define base_states_for_times
  (generate_scene_event_states_for_times 'this_scene get_num_objects max_time))

;;;;;;;;;;;;;;;;;;;;;;;;;; Derived predicates.
(define objects_in_scene
  (lambda (base_states_for_times)
    (let ((initial_base_states_at_time (cdr (assoc 0 (cdr base_states_for_times)))))
      (let ((base_state (cdr (assoc 'scene_states initial_base_states_at_time))))
        base_state))))

(define red (list 255 0 0))
(define blue (list 0 0 255))
(define is_color? (lambda (color) (lambda (object) (equal? (cdr (assoc 'color object)) color))))
(define is_shape? (lambda (shape) (lambda (object) (equal? (cdr (assoc 'shape object)) shape))))

(define all_objects (objects_in_scene base_states_for_times))

(define (exists_object predicate)
  (some (map predicate (objects_in_scene base_states_for_times))))
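A brief usage sketch of these derived predicates (the query and the mass threshold are illustrative, not part of the original listing):

;; Does the sampled scene contain a heavy blue ball?
(define has_heavy_blue_ball
  (exists_object (lambda (o)
    (and ((is_color? blue) o)
         ((is_shape? 'sphere) o)
         (> (get_attribute o 'mass) 3)))))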
(define (filter_objects predicate)
  (map (lambda (o) (get_attribute o 'object_id))
       (filter predicate (objects_in_scene base_states_for_times))))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(define QUICKLY_THRESHOLD 2)
(define SLOWLY_THRESHOLD 2)

(define is_moving_events
  (mem (lambda (base_states_for_times)
    (fold
      (lambda (base_state_for_time these_events)
        (let ((current_time (car base_state_for_time)))
          (let ((base_state (cdr (assoc 'scene_states (cdr base_state_for_time)))))
            (fold
              (lambda (obj_state these_events)
                (let ((obj_id (cdr (assoc 'object_id obj_state))))
                  (let ((obj_velocity (cdr (assoc 'v obj_state))))
                    (let ((obj_speed (abs obj_velocity)))
                      (if (> obj_speed 0)
                          (let ((event_predicates
                                 (if (> obj_speed QUICKLY_THRESHOLD)
                                     (list 'is_moving 'is_quickly)
                                     (if (< obj_speed SLOWLY_THRESHOLD)
                                         (list 'is_moving 'is_slowly)
                                         (list 'is_moving)))))
                            (cons
                              (list (pair 'event-id (event_id_gensym))
                                    (pair 'event_time current_time)
                                    (pair 'event_predicates event_predicates)
                                    (pair 'event_subject obj_id)
                                    (pair 'event_speed obj_speed))
                              these_events))
                          these_events)))))
              these_events base_state))))
      () base_states_for_times))))

(define is_resting_events
  (mem (lambda (base_states_for_times)
    (fold
      (lambda (base_state_for_time these_events)
        (let ((current_time (car base_state_for_time)))
          (let ((base_state (cdr (assoc 'scene_states (cdr base_state_for_time)))))
            (fold
              (lambda (obj_state these_events)
                (let ((obj_id (cdr (assoc 'object_id obj_state))))
                  (let ((obj_velocity (cdr (assoc 'v obj_state))))
                    (let ((obj_speed (abs obj_velocity)))
                      (if (= obj_speed 0)
                          (let ((event_predicates (list 'is_resting)))
                            (cons
                              (list (pair 'event-id (event_id_gensym))
                                    (pair 'event_time current_time)
                                    (pair 'event_predicates event_predicates)
                                    (pair 'event_subject obj_id)
                                    (pair 'event_speed obj_speed))
                              these_events))
                          these_events)))))
              these_events base_state))))
      () base_states_for_times))))

(define is_colliding_events
  (mem (lambda (base_states_for_times)
    (fold
      (lambda (base_state_for_time these_events)
        (let ((current_time (car base_state_for_time)))
          (let ((event_states (cdr (assoc 'event_states (cdr base_state_for_time)))))
            (fold
              (lambda (event_state these_events)
                (let ((subject_initial_speed (abs (get_attribute event_state 'subject_initial_v))))
                  (let ((subject_final_speed (abs (get_attribute event_state 'subject_final_v))))
                    (let ((object_initial_speed (abs (get_attribute event_state 'object_initial_v))))
                      (let ((cause_subject_object_event
                             (and (> subject_initial_speed 0) (= object_initial_speed 0))))
                        (let ((event_predicates
                               (if (and cause_subject_object_event (eq? subject_final_speed 0))
                                   (list 'is_launching 'is_hitting 'is_colliding)
                                   (if (> subject_initial_speed 0)
                                       (list 'is_hitting 'is_colliding)
                                       (list 'is_colliding)))))
                          (cons
                            (list
                              (pair 'event-id (get_attribute event_state 'event-id))
                              (pair 'event_time (get_attribute event_state 'event_time))
                              (pair 'event_predicates event_predicates)
                              (pair 'event_subject (get_attribute event_state 'event_subject))
                              (pair 'event_object (get_attribute event_state 'event_object))
                              (pair 'subject_initial_v (get_attribute event_state 'subject_initial_v))
                              (pair 'subject_final_v (get_attribute event_state 'subject_final_v))
                              (pair 'object_initial_v (get_attribute event_state 'object_initial_v)))
                            these_events)))))))
              these_events event_states))))
      () base_states_for_times))))
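To make the classification above concrete, a small worked example with hypothetical numbers (not part of the original listing): a subject that arrives moving and stops dead against an initially resting object is labeled a launch as well as a hit and a collision.

;; Hypothetical worked example of the event classification above.
(let ((subject_initial_speed 3)   ;; assumed: subject arrives moving
      (subject_final_speed 0)     ;; assumed: subject stops on impact
      (object_initial_speed 0))   ;; assumed: object starts at rest
  (let ((cause_subject_object_event
         (and (> subject_initial_speed 0) (= object_initial_speed 0))))
    ;; Evaluates to (is_launching is_hitting is_colliding); with
    ;; subject_final_speed = 1 instead, it would be (is_hitting is_colliding).
    (if (and cause_subject_object_event (eq? subject_final_speed 0))
        (list 'is_launching 'is_hitting 'is_colliding)
        (if (> subject_initial_speed 0)
            (list 'is_hitting 'is_colliding)
            (list 'is_colliding)))))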
(define events_in_scene
  (concatenate
    (is_colliding_events base_states_for_times)
    (concatenate (is_moving_events base_states_for_times)
                 (is_resting_events base_states_for_times))))

(define is_event?
  (lambda (event_predicate event)
    (member? event_predicate (get_attribute event 'event_predicates))))

(define is_subject_of_event?
  (lambda (event object)
    (equal? (get_attribute event 'event_subject)
            (get_attribute object 'object_id))))

(define is_object_of_event?
  (lambda (event object)
    (equal? (get_attribute event 'event_object)
            (get_attribute object 'object_id))))

(define event_subject_is?
  (lambda (event predicate)
    (member? (get_attribute event 'event_subject)
             (filter_objects predicate))))
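A small illustrative helper (not part of the original listing) showing how these event predicates compose:

;; Sketch: the motion events whose subject is a given object record.
(define (moving_events_for obj)
  (filter (lambda (event)
            (and (is_event? 'is_moving event)
                 (is_subject_of_event? event obj)))
          events_in_scene))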
(define event_object_is?
  (lambda (event predicate)
    (member? (get_attribute event 'event_object)
             (filter_objects predicate))))

(define (exists_event predicate)
  (some (map predicate events_in_scene)))

(define (filter_events predicate)
  (filter predicate events_in_scene))

Code Block 9: Generative domain theory for physical scenes. Generates scenes containing a red object left of a blue object and a randomly generated force. These scene states are forward-simulated using a physics engine implemented directly within this Church code. Shown with natural-language comments, but these are not used in the LLM prompt.

# A.3.4 Translation examples for visual scenes

;; The objects are all balls.
(condition (all (map (lambda (o) ((is_shape? 'sphere) o)) all_objects)))

;; Everything is a ball.
(condition (all (map (lambda (o) ((is_shape? 'sphere) o)) all_objects)))
;; Imagine the red thing is a block, and is somewhat heavy.
(condition (exists_object (lambda (object)
  (and ((is_color? red) object)
       ((is_shape? 'cube) object)
       (> (get_attribute object 'mass) 2)))))

;; There is a blue ball, and it is quite heavy.
(condition (exists_object (lambda (object)
  (and ((is_color? blue) object)
       ((is_shape? 'sphere) object)
       (> (get_attribute object 'mass) 3.5)))))

;; Now, the red block is very light.
(condition (exists_object (lambda (object)
  (and ((is_color? red) object)
       ((is_shape? 'cube) object)
       (< (get_attribute object 'mass) 1)))))

;; A blue ball is somewhat light.
(condition (exists_object (lambda (object)
  (and ((is_color? blue) object)
       ((is_shape? 'sphere) object)
       (< (get_attribute object 'mass) 2)))))

;; Imagine the red block gets pushed lightly to the right.
(condition (exists_object (lambda (object)
  (and ((is_color? red) object)
       ((is_shape? 'cube) object)
       (< (get_attribute object 'initial_push_force) 2)))))

;; Now, imagine a red ball is pushed hard to the right.
(condition (exists_object (lambda (object)
  (and ((is_color? red) object)
       ((is_shape? 'sphere) object)
       (> (get_attribute object 'initial_push_force) 6)))))

;; A red block hits a blue block.
(condition
  (exists_object (lambda (object_1)
    (exists_object (lambda (object_2)
      (exists_event (lambda (event)
        (and
          ((is_color? red) object_1)
          ((is_shape? 'cube) object_1)
          ((is_color? blue) object_2)
          ((is_shape? 'cube) object_2)
          (is_subject_of_event? event object_1)
          (is_object_of_event? event object_2)
          (is_event? 'is_hitting event))
  )))))))

;; What's the final velocity of the red block after it is hit?
(query (last (map
  (lambda (event) (get_attribute event 'subject_final_v))
  (filter_events
    (lambda (e)
      (and
        (is_event? 'is_colliding e)
        (event_subject_is? e (lambda (o)
          (and ((is_color? red) o)
               ((is_shape? 'cube) o))))))))))

Code Block 10: Translation examples for the physics domain. These examples are concatenated with the physical scenes generative model to produce the prompt used to generate new translations.
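For reference, here is one additional hypothetical utterance-to-code pair in the same style. It is not part of the original prompt; it is a sketch that reuses only the predicates shown above (is_color?, is_shape?, and get_attribute):

;; Now, imagine a blue ball that is barely nudged.
(condition (exists_object (lambda (object)
  (and
    ((is_color? blue) object)
    ((is_shape? 'sphere) object)
    (< (get_attribute object 'initial_push_force) 1)
  ))))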
# A.4 Social reasoning

# A.4.1 Generative world model for social reasoning

(define gridworld (list
  (list 'ames 'lawn 'lawn 'lawn 'sushi)
  (list 'ames 'lawn 'lawn 'lawn 'danner)
  (list 'office 'barlow 'barlow 'barlow 'danner)
  (list 'ames 'lawn 'lawn 'lawn 'danner)
  (list 'ames 'lawn 'lawn 'lawn 'vegetarian)
  (list 'pizza 'carson 'carson 'carson 'danner)
))
(define restaurants (list 'sushi 'pizza 'vegetarian))

(define initial_x 1)
(define initial_y 3)

(define has_bike (mem (lambda (agent-id) (flip))))
(define available_motions (mem (lambda (agent-id) (if (has_bike agent-id) (list 'is_walking 'is_biking) (list 'is_walking)))))
(define directions (list 'west 'east 'north 'south))
(define available_actions (mem (lambda (agent-id) (cons (pair 'stay 'stay) (cartesian_product (available_motions agent-id) directions)))))
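For orientation: get_gridworld_at (defined further below) indexes this map as (list-elt (list-elt gridworld y) x), so under Church's 1-based list-elt the start position (initial_x, initial_y) = (1, 3) is the 'office square. A minimal sketch, assuming that 1-based indexing:

;; Illustrative lookup at the agent's start position.
(get_gridworld_at gridworld initial_x initial_y)  ;; => 'office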
;; ...
      (list (gaussian POSITIVE_UTILITY_MEAN UTILITY_VARIANCE)
            (gaussian NEGATIVE_UTILITY_MEAN UTILITY_VARIANCE)
)))))

(define motion_utility (mem (lambda (agent-id location_type motion_type)
  (case location_type
    (('lawn) (case motion_type
               (('is_biking) -1)
               (('is_walking) -0.2)
               (('is_staying) 0)
               (else 0)))
    (else (case motion_type
            (('is_biking) -0.01)
            (('is_walking) -0.2)
            (('is_staying) 0)
            (else 0)))
))))

(define food_utility (mem (lambda (agent-id location_type)
  (case location_type
    (('lawn) 0)
    (('ames) 0)
    (('barlow) 0)
    (('carson) 0)
    (('danner) 0)
    (('office) 0)
    (else (if (is_open location_type)
              (restaurant_utility agent-id location_type)
              NEGATIVE_UTILITY_MEAN))
))))
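A few worked values can be read directly off the tables above (the agent id 'alice is a hypothetical placeholder; neither function actually uses it as written):

(motion_utility 'alice 'lawn 'is_biking)  ;; => -1    (biking across the lawn is costly)
(motion_utility 'alice 'ames 'is_biking)  ;; => -0.01 (biking on a street is nearly free)
(food_utility 'alice 'lawn)               ;; => 0     (nothing to eat on the lawn)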
(define utility_function (mem (lambda (agent-id gridworld state_x state_y action)
  (let ((location_type (get_gridworld_at gridworld state_x state_y)))
    (let ((motion_type (car action)))
      (let ((state_food_utility (food_utility agent-id location_type)))
        (let ((state_motion_utility (motion_utility agent-id location_type motion_type)))
          (+ state_food_utility state_motion_utility))))))))

(define get_gridworld_at (lambda (gridworld x y)
  (list-elt (list-elt gridworld y) x)
))
(define x_increment (lambda (direction)
  (case direction
    (('west) -1)
    (('east) 1)
    (('north) 0)
    (('south) 0)
    (('stay) 0)
;; ...
(define gridworld_max_y (lambda (gridworld) (length gridworld)))
(define gridworld_transition (lambda (gridworld current_x current_y action)
  (let ((direction (cdr action)))
    (let ((next_x (if (>= current_x (gridworld_max_x gridworld)) current_x (+ (x_increment direction) current_x))))
      (let ((next_x (if (< next_x 1) current_x next_x)))
        (let ((next_y (if (>= current_y (gridworld_max_y gridworld)) current_y (+ (y_increment direction) current_y))))
          (let ((next_y (if (< next_y 1) current_y next_y)))
            (let ((next_state (get_gridworld_at gridworld next_x next_y)))
              (list next_state next_x next_y)
))))))))

(define value_function (mem (lambda (agent-id curr_iteration gridworld state_x state_y)
  (if (equal? curr_iteration -1)
      0
      (let ((prev_optimal_action_value (optimal_action_value agent-id (- curr_iteration 1) gridworld state_x state_y)))
        (cdr prev_optimal_action_value))
))))
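A worked transition as a sanity check (illustrative; it assumes y_increment mirrors x_increment and that gridworld_max_x returns the row width, both defined in elided lines): walking east from the start square (1, 3) moves onto a Barlow street square.

(gridworld_transition gridworld 1 3 (pair 'is_walking 'east))
;; => ('barlow 2 3) under the assumptions above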
(define available_actions_to_values (mem (lambda (agent-id curr_iteration gridworld state_x state_y)
  (map (lambda (action)
         (let ((utility (utility_function agent-id gridworld state_x state_y action)))
           (let ((next_state (gridworld_transition gridworld state_x state_y action)))
             (let ((next_state_x (second next_state)))
               (let ((next_state_y (third next_state)))
                 (let ((next_state_value (value_function agent-id curr_iteration gridworld next_state_x next_state_y)))
                   (pair action (+ utility next_state_value))))))))
       (available_actions agent-id)))))

(define optimal_action_value (mem (lambda (agent-id curr_iteration gridworld state_x state_y)
  (let ((actions_to_values (available_actions_to_values agent-id curr_iteration gridworld state_x state_y)))
;; ...
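Taken together, value_function, available_actions_to_values, and optimal_action_value implement finite-horizon value iteration: each action is scored as its immediate utility plus the value of the resulting state one iteration earlier, bottoming out at zero when the iteration counter reaches -1, with the horizon set by MAX_ITERATIONS. A sketch of a query against this machinery (the agent id 'alice is hypothetical, and the result depends on the sampled utilities):

(available_actions_to_values 'alice 0 gridworld initial_x initial_y)
;; => a list of (action . value) pairs, one per action in (available_actions 'alice)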
;; ...
(define should_terminate (mem (lambda (agent-id gridworld state_x state_y)
  (if (<= (value_function agent-id MAX_ITERATIONS gridworld initial_x initial_y) 0)
      true
      (let ((location_type (get_gridworld_at gridworld state_x state_y)))
        (let ((state_food_utility (food_utility agent-id location_type)))
          (> state_food_utility 0)))))))

(define optimal_policy_from_initial_state (mem (lambda (agent-id gridworld state_x state_y)
  (if (should_terminate agent-id gridworld state_x state_y)
      ()
      (let ((curr_optimal_action_value (optimal_action_value agent-id MAX_ITERATIONS gridworld state_x state_y)))
        (let ((curr_optimal_action (car curr_optimal_action_value)))
          (let ((next_state (gridworld_transition gridworld state_x state_y curr_optimal_action)))
            (let ((next_state_x (second next_state)))
              (let ((next_state_y (third next_state)))
                (let ((remaining_policy (optimal_policy_from_initial_state agent-id gridworld next_state_x next_state_y)))
                  (cons curr_optimal_action remaining_policy)
))))))))))

(define trajectory_from_initial_state (mem (lambda (agent-id gridworld state_x state_y)
  (if (should_terminate agent-id gridworld state_x state_y)
      ()
      (let ((curr_optimal_action_value (optimal_action_value agent-id MAX_ITERATIONS gridworld state_x state_y)))
        (let ((curr_optimal_action (car curr_optimal_action_value)))
          (let ((next_state (gridworld_transition gridworld state_x state_y curr_optimal_action)))
            (let ((next_state_location (first next_state)))
              (let ((next_state_x (second next_state)))
                (let ((next_state_y (third next_state)))
                  (let ((remaining_trajectory (trajectory_from_initial_state agent-id gridworld next_state_x next_state_y)))
                    (cons next_state_location remaining_trajectory))
))))))))))
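Note the shared base case: when should_terminate already holds at the start state, both recursions return the empty list. So a hypothetical agent 'bob for whom no positive-utility plan exists would yield:

(optimal_policy_from_initial_state 'bob gridworld initial_x initial_y)  ;; => ()
(trajectory_from_initial_state 'bob gridworld initial_x initial_y)      ;; => ()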
(define optimal_policy (mem (lambda (agent-id gridworld initial_state_x initial_state_y)
  (cons (pair 'start 'start)
        (optimal_policy_from_initial_state agent-id gridworld initial_state_x initial_state_y)))))

(define optimal_trajectory (mem (lambda (agent-id gridworld initial_state_x initial_state_y)
  (cons (get_gridworld_at gridworld initial_state_x initial_state_y)
        (trajectory_from_initial_state agent-id gridworld initial_state_x initial_state_y))
)))

(define optimal_policy_with_trajectory (mem (lambda (agent-id gridworld initial_state_x initial_state_y)
  (zip (optimal_policy agent-id gridworld initial_state_x initial_state_y)
       (optimal_trajectory agent-id gridworld initial_state_x initial_state_y))
)))

(define get_terminal_goal_state (mem (lambda (agent-id gridworld initial_state_x initial_state_y)
  (last (optimal_trajectory agent-id gridworld initial_state_x initial_state_y)))))
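A sketch of how these top-level accessors are queried (the agent id 'alice is hypothetical, and the concrete values are stochastic, since they depend on the sampled has_bike flag and restaurant utilities):

(optimal_trajectory 'alice gridworld initial_x initial_y)
;; => e.g. ('office 'ames 'ames 'pizza), one possible sample
(get_terminal_goal_state 'alice gridworld initial_x initial_y)
;; => the last element of that trajectory, e.g. 'pizza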
(define trajectory_has_location_type? (mem (lambda (agent-id location_type gridworld initial_state_x initial_state_y)
  (member? location_type (optimal_trajectory agent-id gridworld initial_state_x initial_state_y))
)))
(define policy_has_motion_type? (mem (lambda (agent-id motion_type gridworld initial_state_x initial_state_y)
  (let ((policy_motions (map (lambda (action) (first action)) (optimal_policy agent-id gridworld initial_state_x initial_state_y))))
    (member? motion_type policy_motions)
))))
(define policy_and_trajectory_has_motion_at_location? (mem (lambda (agent-id motion_type location_type gridworld initial_state_x initial_state_y)
  (let ((policy_motions (map (lambda (action) (first action)) (optimal_policy agent-id gridworld initial_state_x initial_state_y))))
    (let ((trajectory (optimal_trajectory agent-id gridworld initial_state_x initial_state_y)))
      (let ((motions_at_locations (zip policy_motions trajectory)))
        (member? (list motion_type location_type) motions_at_locations)
))))))

(define motion_at_location? (mem (lambda (agent-id motion_type location_type gridworld initial_state_x initial_state_y)
  (let ((policy_motions (map (lambda (action) (first action)) (optimal_policy agent-id gridworld initial_state_x initial_state_y))))
    (let ((trajectory (optimal_trajectory agent-id gridworld initial_state_x initial_state_y)))
      (let ((motions_at_locations (zip policy_motions trajectory)))
        motions_at_locations
))))))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Derived predicates.
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(define action_id_gensym (make_gensym "action-"))
(define is_going_to_actions (mem (lambda (agent-id)
  (let ((action_states (optimal_policy_with_trajectory agent-id gridworld initial_x initial_y)))
    (let ((final_location (last (last action_states))))
      (list (list
        (pair 'action_id (action_id_gensym))
        (pair 'action_subject agent-id)
        (pair 'action_predicates (list 'is_going (list 'to final_location)))
        (pair 'action_preposition 'to)
        (pair 'action_location final_location)
)))))))

(define is_going_on_actions (mem (lambda (agent-id)
  (let ((action_states (optimal_policy_with_trajectory agent-id gridworld initial_x initial_y)))
    (fold (lambda (action_state these_actions)
            (let ((action_location (last action_state)))
              (let ((action_manner (first (first
                (let ((action_direction (cdr (first action_state))))
                  (cons (list
                          (pair 'action_id (action_id_gensym))
                          (pair 'action_subject agent-id)
                          (pair 'action_predicates (list 'is_going action_manner action_direction (list 'on action_location)))
                          (pair 'action_preposition 'on)
                          (pair 'action_location action_location))
                        these_actions)))))
          ()
          action_states)))))

(define actions_in_scene (mem (lambda (agent-id)
  (concatenate (is_going_to_actions agent-id) (is_going_on_actions agent-id)))))

;; Predicates over derived action records.
(define is_action? (lambda (action action_predicate)
  (member? action_predicate (lookup action 'action_predicates))))

(define is_subject_of_action? (lambda (action entity)
  (eq? (lookup action 'action_subject) entity)))
(define is_preposition_of_action? (lambda (action preposition)
  (eq? (lookup action 'action_preposition) preposition)))

(define is_location_of_action? (lambda (action location)
  (eq? (lookup action 'action_location) location)))

(define get_location (lambda (action)
  (lookup action 'action_location)))

;; True if any action in the agent's scene satisfies the predicate.
(define (exists_action agent-id predicate)
  (some (map predicate (actions_in_scene agent-id))))

;; Collects all actions in the agent's scene that satisfy the predicate.
(define (get_actions agent-id predicate)
  (fold (lambda (action these_actions)
          (if (predicate action) (cons action these_actions) these_actions))
        ()
        (actions_in_scene agent-id)))

Code Block 11: Generative domain theory for the restaurant navigation domain. Generates agents with varying preferences in a gridworld environment. Also implements a value iteration-based planner directly in the Church code.

# A.4.2 Translation examples for social reasoning domain
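The translation examples below condition on and query this domain theory. As a reading aid, here is an illustrative sketch, not code from the paper, of one action record in the association-list format that Code Block 11 derives; the concrete values 'action-0, 'bob, and 'pizza_place are hypothetical.

;; Illustrative only: a hand-built action record in the format produced by
;; is_going_to_actions above; all concrete values here are hypothetical.
(define example-action
  (list (pair 'action_id 'action-0)
        (pair 'action_subject 'bob)
        (pair 'action_predicates (list 'is_going (list 'to 'pizza_place)))
        (pair 'action_preposition 'to)
        (pair 'action_location 'pizza_place)))

(is_action? example-action 'is_going)        ;; => #t
(is_subject_of_action? example-action 'bob)  ;; => #t
(get_location example-action)                ;; => pizza_place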
  (< (restaurant_utility 'bob 'pizza) 0)
  (< (restaurant_utility 'bob 'vegetarian) 10)
))

;; The pizza place is not open.
(condition (not (is_open 'pizza)))

;; Condition on: Bob walked North on Danner.
(condition (exists_action 'bob (lambda (action)
  (and (is_subject_of_action? action 'bob)
       (is_action? action 'is_walking)
       (is_action? action 'north)
       (is_preposition_of_action? action 'on)
       (is_location_of_action? action 'danner)))))

;; Does Bob like vegetarian food?
(query (> (restaurant_utility 'bob 'vegetarian) 0))

;; Where is Bob going?
(query (get_actions 'bob (lambda (action)
  (and (is_subject_of_action? action 'bob)
       (is_action? action 'is_going)))))

;; Where will Bob go to for lunch?
(query (get_location (first
  (get_actions 'bob (lambda (action)
    (and (and (is_subject_of_action? action
[Figure: "Bootstrapped language-to-code translations for novel words." Each panel pairs a novel-word utterance with its generated Church condition; the utterances shown are "There is a dax.", "A wog blicks a foog.", "There is a pelgy dax.", "A pelgy dax is gorping.", "A wog and a foog are zeeming.", and "A wug gorps feppily." The first panel's translation, reconstructed as far as legible: (condition (exists_object (lambda (object) ((is_shape? 'dax) object))))]
Figure 14: Example translations with novel words, suggesting that language-to-code models can leverage syntax-semantic mappings to inform hypothesized meanings.

# B.2 Code editing

While our framework focuses primarily on generating code in the PLoT, this view encompasses only part of the broader story of natural language. In particular, in certain contexts, it might not make sense to write new code, but instead to modify the existing domain theory. Consider the following statements, taken in the context of the domains we explored in Section 3:

• (Tug-of-war) The team's strength is the strength of the strongest player.
• (Kinship) Avery has two kids from a previous marriage.
• (Visual scenes) There's a red mug stacked on top of a yellow can.
• (Navigation) There's a river separating the North and South sides of town, which you can paddle across in nice weather.

These utterances bend or break the rules of their respective domain theories. To properly integrate these kinds of language, we'd need to edit pieces of the existing generative models.
While language-guided code editing is still an open area of research, recent advances offer an exciting glimpse of what might be possible in the near term. Ouyang et al. (2022) use a combination of finetuning and reinforcement learning to make GPT-3 adhere more closely to human-authored instructions. The resulting InstructGPT models, which OpenAI make available on their API, are capable of editing existing text based on short natural language instructions (e.g., "Fix the grammar"; "Turn this into a poem.").9 Excitingly, this same approach extends to code-based LLMs, meaning that it is possible to prompt GPT models to edit a piece of code according to some instructions. Indeed, we can use OpenAI's editing interface off-the-shelf to handle utterances requiring localized changes to the domain model (see below for a simple example in the tug-of-war domain).

9. https://openai.com/blog/gpt-3-edit-insert/
Redefine: The team's strength is the strength of the strongest player.

Before the edit:

;; The team's strength is the sum of the players' strengths.
;; When a player is lazy in a match, they pull with half their strength.
(define (team-strength team)
  (sum (map
    (lambda (player)
      (if (flip (laziness player))
          (/ (strength player) 2)
          (strength player)))
    team)))

After the edit:

;; The team's strength is the strength of the strongest player.
;; When a player is lazy in a match, they pull with half their strength.
(define (team-strength team)
  (apply max (map
    (lambda (player)
      (if (flip (laziness player))
          (/ (strength player) 2)
          (strength player)))
    team)))

The edit is localized: only the leading comment and the aggregation over players change, from a sum to a maximum, while the laziness model is left intact.
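To make the behavioral difference concrete, here is a minimal, self-contained sketch (not from the paper); the player strengths 5 and 3 are hypothetical and laziness is ignored:

;; Illustrative only: with player strengths 5 and 3 and no laziness,
;; the original definition aggregates by sum and the edited one by max.
(define strengths (list 5 3))
(apply + strengths)    ;; => 8 (original: total team strength)
(apply max strengths)  ;; => 5 (edited: strength of the strongest player)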
Though questions of scaling and robustness remain, the problem of modeling sequences of code changes is currently gaining traction in the machine learning for code community, which has recently produced multiple language-guided neural models of code editing (Chakraborty, Ding, Allamanis, & Ray, 2022; Chakraborty & Ray, 2021; Fried et al., 2022; Panthaplackel, Nie, Gligoric, Li, & Mooney, 2020; Reid & Neubig, 2022; J. Zhang, Panthaplackel, Nie, Li, & Gligorić, 2022) that draw broadly on contemporary work in automated program repair (Bai et al., 2021; Y. Li, Wang, & Nguyen, 2020; Yasunaga & Liang, 2020). These advances suggest a broader vision for our framework in which domain theories, expressed in the PLoT, can be iteratively grown and revised to reflect natural language instruction. Moreover, as code LLMs become more general-purpose, the technical gap between generation and editing will continue to narrow, suggesting a point in the near future where defining new components of a domain theory will be a special case of language-guided code editing.

# C Attributions

# C.1 Attribution of graphics resources

Artificial neural network icon by sachin modgekar from thenounproject.com.
arXiv:2306.16527v2 [cs.IR] 21 Aug 2023

# OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents

Hugo Laurençon∗,1,2 Lucile Saulnier∗,1 Léo Tronchon∗,1 Stas Bekman∗,1 Amanpreet Singh∗,1 Anton Lozhkov1 Thomas Wang1 Siddharth Karamcheti1,3 Alexander M. Rush†,1 Douwe Kiela†,1,3 Matthieu Cord†,2 Victor Sanh∗,†,1

∗Equal contributions, †Senior contributions

[email protected]

1Hugging Face 2Sorbonne Université 3Stanford University
# An Overview of Catastrophic AI Risks

Dan Hendrycks (Center for AI Safety), Mantas Mazeika (Center for AI Safety), Thomas Woodside (Center for AI Safety)

# Abstract

Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.1
# LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models

Shizhe Diao∗, Rui Pan∗, Hanze Dong∗, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang

# Abstract

Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.

# Introduction
# Abstract

Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.1

# 1 Introduction
1 This paper is for a wide audience, unlike most of our writing, which is for empirical AI researchers. We use imagery, stories, and a simplified style to discuss the risks that advanced AIs could pose, because we think this is an important topic for everyone.

# Executive Summary

Artificial intelligence (AI) has seen rapid advancements in recent years, raising concerns among AI experts, policymakers, and world leaders about the potential risks posed by advanced AIs. As with all powerful technologies, AI must be handled with great responsibility to manage the risks and harness its potential for the betterment of society. However, there is limited accessible information on how catastrophic or existential AI risks might transpire or be addressed. While numerous sources on this subject exist, they tend to be spread across various papers, often targeted toward a narrow audience or focused on specific risks. In this paper, we provide an overview of the main sources of catastrophic AI risk, which we organize into four categories:

Malicious use. Actors could intentionally harness powerful AIs to cause widespread harm. Specific risks include bioterrorism enabled by AIs that can help humans create deadly pathogens; the deliberate dissemination of uncontrolled AI agents; and the use of AI capabilities for propaganda, censorship, and surveillance. To reduce these risks, we suggest improving biosecurity, restricting access to the most dangerous AI models, and holding AI developers legally liable for damages caused by their AI systems.
2306.12001#2
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
2
# Introduction

Large foundation models, and in particular large language models (LLMs), have demonstrated general abilities to perform different tasks beyond what was possible previously. However, for specialized domains or tasks, it is necessary to further finetune such LLMs to achieve improved performance on such domains or tasks. The typical processes to finetune such large models include:
• Continuous pretraining on special domains so that a large foundation model can acquire knowledge of these domains.
• Instruction tuning to teach a large foundation model the capability to follow specialized natural language instructions and perform tasks required by such instructions.
• Reinforcement learning with human feedback (RLHF) to teach a large foundation model skills to perform conversation according to human preference.
While a number of pretrained large models, including GPT-J [35], Bloom [30], LLaMA [34], etc., are publicly available and have already been incorporated into the Hugging Face model repository [16], there is no publicly available toolkit that can be easily used to perform finetuning tasks for these different models. The purpose of this package is to offer a simple-to-use and lightweight toolkit so that developers and researchers can perform efficient finetuning and inference of large models with limited resources.
2306.12420#2
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
2
# 1 Introduction Recent systems demonstrate the effectiveness of training large multimodal models such as Flamingo on naturally occurring multimodal documents (Alayrac et al., 2022; Aghajanyan et al., 2022; Huang et al., 2023). A multimodal document is a succession of text paragraphs interleaved by images, such as web pages that contain images. Models trained on these web documents outperform vision and language models trained solely on image-text pairs on various benchmarks (Alayrac et al., 2022). They can also generate long and coherent text about a set of multiple images. While these results are compelling, they have not been replicable. The datasets used in these works are not publicly available, and relatively little information is known about their creation process and composition. This state motivates the creation of large-scale collections of high-quality multimodal web documents to support the creation of the next generation of models. We take inspiration from existing large open image-text datasets such as LAION (Schuhmann et al., 2022) and COYO (Byeon et al., 2022), comprised of hundreds of millions of image-text
2306.16527#2
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
3
AI race. Competition could pressure nations and corporations to rush the development of AIs and cede control to AI systems. Militaries might face pressure to develop autonomous weapons and use AIs for cyberwarfare, enabling a new kind of automated warfare where accidents can spiral out of control before humans have the chance to intervene. Corporations will face similar incentives to automate human labor and prioritize profits over safety, potentially leading to mass unemployment and dependence on AI systems. We also discuss how evolutionary pressures might shape AIs in the long run. Natural selection among AIs may lead to selfish traits, and the advantages AIs have over humans could eventually lead to the displacement of humanity. To reduce risks from an AI race, we suggest implementing safety regulations, international coordination, and public control of general-purpose AIs. Organizational risks. Organizational accidents have caused disasters including Chernobyl, Three Mile Island, and the Challenger Space Shuttle disaster. Similarly, the organizations developing and deploying advanced AIs could suffer catastrophic accidents, particularly if they do not have a strong safety culture. AIs could be accidentally leaked to the public or stolen by malicious actors. Organizations could fail to invest in safety research, lack understanding of how to reliably improve AI safety faster than general AI capabilities, or suppress internal concerns about AI risks. To reduce these risks, better organizational cultures and structures can be established, including internal and external audits, multiple layers of defense against risks, and state-of-the-art information security.
2306.12001#3
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
3
[Figure 1 omitted: system diagram showing the LMFlow pipeline from public foundation models (e.g., LLaMA, Bloom) through (1) domain adaptation on domain-specific data (law, medical, finance, ...), (2) task adaptation (summarization, Q&A, translation, ...), (3) instruction finetuning, and (4) RLHF with a reward model, ending in model deployment.]

Figure 1: The system design of LMFlow. Starting from a publicly available foundation model, there are four possible stages including (1) domain adaptation, (2) task adaptation, (3) instruction finetuning, and (4) reinforcement learning with human feedback.

The following key features are supported by the toolkit:
• Continuous pretraining, instruction tuning, and RLHF on user-defined datasets.
• Simple and extensible APIs for developers.
• Efficient tuning with low-rank adaptation (LoRA); a brief example is sketched after this list.
• A novel RLHF algorithm RAFT (Reward rAnked FineTuning) to simplify the RLHF pipeline for generative models.
• A simplified model inference framework.
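To make the LoRA feature above concrete, here is a minimal sketch of low-rank adaptation using the Hugging Face PEFT library directly. It is illustrative only: LMFlow wraps this kind of setup behind its own scripts, and the base model name ("gpt2") is just a small stand-in.

# Hedged sketch: LoRA via the PEFT library, not LMFlow's own API.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")  # small stand-in base model

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,              # rank of the low-rank update matrices
    lora_alpha=16,    # scaling applied to the low-rank update
    lora_dropout=0.05,
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable

Because only the adapter matrices receive gradients, memory use drops sharply, which is what makes single-GPU finetuning of 7B-scale models feasible.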
2306.12420#3
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
3
OBELICS: https://huggingface.co/datasets/HuggingFaceM4/OBELICS
OBELICS reproduction code: https://github.com/huggingface/OBELICS
IDEFICS models: https://huggingface.co/HuggingFaceM4/idefics-80b

[Figure 1 omitted: side-by-side comparison of an "Image-Text Pairs" extraction (short, repetitive alt-texts such as "Tottenham vs Chelsea Live Streaming") and a "Multimodal Document" extraction (the same images interleaved with the page's full match-preview text).]

Figure 1: A comparison of extraction from the same web document. For image-text pairs, the alt-text of images is often short or non-grammatical. For OBELICS, the extracted multimodal web document interleaves long-form text with the images on the page.
2306.16527#3
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
4
Rogue AIs. A common and serious concern is that we might lose control over AIs as they become more intelligent than we are. AIs could optimize flawed objectives to an extreme degree in a process called proxy gaming. AIs could experience goal drift as they adapt to a changing environment, similar to how people acquire and lose goals throughout their lives. In some cases, it might be instrumentally rational for AIs to become power-seeking. We also look at how and why AIs might engage in deception, appearing to be under control when they are not. These problems are more technical than the first three sources of risk. We outline some suggested research directions for advancing our understanding of how to ensure AIs are controllable. Throughout each section, we provide illustrative scenarios that demonstrate more concretely how the sources of risk might lead to catastrophic outcomes or even pose existential threats. By offering a positive vision of a safer future in which risks are managed appropriately, we emphasize that the emerging risks of AI are serious but not insurmountable. By proactively addressing these risks, we can work toward realizing the benefits of AI while minimizing the potential for catastrophic outcomes.

# Contents

1 Introduction
2306.12001#4
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
4
• A novel RLHF algorithm RAFT (Reward rAnked FineTuning) to simplify the RLHF pipeline for generative models.
• A simplified model inference framework.

Based on a 7-billion-parameter LLaMA model, it only takes one Nvidia 3090 GPU and five hours to train a personalized model. We used this framework to finetune a series of 7-billion, 13-billion, 33-billion, and 65-billion parameter versions of LLaMA on a single machine and have released the model weights for academic research. The trained model weights can be immediately used for a question-and-answer service on the website lmflow.com. Using LMFlow, anyone can train their own personalized model. Each person can choose the appropriate model according to their available resources, for tasks such as question answering, companionship, writing, translation, and expert consultations in various fields. The larger the model and data size and the longer the training time, the better the results. Currently, we have trained a 33B model and achieved comparable or even better performance than ChatGPT.

# 2 Toolkit Overview

# 2.1 System Design
2306.12420#4
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
4
pairs obtained through web crawling. These datasets have been critical to developing and replicating numerous recent multimodal models (Radford et al., 2021; Wang et al., 2022; Yu et al., 2022; Wang et al., 2022; Liu et al., 2023). While this approach allows for building extremely large and diverse training datasets, we note several limitations to using only image-text pairs. From a language perspective, these datasets rely primarily on alt-text, meaning the text given is brief, captures an approximate snapshot of the image’s content, and often lacks grammatical correctness. From a document perspective, image-text pairs remove an image from its natural context on a page and its relationship with other documents. In this work, we introduce OBELICS2, an openly-accessible curated web-scale dataset consisting of 141 million multimodal English web documents which contain 353 million associated images and 115 billion tokens. OBELICS collects full multimodal documents interleaving text and images as shown in Figure 1. We describe the dataset creation process, outline the filtering and curation steps and shed light on the dataset’s content and limitations. To demonstrate the viability of OBELICS, we train IDEFICS, an 80 billion parameter multimodal model and show competitive performance against large-scale multimodal models such as Flamingo (Alayrac et al., 2022). # 2 Related Works
2306.16527#4
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
5
2.1 Bioterrorism
2.2 Unleashing AI Agents
2.3 Persuasive AIs
2.4 Concentration of Power
2.5 Suggestions
3.1 Military AI Arms Race
3.1.1 Lethal Autonomous Weapons
2306.12001#5
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
5
# 2 Toolkit Overview

# 2.1 System Design

An illustration of the LMFlow system design is shown in Figure 1. There are four stages for improving the performance of a publicly available large language model. The first stage is domain adaptation, which involves modifying the model to better handle a specific domain by training the model on that domain. The second stage is task adaptation, which involves adapting the model to perform a specific task, such as summarization, question-answering, and translation. The third stage is instruction finetuning, which involves adjusting the model’s parameters based on instructional question-answer pairs. The final stage is reinforcement learning with human feedback, which involves using human feedback to further align the model to human preference. LMFlow provides a complete finetuning workflow for these four stages, supporting large language models’ personalized training with limited computing resources.

# 2.2 Installation

LMFlow has been fully tested on Linux OS (Ubuntu 20.04) and can be installed by executing the following commands.

$ git clone https://github.com/OptimalScale/LMFlow.git
$ cd LMFlow
$ conda create -n lmflow python=3.9 -y
$ conda activate lmflow
$ conda install mpi4py
$ pip install -e .

# 2.3 Data Format
2306.12420#5
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
5
# 2 Related Works

Image-text pairs datasets The largest multimodal datasets, such as LAION (Schuhmann et al., 2021, 2022), Conceptual Captions (Sharma et al., 2018; Changpinyo et al., 2021), ALIGN (Jia et al., 2021), COYO (Byeon et al., 2022), and DataComp (Gadre et al., 2023), contain billions of image-text pairs and are usually obtained through web-crawling and alt-text extraction. A variety of multimodal models have been trained on this type of dataset: multimodal encoder models which use a contrastive objective (Radford et al., 2021; Wang et al., 2022), image generation based on Transformers or diffusion processes (Nichol et al., 2022; Ramesh et al., 2022; Rombach et al., 2021; Saharia et al., 2022). While the scale of these datasets makes them attractive candidates for training, our work focuses on extracting images and the textual context in which they appear instead of extracting the associated alternative text.

Web document datasets Insights from scaling language models (Kaplan et al., 2020; Hoffmann et al., 2022) emphasize the need for increasingly bigger datasets. For instance,

2 Open Bimodal Examples from Large fIltered Commoncrawl Snapshots
2306.16527#5
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
6
3.1.1 Lethal Autonomous Weapons (LAWs)
3.1.2 Cyberwarfare
3.1.3 Automated Warfare
3.1.4 Actors May Risk Extinction Over Individual Defeat
3.2.1 Economic Competition Undercuts Safety
3.2.2 Automated Economy
2306.12001#6
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
6
# 2.3 Data Format

LMFlow accepts several .json files as input. Users can provide a list of .json files under a specified dataset directory. For example,

|- path_to_dataset
  |- data_1.json
  |- data_2.json
  |- another_data.json
  |- ...

Each json file shall have the following format (three instances with four keys for example),

{
  "type": "TYPE",
  "instances": [
    {
      "KEY_1": "VALUE_1.1",
      "KEY_2": "VALUE_1.2",
      "KEY_3": "VALUE_1.3",
      "KEY_4": "VALUE_1.4"
    },
    {
      "KEY_1": "VALUE_2.1",
      "KEY_2": "VALUE_2.2",
      "KEY_3": "VALUE_2.3",
      "KEY_4": "VALUE_2.4"
    },
    {
      "KEY_1": "VALUE_3.1",
      "KEY_2": "VALUE_3.2",
      "KEY_3": "VALUE_3.3",
      "KEY_4": "VALUE_3.4"
    }
  ]
}
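To illustrate the directory-plus-JSON layout above, the following standard-library Python sketch loads every .json file under a dataset directory and checks the documented top-level structure. It is a reader for the format as described, not LMFlow's own loader; the function name is hypothetical.

import json
from pathlib import Path

def load_dataset_dir(dataset_dir):
    # Collect instances from all .json files sharing one dataset type.
    dtype, instances = None, []
    for path in sorted(Path(dataset_dir).glob("*.json")):
        data = json.loads(path.read_text())
        # Every file must follow the {"type": ..., "instances": [...]} layout.
        assert "type" in data and "instances" in data, f"bad layout: {path}"
        dtype = dtype or data["type"]
        assert data["type"] == dtype, "files in one directory share a type"
        instances.extend(data["instances"])
    return dtype, instances

# Example: dtype, instances = load_dataset_dir("path_to_dataset")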
2306.12420#6
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
6
LLaMA (Touvron et al., 2023) was trained on a dataset of 1.4T tokens created exclusively from openly accessible English web content. The authors noticed that an even bigger dataset would have benefited the model. To address that need, multiple web-scale datasets have been introduced and made available: c4 (Raffel et al., 2019), ROOTS (Laurençon et al., 2022), Pile (Gao et al., 2020), OSCAR (Ortiz Suárez et al., 2020). Although OBELICS falls in the same category of making accessible large collections of curated web documents, the additional extraction of images changes the nature of the resulting dataset. It allows training models with additional vision capabilities.
2306.16527#6
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
7
3.2 Corporate AI Race
3.2.2 Automated Economy
3.3 Evolutionary Pressures
3.4 Suggestions
4.1 Accidents Are Hard to Avoid
4.2 Organizational Factors can Reduce the Chances of Catastrophe
4.3 Suggestions
5.1 Proxy Gaming
5.2 Goal Drift
5.3 Power-Seeking
2306.12001#7
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
7
where the TYPE indicates the dataset type and defines the set of keys { KEY_1, KEY_2, ... } and their corresponding interpretations. A list of supported types is detailed as follows.

TextOnly This is the most common dataset type, which only contains raw texts in each sample. This type of dataset can be used as the training set for text decoder models, or the input of decoder models / encoder-decoder models. Its format is as follows (three instances, for example),

{
  "type": "text_only",
  "instances": [
    { "text": "SAMPLE_TEXT_1" },
    { "text": "SAMPLE_TEXT_2" },
    { "text": "SAMPLE_TEXT_3" }
  ]
}

Text2Text This is the dataset type mostly used for inferencing, which contains a pair of texts in each sample. This type of dataset can be used as the training set for text encoder-decoder models, or question-answer pairs for evaluating model inferences. Its format is as follows (three instances, for example); a small helper that writes this layout is sketched below.
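As a companion to the formats above, here is a small illustrative helper (not part of LMFlow; the function name is hypothetical) that serializes question-answer pairs into the text2text layout:

import json

def write_text2text(pairs, out_path):
    # Serialize (input, output) pairs into the text2text format.
    data = {
        "type": "text2text",
        "instances": [{"input": q, "output": a} for q, a in pairs],
    }
    with open(out_path, "w") as f:
        json.dump(data, f, indent=2)

write_text2text(
    [("What is LMFlow?", "An extensible toolkit for finetuning large models.")],
    "data_1.json",
)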
2306.12420#7
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
7
Multimodal web document datasets The most performant recent vision and language models are trained on large sets of multimodal web documents. For instance, Flamingo (Alayrac et al., 2022), an 80 billion parameter multimodal model, was trained on a mix of 2.1 billion image-text pairs, 27 million video-text pairs, and 43 million multimodal web documents. The latter, called M3W, includes 185 million images. Similarly, KOSMOS-1 (Huang et al., 2023) was trained on a mixture containing 71 million multimodal web documents. However, in both cases, the dataset is not publicly available, and little information is accessible as to the dataset’s content, the strategies employed to create that dataset (including filtering strategies), and the quality of the resulting web documents, which ultimately hinders further research.
2306.16527#7
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
8
4.3 Suggestions
5.1 Proxy Gaming
5.2 Goal Drift
5.3 Power-Seeking
5.4 Deception
5.5 Suggestions
2306.12001#8
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
8
1 { 2 3 "type": "text2text", "instances": [ 4 { 5 6 "input": "SAMPLE_INPUT_1", "output": "SAMPLE_OUTPUT_1", 7 8 }, { 9 10 "input": "SAMPLE_INPUT_2", "output": "SAMPLE_OUTPUT_2", 11 12 }, { 13 14 "input": "SAMPLE_INPUT_3", "output": "SAMPLE_OUTPUT_3", 15 }, 16 17 } ] # 2.4 Continuous Pretraining The endeavor to bridge the divide between pretraining domains and downstream domains has led to the adoption of a prevalent approach, known as continuous pretraining [4, 1, 15, 21], which involves the ongoing pretraining on an extensive collection of unlabeled data that is specific to a given domain. Continuous pretraining is LMFlow supports continuous pretraining natively, which is an effective way to adapt LLMs to a specific domain. Users just need to collect a set of unlabeled data and prepare them to TextOnly data format. The following process will be handled by autoregressive training. # Instruction Tuning
2306.12420#8
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
8
Concurrently to our work, the Multimodal C4 (mmc4) dataset (Zhu et al., 2023) was recently made accessible. It consists of 103 million multimodal web documents that include 585 million images. Although there are similarities between our datasets, it is important to highlight particular distinctions. First, our dataset is based on more recent documents from February 2020 to February 2023, whereas mmc4 uses documents from April 2019. Additionally, our filtering heuristics appear to be more comprehensive: we leverage the HTML DOM trees to filter out undesirable texts and images, whereas mmc4 uses the HTML to find images in order to merge them with the original C4 dataset by solving a bipartite assignment problem based on CLIP model similarities. Last, we implement additional deduplication steps at the image, document, and paragraph levels.

# 3 Creation of the Multimodal Web Document Dataset
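The paper does not spell out its deduplication mechanics in this excerpt, so the following is only a generic illustration of exact-match paragraph deduplication by hashing; OBELICS's actual strategy may well be more sophisticated.

import hashlib

def dedup_paragraphs(documents):
    # Drop paragraphs whose normalized text was already seen (exact match).
    seen, deduped = set(), []
    for doc in documents:
        kept = []
        for para in doc.split("\n\n"):
            digest = hashlib.md5(para.strip().lower().encode()).hexdigest()
            if digest not in seen:
                seen.add(digest)
                kept.append(para)
        deduped.append("\n\n".join(kept))
    return deduped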
2306.16527#8
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
9
# 2 Malicious Use # 3 AI Race # 4 Organizational Risks # 5 Rogue AIs # 6 Discussion of Connections Between Risks # 7 Conclusion # A Frequently Asked Questions # 1 Introduction The world as we know it is not normal. We take for granted that we can talk instantaneously with people thousands of miles away, fly to the other side of the world in less than a day, and access vast mountains of accumulated knowledge on devices we carry around in our pockets. These realities seemed far-fetched decades ago, and would have been inconceivable to people living centuries ago. The ways we live, work, travel, and communicate have only been possible for a tiny fraction of human history. Yet, when we look at the bigger picture, a broader pattern emerges: accelerating development. Hundreds of thousands of years elapsed between the time Homo sapiens appeared on Earth and the agricultural revolution. Then, thousands of years passed before the industrial revolution. Now, just centuries later, the artificial intelligence (AI) revolution is beginning. The march of history is not constant: it is rapidly accelerating.
2306.12001#9
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
9
# Instruction Tuning Instruction tuning [29, 38, 9, 23, 37], also called supervised finetuning, is an approach used to enhance the performance of language models by training them to follow natural language instructions. This involves training the model on a small set of task-specific data, most of which is in prompt-answer format, including positive or negative examples, prompts, constraints, and other elements commonly present in human language. The primary objective of instruction tuning is to improve the model’s proficiency in undertaking multiple tasks and to generalize more effectively to new or unseen tasks. This is accomplished by teaching the model to comprehend and integrate various language cues and constraints relevant to the given task. By improving the language models’ ability to comprehend and follow natural language commands, this approach can unlock new levels of performance and productivity in diverse applications. Instruction tuning enables LLMs to provide more accurate and relevant responses to user queries, making them more effective conversational agents. # 2.6 RLHF as Finetuning
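To ground the description above, here is a minimal sketch of supervised finetuning on prompt-answer pairs using Hugging Face Transformers. It is not LMFlow's API; the model name, the two toy examples, and the learning rate are placeholder assumptions.

```python
# Minimal sketch of instruction tuning (supervised finetuning) on
# prompt-answer pairs. Illustrative only: not LMFlow's API; the model,
# examples, and hyperparameters are placeholders.
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = AdamW(model.parameters(), lr=5e-5)

pairs = [
    ("Instruction: Translate 'bonjour' to English.\nResponse:", " hello"),
    ("Instruction: What is 2 + 2?\nResponse:", " 4"),
]

model.train()
for prompt, answer in pairs:
    # Standard causal-LM objective over the concatenated prompt + answer.
    batch = tokenizer(prompt + answer, return_tensors="pt")
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

A production setup would typically batch examples, train for multiple epochs, and mask the loss on prompt tokens so that only the response portion is learned.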
2306.12420#9
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a remarkable ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required to obtain satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
9
# 3 Creation of the Multimodal Web Document Dataset [Figure 2: Overview of the steps involved in creating OBELICS. The pipeline proceeds: Common Crawl data (41.2B docs) → collecting a large number of HTML files (selection of English content, early text deduplication, quality classification) → 1.1B docs → simplifying HTML files (DOM tree cleaning strategies: tag unwrapping, node removal, modification of specific nodes) → 10x smaller HTML files → extracting multimodal web documents (preservation of the original structure of the web pages, image downloading) → 1.1B docs, 2B images → filtering multimodal web documents (node-level image filtering, paragraph-level text filtering, document-level filtering) → 365M docs, 1.4B images → responsible filtering (exclusion of opted-out images, NSFW image removal) and deduplicating (image, document, and paragraph deduplication) → 141M docs, 353M images → OBELICS.] This section provides an overview of the critical choices of the creation and filtering process. Figure 2 gives a high-level summary of the main steps involved. Many details are omitted from this section, and we invite the reader to refer to appendix A.1 for completeness. # 3.1 Collecting a Large Number of HTML Files
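As a concrete illustration of one stage in the pipeline above, here is a minimal hash-based sketch of paragraph-level deduplication. It is a simplified, assumption-laden stand-in, not the authors' implementation (which is detailed in their appendix).

```python
# Minimal sketch of paragraph-level deduplication via content hashing.
# Illustrative only: a simplified stand-in for OBELICS's deduplication
# step, not the authors' implementation.
import hashlib

def dedup_paragraphs(documents: list[list[str]]) -> list[list[str]]:
    """Drop repeated paragraphs across a corpus, keeping first occurrences."""
    seen: set[str] = set()
    cleaned = []
    for doc in documents:
        kept = []
        for para in doc:
            # Normalize whitespace before hashing so trivial formatting
            # differences do not defeat the deduplication.
            digest = hashlib.sha256(" ".join(para.split()).encode()).hexdigest()
            if digest not in seen:
                seen.add(digest)
                kept.append(para)
        cleaned.append(kept)
    return cleaned

docs = [["Breaking news today.", "Subscribe to our newsletter."],
        ["Another article.", "Subscribe to our newsletter."]]
print(dedup_paragraphs(docs))  # the boilerplate paragraph survives only once
```

Exact hashing like this removes only verbatim repeats (typically boilerplate); near-duplicate detection would require fuzzier signatures such as MinHash.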
2306.16527#9
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
10
We can capture this trend quantitatively in Figure 1, which shows how estimated gross world product has changed over time [1, 2]. The hyperbolic growth it depicts might be explained by the fact that, as technology advances, the rate of technological advancement also tends to increase. Empowered with new technologies, people can innovate faster than they could before. Thus, the gap in time between each landmark development narrows. It is the rapid pace of development, as much as the sophistication of our technology, that makes the present day an unprecedented time in human history. We have reached a point where technological advancements can transform the world beyond recognition within a human lifetime. For example, people who have lived through the creation of the internet can remember a time when our now digitally-connected world would have seemed like science fiction. From a historical perspective, it appears possible that the same amount of development could now be condensed into an even shorter timeframe. We might not be certain that this will occur, but neither can we rule it out. We therefore wonder: what new technology might usher in the next big acceleration? In light of recent advances, AI seems an increasingly plausible candidate. Perhaps, as AI continues to become more powerful, it could lead to a qualitative shift in the world, more profound than any we have experienced so far. It could be the most impactful period in history, though it could also be the last.
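To make the notion of hyperbolic growth precise, here is a standard textbook derivation, not taken from the paper: when the growth rate itself increases with output, the trajectory diverges in finite time.

```latex
% Standard illustration of hyperbolic growth (a textbook result, not from
% the paper). If output x grows superlinearly, separating variables in the
% differential equation yields a finite-time singularity:
\[
  \frac{dx}{dt} = k\,x^{1+\epsilon}, \quad \epsilon > 0
  \;\Longrightarrow\;
  x(t) = \bigl(x_0^{-\epsilon} - \epsilon k t\bigr)^{-1/\epsilon},
  \qquad
  x(t) \to \infty \ \text{as}\ t \to t^{*} = \frac{x_0^{-\epsilon}}{\epsilon k}.
\]
```

This is why the trend in the figure looks qualitatively different from exponential growth, which never diverges in finite time.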
2306.12001#10
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]