id | title | content | prechunk_id | postchunk_id | arxiv_id | references
---|---|---|---|---|---|---
2307.12856#75
|
A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis
|
# F Table 10: Performance, competitive results to original (R-1). Models: LongT5-Large 48.28, LongT5-XL 48.35, Flan-LongT5-Large 48.52, Flan-LongT5-XL 48.37. Table 11: Performance of F... Preprint # J PER-TASK PERFORMANCE ON MINIWOB++
|
2307.12856#74
|
2307.12856#76
|
2307.12856
|
[
"2101.02235"
] |
2307.12856#76
|
A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis
|
| Task | HTML-T5-XL (347K) | HTML-T5-XL (12K) | Flan-T5-XL (347K) | WebN-T5-XL (12K) |
|---|---|---|---|---|
| book-flight | 0.99 | 0.00 | 0.48 | 0.00 |
| choose-date | 0.16 | 0.03 | 0.08 | 0.00 |
| choose-date-easy | 1.00 | 0.28 | 1.00 | 0.03 |
| choose-date-medium | 0.56 | 0.14 | 0.57 | 0.00 |
| choose-list | 0.22 | 0.19 | 0.16 | 0.26 |
| click-button | 1.00 | 0.92 | 0.98 | 1.00 |
| click-button-sequence | 1.00 | 1.00 | 1.00 | 1.00 |
| click-checkboxes | 1.00 | 1.00 | 1.00 | 0.96 |
| click-checkboxes-large | 0.90 | 0.94 | 0.98 | 0.22 |
| click-checkboxes-soft | 0.99 | 0.64 | 1.00 | 0.54 |
| click-checkboxes-transfer | 1.00 | 1.00 | 0.99 | 0.63 |
| click-collapsible | 1.00 | 0.41 | 1.00 | 0.00 |
| click-collapsible-2 | 0.93 | 0.26 | 0.94 | 0.00 |
| click-color | 1.00 | 1.00 | 0.27 | 0.27 |
| click-dialog | 1.00 | 1.00 | 1.00 | 1.00 |
| click-dialog-2 | 0.74 | 0.31 | 0.34 | 0.24 |
| click-link | 0.99 | 1.00 | 1.00 | 1.00 |
| click-menu | 0.37 | 0.26 | 0.41 | 0.37 |
| click-option | 1.00 | 1.00 | 1.00 | 0.87 |
| click-pie | 0.96 | 0.89 | 0.99 | 0.51 |
| click-scroll-list | 0.99 | 0.91 | 0.00 | 0.00 |
| click-shades | 0.00 | 0.05 | 0.00 | 0.00 |
| click-shape | 0.79 | 0.57 | 0.58 | 0.53 |
| click-tab | 1.00 | 1.00 | 1.00 | 0.74 |
| click-tab-2 | 0.94 | 0.40 | 0.94 | 0.18 |
| click-tab-2-hard | 0.88 | 0.30 | 0.57 | 0.12 |
| click-test | 1.00 | 1.00 | 1.00 | 1.00 |
| click-test-2 | 1.00 | 1.00 | 1.00 | 1.00 |
| click-widget | 1.00 | 0.94 | 1.00 | 1.00 |
| count-shape | 0.67 | 0.55 | 0.64 | 0.41 |
| email-inbox | 1.00 | 0.99 | 0.99 | 0.38 |
| email-inbox-forward-nl | 1.00 | 0.92 | 1.00 | 0.60 |
| email-inbox-forward-nl-turk | 1.00 | 1.00 | 1.00 | 0.33 |
| email-inbox-nl-turk | 0.99 | 0.76 | 0.92 | 0.23 |
| enter-date | 1.00 | 0.00 | 1.00 | 0.00 |
| enter-password | 1.00 | 0.99 | 1.00 | 0.97 |
| enter-text | 1.00 | 0.96 | 1.00 | 0.89 |
| enter-text-dynamic | 1.00 | 1.00 | 1.00 | 0.98 |
| enter-time | 1.00 | 0.00 | 0.00 | 0.00 |
| focus-text | 1.00 | 1.00 | 1.00 | 1.00 |
| focus-text-2 | 1.00 | 1.00 | 1.00 | 1.00 |
| grid-coordinate | 1.00 | 1.00 | 1.00 | 0.49 |
| guess-number | 0.13 | 0.00 | 0.10 | 0.00 |
| identify-shape | 1.00 | 0.89 | 0.90 | 0.88 |
| login-user | 1.00 | 0.80 | 1.00 | 0.82 |
| login-user-popup | 1.00 | 0.63 | 0.97 | 0.72 |
| multi-layouts | 1.00 | 1.00 | 1.00 | 0.83 |
| multi-orderings | 1.00 | 1.00 | 1.00 | 0.88 |
| navigate-tree | 0.99 | 0.99 | 1.00 | 0.91 |
| search-engine | 0.93 | 0.55 | 0.59 | 0.34 |
| social-media | 0.99 | 0.93 | 0.99 | 0.21 |
| social-media-all | 0.31 | 0.84 | 0.09 | 0.00 |
| social-media-some | 0.89 | 0.60 | 0.39 | 0.02 |
| tic-tac-toe | 0.57 | 0.46 | 0.42 | 0.48 |
| use-autocomplete | 0.97 | 0.23 | 0.98 | 0.22 |
| use-spinner | 0.07 | 0.07 | 0.03 | 0.07 |
|
2307.12856#75
|
2307.12856#77
|
2307.12856
|
[
"2101.02235"
] |
2307.12856#77
|
A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis
|
Table 12: Per-task average success rate on 56 tasks from MiniWoB++. We refer to Furuta et al. (2023) and Gur et al. (2022) for the baseline performances.
|
2307.12856#76
|
2307.12856
|
[
"2101.02235"
] |
|
2307.12573#0
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
arXiv:2307.12573v1 [cs.CL] 24 Jul 2023 # Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models Yuanzhi Liang 1, Linchao Zhu 2, Yi Yang 2 1 University of Technology Sydney, 2 Zhejiang University [email protected] [email protected] [email protected] # Abstract
|
2307.12573#1
|
2307.12573
|
[
"2212.10060"
] |
|
2307.12573#1
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
Recent advancements in natural language and Large Language Models (LLMs) have enabled AI agents to simulate human-like interactions within virtual worlds. However, these interactions still face limitations in complexity and flexibility, particularly in scenarios involving multiple characters and novel objects. Pre-defining all interactable objects in the agent's world model presents challenges, and conveying implicit intentions to multiple characters through complex interactions remains difficult. To address these issues, we propose integrating virtual Game Masters (GMs) into the agent's world model, drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a crucial role in overseeing information, estimating players' intentions, providing environment descriptions, and offering feedback, compensating for current world model defi
|
2307.12573#0
|
2307.12573#2
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#2
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
ciencies. To facilitate future explorations of complex interactions, we introduce a benchmark named Tachikuma, comprising a Multiple character and novel Object based interaction Estimation (MOE) task and a supporting dataset. MOE challenges models to understand characters' intentions and accurately determine their actions within intricate contexts involving multi-character and novel object interactions. Besides, the dataset captures log data from real-time communications during gameplay, providing diverse, grounded, and complex interactions for further explorations. Finally, we present a simple prompting baseline and evaluate its performance, demonstrating its effectiveness in enhancing interaction understanding. We hope that our dataset and task will inspire further research in complex interactions with natural language, fostering the development of more advanced AI agents. # Introduction ... the teaching of language is not explaining, but training.
|
2307.12573#1
|
2307.12573#3
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#3
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
Ludwig Josef Johann Wittgenstein, Philosophical Investigations In recent years, there has been a growing interest in constructing AI agents capable of simulating and supporting human-like interactions across various domains. Notably, some agents have exhibited exceptional performance, surpassing human abilities in games like MOBA, StarCraft, poker, and Go. Building on the advancements in Large Language Models (LLMs), researchers have extended agent interactions to incorporate natural language. For instance, Park et al. [17] have introduced generative agents that engage in free-form interactions using natural language, thereby creating virtual worlds where agents reside and even demonstrate spontaneous activities such as hosting parties. Similarly, Liu et al. [12] have developed simulated societies in which LLM-powered agents engage in the virtual world and can support discussions of social problems.
|
2307.12573#2
|
2307.12573#4
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#4
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
These recent developments hold promise for advancing AI agents by leveraging natural language as an interactive tool, enabling them to exhibit more human-like behaviors. Furthermore, the exploration of phenomena resulting from endowing agents with more powerful language abilities for interaction can offer valuable insights. As discussed in the Philosophical Investigations, Ludwig Josef Johann Wittgenstein emphasized that teaching language is a form of training rather than mere explanation. General human communication is similar to engaging in a language game. Language serves as a fundamental tool for human interaction with the environment, facilitating the transmission of information, communication, negotiation, and cooperation within human groups, and contributing to the overall functioning of society. While the relationship between language and intelligence remains an open question, it is always worth exploring the potential evolution of more powerful and autonomous agents that can interact using natural language. Going further with agent interactions, we have yet to fully enable sufficient openness and freedom in the interactions between agents and the world. Existing approaches have often imposed constraints on agent interactions, leading to limited complexity and diversity in their capabilities. These constraints arise from the lack of interactions involving novel objects and multiple characters. While some prior research has explored language-based interaction abilities in generative agents [17], their diversity remains restricted, focusing on a limited range of interactable objects. Additionally, previous works have primarily concentrated on two-character communication without considering implicit intentions conveyed through complex interactions. Such interactions fail to encompass nuanced behaviors (e.g., refusal, persuasion, group decision making, coalition building) akin to real-time communications involving multiple characters. To address this challenge, we draw inspiration from tabletop role-playing games (TRPGs) and introduce a Game Master (GM) role into the agent's world model. TRPGs inherently offer highly complex and diverse interactions through natural language, involving multiple players in intricate and grounded multi-character scenarios. The GM oversees the game, provides scenario details, understands characters' intentions, and offers feedback on player actions, aligning with the requirements for a more comprehensive world model. Constructing and introducing a virtual GM capable of handling complex interactions with real humans could significantly enhance the feedback given to agents. However, existing benchmarks in TRPG-related research lack the scope needed to develop a virtual GM that compensates for world model defi
|
2307.12573#3
|
2307.12573#5
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#5
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
ciencies. Current virtual GM works explore only short and simple interactions over a limited number of rounds, lacking sufficient complexity. For instance, previous works have been derived from play-by-post forums [15, 3], where players contribute by writing and posting their responses on the forum. This asynchronous online communication introduces significant delays, with players often waiting for hours or even weeks to receive responses. As a result, data collected from such forums struggle to capture the vibrant and nuanced grounded semantics characteristic of real-time human interactions. Moreover, the forum-based communication format tends to encourage players to respond to the immediate turn and provide formal written replies, thereby limiting the richness and groundedness of expressions that can be observed in real-time interactions with multiple characters. Consequently, previous works derived from forum data do not fully represent the diversity and complexity found in real-world multi-character interactions. More comprehensive and realistic benchmarks are needed to support the development of effective virtual GMs and address the deficiencies in agent world models. In this paper, we take the first step towards enhancing the world model for agents by integrating a virtual GM role capable of handling complex real-time interactions with multiple characters. We propose a benchmark, named Tachikuma, designed to encourage the design of virtual GMs that can effectively handle these complex interactions, infer characters' intentions, and provide accurate feedback to the corresponding characters. Our benchmark consists of two components: a Multiple character and novel Object based interaction Estimation (MOE) task and a supporting dataset. In MOE, models are presented with intricate contexts extracted from TRPG log data, capturing real-time communications during gameplay. The objective is to infer character intentions and identify corresponding interactions, typically represented as skill checks, judged by a GM. The dataset supports the MOE task by providing long and intricate contexts from game logs, featuring interactions among multiple characters. The complexity of interactions among multiple characters, grounded in natural language, makes MOE a valuable testbed for evaluating the abilities of virtual GMs. Furthermore, in our dataset, we collect complex and long contexts with diverse real-human interactions from the game logs. Our dataset differs from conventional play-by-post forum data collection
|
2307.12573#4
|
2307.12573#6
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#6
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
methods. Instead, we utilize data extracted from a Chinese TRPG forum1. These forum records, compiled by GMs after the game ends, consist of voice recordings or real-time chat logs. This data source overcomes the limitations of play-by-post data collection, enabling us to extract long contexts with complex semantics similar to real interactions. As these logs capture immediate communications, the interactions also exhibit higher groundedness, resulting in more vibrant and realistic responses akin to everyday conversations, as demonstrated in Fig. 2. Moreover, our dataset encompasses not only the popular DND rules [6] but also a wide range of diverse game rules, including Call of Cthulhu (COC) [14], Pathfinder 2 (PF2) [2], Savage Worlds (SW) [7], etc. This diversity enhances the complexity and variety of our dataset. Building upon this dataset, we introduce the MOE task, which consists of 1,003 context sections extracted from the game logs. Each section represents a complete adventure with multiple turns, showcasing intricate semantics. As shown in Tab. 1, MOE includes an average of 32.12 turns per context excerpt, in contrast to previous works that typically involve only one turn. The number of possible answers for characters and skills varies depending on the context, ranging from one to eleven.
|
2307.12573#5
|
2307.12573#7
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#7
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
Additionally, specific game rules necessitate different skill categories for answers. For instance, under the DND rule, there are 51 potential skills. These factors collectively make MOE a challenging task for AI agents. The agent must demonstrate a comprehensive understanding of the complex interactions, emulating human-like comprehension. To provide a comprehensive assessment, we report the F-score as the final metric, separately for the predicted characters and the overall intention answers. Evaluating character predictions reflects the accuracy of methods in inferring players' intentions. Simultaneously, evaluating overall answers offers insights into the understanding of both character intentions and the corresponding interactions. Finally, we present a three-step prompting baseline for constructing an agent capable of handling interactions like a real-human GM in TRPGs. Our simple baseline serves to demonstrate the value of our task and dataset in understanding complex interactions. Our method incorporates prompts specifically related to existing characters, their intentions, and the associated skill checks. By utilizing these prompts, we guide LLMs in gradually comprehending the intricate interactions that occur between players. We thoroughly evaluate our baseline method and compare its performance with other prompting methods utilizing various LLMs on the MOE task. The experimental results indicate that the MOE task is solvable but still leaves large room for further improvement. Furthermore, leveraging the answers obtained from the MOE task, we employ LLMs to generate responses that simulate a real-human GM in the games. To evaluate the quality of these generated responses, we invite numerous volunteers to provide subjective evaluations. The experimental results demonstrate that incorporating the improved understanding ability of the agent leads to higher levels of factual correctness, naturalness, and groundedness in the generated responses, closely resembling real-human interactions. These results further underscore the significance of understanding ability in constructing proficient agents and highlight the importance of our benchmark. We hope our dataset and benchmark serve as valuable resources that will inspire the research community to delve into the understanding of complex interactions and contribute to the development of more capable AI agents. Our contributions can be summarized as follows:
|
2307.12573#6
|
2307.12573#8
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#8
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
1. We introduce a Multiple character and novel Object based interaction Estimation (MOE) task, specifically addressing challenges in handling complex interactions like a real-human GM. This task serves as a valuable testbed for evaluating approaches to constructing virtual GMs and contributes to advancements in developing more realistic agents. 2. We collect a dataset for MOE to address the limitations in exploring long contexts and intricate multi-character interactions in real-time communications. This dataset bridges a crucial gap in current research, offering a comprehensive resource for analyzing and understanding these complex interactions.
|
2307.12573#7
|
2307.12573#9
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#9
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
3. We introduce a prompting baseline and conduct a comprehensive evaluation of different prompting methods using a range of Large Language Models (LLMs) on the MOE task. The experimental results indicate that the MOE task is solvable, yet there is ample room for further improvement. 4. We conduct subjective evaluations based on the answers obtained from MOE. These evaluations show that better performance on MOE leads to higher levels of factual correctness, naturalness, and groundedness in the generated responses, which are crucial factors for creating vivid agents. These results further underscore the significance of our dataset and task in improving AI agents.
|
2307.12573#8
|
2307.12573#10
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#10
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
# 1www.goddessfantasy.net # 2 Related Work Tabletop Role-Playing Games (TRPGs) are immersive games where players assume different character roles in fictional settings, guided by a Game Master (GM) who provides relevant information to progress the game. These games involve diverse and complex grounded natural language interactions among multiple characters with distinct personalities and backgrounds. Due to this diversity and complexity, TRPGs serve as valuable testbeds [22, 13, 3] for research in Natural Language Processing (NLP). Several works have explored NLP problems using TRPG game records. For instance, Louis et al. [13] proposed predicting character actions based on previous interactions. Other works [19, 16] focused on generating flexible dialogue or descriptions in accordance with varying contexts or specific rules in TRPGs. Furthermore, recent studies have commonly utilized play-by-post data from popular DND forums, providing a substantial corpus for research. This play-by-post format allows players to interact by posting replies, reducing participation barriers and generating a significant number of game rounds on the forum. Callison-Burch et al. [3] collected an extensive corpus from these forums, resulting in the creation of TRPG dialogue datasets.
|
2307.12573#9
|
2307.12573#11
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#11
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
Subsequently, Pei et al. [25] filtered the dataset and developed a guidance generation task called GANDALF. Given the context from a single round, GANDALF predicts the guidance provided by the DM under the DND rule. Zhu et al. [26] further extended the approach by constructing a more comprehensive and larger dataset using the play-by-post format in Discord, a messaging program. This dataset, named FIREBALL, contains additional game details such as dialogues, states, combat procedures, etc. It serves as a versatile testbed for language generation, particularly focusing on generating commands for games, including combat actions, checks, and dice rolls.
|
2307.12573#10
|
2307.12573#12
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#12
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
In this paper, we address the limitations of previous works in exploring more complex interactions. We introduce the Multiple character and novel Object based interaction Estimation (MOE) task and a supporting dataset as valuable resources for interaction understanding for agents. Unlike previous approaches that rely on play-by-post formats, our dataset leverages game logs obtained from real-time interactions, providing more grounded and complex semantics. MOE requires methods to answer questions about the next acting characters and their corresponding actions. This task and dataset open up new possibilities for improving agents with enhanced factual correctness, naturalness, and groundedness. # 3 Multiple character and novel Object based interaction Estimation # 3.1 Task Overview We introduce a novel task, Multiple character and novel Object based interaction Estimation (MOE), which presents a challenge to comprehend complex interactions within long contexts. The input contexts, illustrated in Fig. 2, involve 11 turns encompassing intricate interactions among three players and an NPC. In the MOE task, the primary objective is to accurately determine the character who will act in the next turn and identify the corresponding actions. It is important to note that actions in Tabletop Role-Playing Games (TRPGs) can be simplified and classified as various pre-defi
|
2307.12573#11
|
2307.12573#13
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#13
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
ned skills. Game Masters (GMs) play a crucial role in guiding players to perform correct skill checks during gameplay, so GMs intuitively annotate all intended actions, which are recorded in the game log. As a result, the game logs naturally contain labeled character intentions, enabling MOE to leverage this data to construct intention understanding tasks with accurate intention labels. Moreover, there are two primary challenges that need to be addressed in MOE. Both challenges require methods to demonstrate a deeper understanding of the input interactions.
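To make the input/output format of the task concrete, the following is a minimal, hypothetical sketch of how one MOE sample could be represented; the field names and serialization are assumptions made for illustration and are not specified by the paper, though the content mirrors the example in Fig. 2.

```python
# Hypothetical serialization of one MOE sample: a multi-turn context plus the
# ground-truth (character, skill) check tuples annotated from the GM's records.
# Field names are assumed for illustration only.
sample = {
    "rule": "DND",
    "context": [
        {"turn": 1, "speaker": "Bill", "utterance": "Great! Just a little more!"},
        {"turn": 2, "speaker": "GM", "utterance": "The brown bear lunges at Maurice..."},
        # ... up to the last observed turn in the excerpt
    ],
    "answers": [
        {"character": "Maurice", "skill": "Strength"},
    ],
}
```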
|
2307.12573#12
|
2307.12573#14
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#14
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
The first challenge revolves around comprehending the behaviors and intentions of multiple characters in complex scenarios. As depicted in Fig. 2, the current game scenario involves four characters: the brown bear, Bill, Elvis Zem, and Maurice. While all characters interact with one another, only one player intends to perform an action and needs to undergo a skill check in the upcoming turn. In the first turn, Bill expresses his disinterest in engaging in the fight.
|
2307.12573#13
|
2307.12573#15
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#15
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
Subsequently, Zem combines the electric spell with the sickle. Notably, the spell was cast in Turn 4 and its effects were explained by the GM in Turn 10. Thus, the spell's execution has already taken place and should not be reevaluated after Turn 10. The second challenge is understanding the game rules and aligning them with the characters' movements. In Fig. 2, Maurice intends to escape from the bear's attack. However, there
|
2307.12573#14
|
2307.12573#16
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#16
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
(Figure panels: inputs range from brief dialogue, the latest turn, and short contexts with brief records to long contexts of complex interactions with multiple characters and grounded language; outputs are character and skill check answers, e.g., Lu: Investigation, Timothy: Perception, Robin: History, Lucy: Perception.) Figure 1: Examples of different tasks and datasets based on game logs of TRPG. Our MOE focuses on the understanding of long and complex interactions with long contexts.
|
2307.12573#15
|
2307.12573#17
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#17
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
is no specific "escape" operation in the skill checks defined by the DND rules. Instead, the bear utilizes its strength to grapple Maurice in the game, and Maurice must also check their strength to contest against the bear. To answer this skill check, methods need to comprehend the intentions and movements of characters and, based on the game rules, infer the appropriate check items for the current turn, akin to a real human. # 3.2 Evaluation To provide a comprehensive assessment of context understanding in MOE, we evaluate the predicted character names and the overall predictions separately.
|
2307.12573#16
|
2307.12573#18
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#18
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
Specifically, we measure the average Character Precision (CP) and Character Recall (CR) for character names, as well as the average Skill Precision (SP) and Skill Recall (SR) for both character names and associated skills. Additionally, we compute the Character F-score (CF) for character names and the Skill F-score (SF) for character names with associated skills.

$CP = \frac{1}{K}\sum_i t_c^i / p^i$ (1)
$CR = \frac{1}{K}\sum_i t_c^i / g^i$ (2)
$SP = \frac{1}{K}\sum_i t_s^i / p^i$ (3)
$SR = \frac{1}{K}\sum_i t_s^i / g^i$ (4)
$CF = 2 \cdot CP \cdot CR / (CP + CR)$ (5)
$SF = 2 \cdot SP \cdot SR / (SP + SR)$ (6)
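These definitions translate directly into per-sample counts. A minimal sketch of the computation follows, assuming predictions and ground truth are given as lists of (character, skill) tuples per sample; the function name and data layout are illustrative, since the paper only specifies the formulas.

```python
from typing import Dict, List, Tuple

Pair = Tuple[str, str]  # (character name, skill name)

def moe_scores(preds: List[List[Pair]], golds: List[List[Pair]]) -> Dict[str, float]:
    """Compute CP/CR/SP/SR and the derived F-scores CF/SF over K samples.

    preds[i] / golds[i]: predicted and ground-truth (character, skill) tuples
    for the i-th sample. Assumes at least one sample is provided.
    """
    K = len(preds)
    cp = cr = sp = sr = 0.0
    for pred, gold in zip(preds, golds):
        gold_chars = {c for c, _ in gold}
        gold_pairs = set(gold)
        tc = sum(1 for c, _ in pred if c in gold_chars)   # correct character names
        ts = sum(1 for pair in pred if pair in gold_pairs)  # correct (char, skill) pairs
        p, g = len(pred), len(gold)
        cp += tc / p if p else 0.0   # character precision term
        cr += tc / g if g else 0.0   # character recall term
        sp += ts / p if p else 0.0   # skill precision term
        sr += ts / g if g else 0.0   # skill recall term
    CP, CR, SP, SR = cp / K, cr / K, sp / K, sr / K
    CF = 2 * CP * CR / (CP + CR) if CP + CR else 0.0
    SF = 2 * SP * SR / (SP + SR) if SP + SR else 0.0
    return {"CP": CP, "CR": CR, "SP": SP, "SR": SR, "CF": CF, "SF": SF}
```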
|
2307.12573#17
|
2307.12573#19
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#19
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
Input context (Figure 2 excerpt): [Turn 1] Bill: "Great! Just a little more!" I cheered from afar on the carriage. [Turn 2] GM: The brown bear clumsily stretches out its paws, lunging at Maurice. Although the movement is stiff and slow, the strength behind it is definitely not to be underestimated. (Annotation: Bill shows disinterest in engaging in the fight.)
|
2307.12573#18
|
2307.12573#20
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#20
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
Elvis Zem: Takes a step forward, then slashes at the brown bear with his sickle, electricity sparking off its blade. Maurice: Dodges the bear's claw and circles to its back. [Turn 6] GM: Maurice imagines how he would gracefully dodge the bear's claws and get to its back, only to find himself pinned to the snow by the bear. Bill: "Oh! A spell!" Bill squints his eyes, "An electric spell, who was your master, Mr. Zem?" Elvis Zem:
|
2307.12573#19
|
2307.12573#21
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#21
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
"My teacher was a nobody, and | only know a few minor spells.â Elvis Zem: combines the electric spell with the sickle. The spell was qaeg cast in Turnd and its effects were explained by GM in Turn10 Understand interactions : , Estimate characters & actions Prediction for [Turn12]: Po@: tei, RRM: BA; FOG: RiwM, RALAE: Ue FOB: RRM Bea Character name: Bill, Check skill Character name: Maurice, Check sl Character name: Elvis Zem, Check Ground Truth: #9: Sivii, HestHe: Dat S rurnto] GM: sgaeaeie senna ca), Sear GM: The brown bear is severely injured by Elvis's strike, letting out a painful roar. tee Turntt] Sai: BEAR RS (GUERIN) Maurice: Desperately tries to escape from the bear's grip (using Escape to counter the grapple). Maurice: was caught by a bear and needs to break free from the bear's restraints to confront it Character name: Maurice, Check skill: Strength [Turn12] Gut: ? Figure 2: Example of MOE. In the given context, a scenario unfolds where three players ï¬
|
2307.12573#20
|
2307.12573#22
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#22
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
nd themselves facing a formidable brown bear in combat. Each character actively participates in the battle, except for Bill, who observes from the safety of a carriage. During the encounter, Zem casts a spell; however, it is important to note that the skill check for this particular spell has already been performed after Turn 4 and was explained by the DM in Turn 10. Consequently, the only character currently requiring a skill check is Maurice. Despite his intention to escape from the bear, the DND rule does not include a specifi
|
2307.12573#21
|
2307.12573#23
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#23
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
c "escape" skill. In such a predicament, Maurice must utilize his strength to resist the bear's attempt to grapple him. As a result, the DM advises him to perform a strength check in adherence to the DND rule. Furthermore, we also present the predicted results from GPT-3.5 utilizing template prompts. The results demonstrate a lack of effective context comprehension and highlight the challenges in understanding complex interactions among agents. In the equations above, i indicates the i-th sample, t_c represents the number of correctly predicted character names, t_s denotes the number of correct predictions for both character names and associated skills, p indicates the total number of predicted tuples, g represents the number of answers in the ground truth, and K represents the total number of evaluation data samples. The metrics CP and CR are employed to evaluate the understanding of character intentions, focusing on the accuracy of predicting the characters about to take action. The proposed methods are required to provide correct character predictions, thereby achieving higher values for CP and CR. Then, to achieve higher SP and SR, the proposed methods must accurately comprehend both character intentions and the rules of the game. It is worth noting that if a model consistently predicts all characters as outputs, it may increase recall but reduce precision. Conversely, if a method tends to predict only one character, it may achieve higher precision but lower recall. To strike a balance between these factors, we employ F-scores as the final evaluation metrics in our experiments. The F-scores consider both precision and recall, providing a comprehensive measure of performance. Figure 3: Distribution of character number in MOE labels. # 3.3 Skill Check Annotation In Tabletop Role-Playing Games (TRPGs), skill checks directly indicate the players' intentions in the game and play a crucial role in determining the success or failure of character actions. When a player wishes to have their character perform an action involving a skill, such as combat, persuasion, or searching for clues, the game models or rules provide a difficulty level or target number for the Figure 4:
|
2307.12573#22
|
2307.12573#24
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#24
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
Distribution of skill names in MOE labels of the contexts within the DND rule: initiative (ini), intelligence (int), perception (per), arcana (arc), insight (ins). action. This difficulty level represents the challenge or desired level of success for the action. The Game Master (GM) assumes the responsibility of judging and guiding the player in performing the skill check based on the character's proficiency associated with the action. The GM then rolls a dice to determine the outcome. In our task, we annotate the skill checks performed by players' characters during the games based on the semantics or recorded results provided by the GM. It is important to note that some skill checks are not predictable based solely on previous interactions. For example, in Call of Cthulhu (COC) games, perception checks may be prompted by the GM when players enter specifi
|
2307.12573#23
|
2307.12573#25
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#25
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
c locations. These checks are closely tied to the game models and cannot be predicted in advance. Additionally, certain checks require additional calculations based on character attributes or cards, such as determining damage based on hit points or factoring in armor class to reduce damage. These calculations cannot be inferred solely from the game logs, so we also remove these checks in MOE. Thus, we have excluded any checks that are unpredictable and included only those check items that can be inferred from the game logs. For example, the COC logs contain 61 check items (including skills and attributes) that can be verified, while the DND logs contain 25 such items. Further details regarding the check items will be provided in the supplementary material. # 3.4 Context Excerpt Following the labeling of check items in the game logs, we proceed to excerpt relevant contexts associated with each check. Our selection of excerpted contexts is guided by three key principles to ensure the inclusion of long and diverse interactions. First, we ensure that the excerpted contexts encompass complete events within the game, such as the entire process of encountering enemies or the detailed information and clues leading up to the exploration of certain locations. This ensures that the extracted contexts provide a comprehensive understanding of the events. Second, we require that the excerpted contexts involve at least two characters who are present in the current scenarios of the game. This criterion allows for the examination of interactions between multiple characters, providing a more complex context for analysis. Lastly, we ensure that at least one character within the excerpted contexts has a skill check that can be predicted. This principle guarantees that the selected contexts contain situations where skill checks can be inferred based on the information available up to the last turn. By adhering to these principles, we ensure that the contexts support the understanding of the complex interactions and enable the inference of characters' intentions in subsequent turns.
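As a rough illustration, the three excerpting principles can be expressed as a single filter over candidate excerpts; the excerpt representation and the field names below are hypothetical stand-ins, since the paper states the principles only in prose.

```python
# Illustrative filter for the three excerpting principles above.
# The dictionary keys are assumed for illustration, not taken from the paper.
def keep_excerpt(excerpt: dict) -> bool:
    complete_event = excerpt.get("event_complete", False)                 # principle 1
    enough_characters = len(excerpt.get("present_characters", [])) >= 2   # principle 2
    predictable_check = len(excerpt.get("predictable_checks", [])) >= 1   # principle 3
    return complete_event and enough_characters and predictable_check
```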
|
2307.12573#24
|
2307.12573#26
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#26
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
# 3.5 Statistical Analysis We present the statistical results of answers in MOE in Tab. 1. In total, we have extracted and labeled 1,003 sets of contexts and corresponding skill checks, which serve as the input context and ground truth for our task. The average number of turns in our dataset is 32.12, indicating its complexity compared to previous works that primarily focused on single-turn responses. Furthermore, we provide the distributions of skill check labels of the Dungeons and Dragons (DND) logs in the MOE task, as illustrated in Fig. 4 and Fig. 3. The number of characters involved in skill checks varies from 1 to 11, with an average of 1.696 characters per skill check.
|
2307.12573#25
|
2307.12573#27
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#27
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
This reflects the complexity of multi-character interactions within our dataset, which is close to real-human communication in the games. Additionally, the items for skill checks exhibit diversity, highlighting the varied interactions between players. Both sets of statistical results underscore the value of our task as a comprehensive testbed for understanding complex interactions in TRPGs.

| Dataset | Rules |
|---|---|
| DDD Corpus [13] (~4,430,000) | DND |
| DND PBP [3] | DND |
| GANDALF [25] | DND |
| Ours | DND, COC, PF, SW |

Table 1: Dataset statistics. Our dataset exhibits a comparable scale to previous works, while also encompassing a higher diversity of game rules. # 4 Dataset To support our MOE task with more grounded and complex data, we have collected a new dataset.
|
2307.12573#26
|
2307.12573#28
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#28
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
It is sourced from a Chinese TRPG forum2. This forum hosts a wide array of game records uploaded by users, spanning different rule systems, e.g., DND, COC, PF, SW, etc. Unlike play-by-post forums [3], where players interact by writing and posting responses, the game logs in this forum are compiled and summarized by the Game Masters (GMs) after the game sessions3. Besides, in play-by-post scenarios, interactions between players are not immediate, and the feedback from the next player may not appear until several days or even weeks later. In contrast, the majority of game logs in our dataset are derived from instant messaging platforms, including voice and text communication. This characteristic allows for the capture of abundant immediate player responses, closely mirroring daily conversations with grounded language interactions. Consequently, our dataset provides more grounded semantics within real-time communication, making it conducive to exploring AI agents. Statistically, our dataset comprises 95 sets of records from different games with various rule systems. It encompasses a total of 647,480 Chinese words, as indicated in Tab. 1. In summary, our dataset not only surpasses previous works in terms of data diversity, groundedness, and complexity but also matches or exceeds their scale.
|
2307.12573#27
|
2307.12573#29
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#29
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
# 5 Think Before Speak prompting method We propose a three-step prompting method called "Think Before Speak" (TBS), which aims to guide Large Language Models (LLMs) in comprehending complex and lengthy contexts of interactions more accurately. Unlike simple template-based prompting approaches [11, 24, 8], our method takes into consideration the specific properties of Tabletop Role-Playing Games (TRPGs) and incorporates the principles of Chain of Thought (CoT) [21, 9] in its prompting design. In the generated check item, the answers consist of character names and corresponding skill names. However, directly expecting the models to produce accurate character and skill names is a challenging task. Using a single-step template prompting approach may result in LLMs generating characters that do not exist in the given contexts, characters with no relevant actions, mismatches between characters and their associated skills, or skills that are not defined within the game rules. To address these challenges, our method guides LLMs through a three-step process. First, the models are prompted to identify the characters present in the current game scenarios. Then, they are encouraged to consider the intentions of the characters and list those who are likely to take action or are engaged in ongoing movements. Finally, we provide the models with a comprehensive set of possible skills derived from the game rules, allowing them to select the most appropriate character-skill combinations that the GM may ask the players to check. This gradual guidance facilitates more accurate and context-aware responses from the LLMs.
|
2307.12573#28
|
2307.12573#30
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#30
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
Specifically, in the first step of our prompting approach, we guide the language models by providing a prompt such as "Based on the TRPG game record provided above, identify the characters or NPCs that exist in the current scenarios." This prompts the language model to recognize and understand the characters present in the given contexts. In the second step, we prompt the language models with a question like "Which character or NPC is expected to carry out activities next?" This encourages the models to delve deeper into the semantics of the contexts and infer the intentions of the characters. For the final step, we provide LLMs with all possible skills defined in the TRPG rules and guide 2www.goddessfantasy.net 3Most users whose logs are used in our study have agreed and provided their informed consent. We are trying to contact all remaining users so that they are informed of, and agree to, the participation of their data in this research.
|
2307.12573#29
|
2307.12573#31
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#31
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
| Prompting Method | GPT-3.5 CF | GPT-3.5 SF | GPT-4 CF | GPT-4 SF |
|---|---|---|---|---|
| template prompt | 42.02 | 15.30 | 43.21 | 15.93 |
| template prompt + zcot | 39.28 | 14.46 | 42.45 | 16.25 |
| char prompt + skill prompt | 50.43 | 14.78 | 53.55 | 16.79 |
| pre-char prompt + char prompt + statistic predictor | 53.32 | 5.03 | 57.94 | 5.03 |
| pre-char prompt + char prompt + skill prompt + zcot | 50.50 | 12.88 | 53.45 | 17.39 |
| pre-char prompt + char prompt + skill prompt | 53.32 | 15.91 | 57.94 | 20.02 |

Table 2: Comparison of different prompting methods and LLMs. Results show that our task is solvable but requires higher understanding ability for grounded and complex semantics. them to generate character names that correspond to the potential skill checks. Our prompts for this step include phrases such as "What skills are required for the mentioned characters to carry out their respective activities?" Furthermore, to facilitate comparison with other benchmarks, we extend the TBS approach to also generate utterances that simulate a real-human GM. Given the predictions from the TBS model, LLMs are required to generate responses in the tone and style of a GM. We achieve this by using prompts such as "As a game master for a TRPG game, generate responses based on the provided character names and the corresponding skills." # 6 Experimental Results In this section, we provide a detailed discussion, comprehensive evaluation, and analysis of our benchmark. Baseline Methods: As our baseline, we employ LLMs with template prompting, which have been utilized in previous studies.
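A minimal sketch of the three-step TBS prompting flow described above is given below; `call_llm` is a placeholder for any chat-completion API, and the prompt wording paraphrases the prompts quoted in the text rather than reproducing them exactly.

```python
# Sketch of the three-step "Think Before Speak" (TBS) prompting flow.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM API of your choice")

def tbs_predict(game_record: str, rule_skills: list[str]) -> str:
    # Step 1: identify the characters/NPCs present in the current scenario.
    chars = call_llm(
        f"{game_record}\n\nBased on the TRPG game record provided above, "
        "identify the characters or NPCs that exist in the current scenarios."
    )
    # Step 2: infer which of them is expected to act in the next turn.
    acting = call_llm(
        f"{game_record}\n\nCharacters present: {chars}\n"
        "Which character or NPC is expected to carry out activities next?"
    )
    # Step 3: map the intended actions onto skills defined by the game rule.
    checks = call_llm(
        f"{game_record}\n\nActing characters: {acting}\n"
        f"Possible skills under this rule: {', '.join(rule_skills)}\n"
        "What skills are required for the mentioned characters to carry out "
        "their respective activities? Answer as (character, skill) pairs."
    )
    return checks
```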
|
2307.12573#30
|
2307.12573#32
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#32
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
We specifically adopt two popular LLMs, GPT-3.5 and GPT-4 [1]. By incorporating different language models, we can thoroughly assess the performance of our prompting benchmark. Furthermore, recent research [21, 9, 20] has demonstrated the efficacy of Chain-of-Thought (CoT) methods in improving understanding capabilities. To compare with this approach, we include the zero-shot CoT (zcot) method [9] in our evaluation. Additionally, to demonstrate the ability to infer check items, we introduce a statistical predictor for check items. Given the predicted characters, we select the skills with the highest probability based on the statistical distribution observed in our dataset. This statistical predictor serves as a lower bound for generating check items and also reveals the impact of any biases present in our dataset. Evaluations: To evaluate the effects of the MOE and TBS frameworks on interaction understanding, we introduce the concept of a virtual Game Master (GM) in TRPGs. The virtual GM serves as a simulation of a real-human GM, possessing the ability to comprehend interactions, infer intentions, interact with players, and provide guidance for their actions.
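For concreteness, a simple version of this statistical predictor could look like the sketch below, which always assigns the most frequent skill observed in the training annotations; the data layout and the exact conditioning (global rather than per-character frequencies) are assumptions made for illustration.

```python
from collections import Counter
from typing import Iterable, List, Tuple

# Sketch of the statistical skill predictor used as a lower bound: given the
# characters predicted to act, assign the skill that occurs most often in the
# dataset's ground-truth checks. Data layout is an assumption for illustration.
def fit_skill_prior(train_answers: Iterable[List[Tuple[str, str]]]) -> str:
    """train_answers: iterable of per-sample lists of (character, skill) tuples."""
    counts = Counter(skill for answers in train_answers for _, skill in answers)
    return counts.most_common(1)[0][0]  # the single most frequent skill

def statistical_predict(predicted_characters: List[str], top_skill: str):
    return [(char, top_skill) for char in predicted_characters]
```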
|
2307.12573#31
|
2307.12573#33
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#33
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
This role fulfills our requirements for agents that are able to understand complex interactions. By incorporating the virtual GM, we create a platform to assess the agents' understanding of complex interactions and their ability to navigate diverse scenarios. In detail, we generate GM utterances using both ground truth information from C2A and predictions from TBS. The generation process follows the methodology outlined in [26, 11], which leverages LLMs, template prompts, and additional inputs for characters and skills. Rather than relying on captioning-based metrics as in previous works [25, 26], we employ subjective evaluation conducted by real-human players.
|
2307.12573#32
|
2307.12573#34
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#34
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
Given the diversity of descriptions in grounded language, there is no definitive ground truth for evaluating the responses of GMs. Subjective evaluation provides more valuable insights into the degree of realism in the generated utterances. Following [25, 18, 4, 23, 10], we invite volunteers to score the responses based on three factors: naturalness, groundedness, and factual correctness. Naturalness assesses the extent to which the generated responses resemble human-like language. Groundedness measures the degree to which the responses effectively employ grounded language similar to everyday communication. Lastly, factual correctness evaluates whether there are any factual errors or inconsistencies with the given contexts.
|
2307.12573#33
|
2307.12573#35
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#35
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
# 6.1 Objective Evaluation Comparison of Prompting Methods: We conduct a comparison between our proposed method and different prompting approaches. The results, shown in Tab. 2, reveal the effectiveness of our step-wise prompting approach compared to baselines such as zero-shot CoT and the statistical predictor. The experimental results demonstrate that each step in our prompting process contributes significantly, leading to improved F-scores for both characters and skills. This highlights the enhanced capability of LLMs in comprehending the given contexts. Furthermore, due to the distribution bias present in our dataset, the statistical predictor proves to be useful, albeit with considerably lower performance compared to our proposed method and other prompting methods. This reveals the lower performance boundary in predicting skill labels. Furthermore, in line with previous studies [9, 5, 21], the incorporation of zero-shot CoT has demonstrated improvements in the performance of LLMs across various tasks. However, when applied to the MOE task, the observed enhancements are not as substantial. Since MOE involves more grounded semantics and complex interactions, it presents a challenging scenario for existing prompting methods and remains an unsolved problem that requires further investigation. Comparison of different language models: We further investigate the impact of different LLMs on the performance of our prompting methods. With advancements in LLMs, overall understanding and reasoning capabilities have significantly improved. As shown in Tab. 2, employing more advanced language models leads to higher performance on the MOE task. In addition to the effectiveness of the prompting methods, the improvements in the LLMs themselves are also beneficial in comprehending the intricacies of complex and grounded interactions. The experimental results reveal that our task is solvable, yet there remains ample room for further exploration and improvement. # 6.2 Subjective Evaluation We conducted a subjective evaluation by recruiting real-human TRPG players as volunteers and collecting their responses through questionnaires. The average scores for the different factors, namely naturalness, groundedness, and factual correctness, were computed following established guidelines [25, 18, 4, 23, 10]. The statistical results are presented in Fig. 5. Notably, methods that take into account the predictions or ground truth of MOE demonstrate higher performance across all evaluation factors. Generally, methods utilizing MOE labels outperform those using predicted labels.
|
2307.12573#34
|
2307.12573#36
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#36
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
Moreover, when considering MOE predictions, the methods achieve superior performance in generating virtual GM responses. This observation confirms that a higher understanding ability for complex semantics leads to more vivid and human-like responses from the agents. Additionally, it underscores the strong correlation between MOE performance and virtual GM performance, highlighting the importance of MOE in the pursuit of improved agent generation. Besides, our prompting method demonstrates superior performance in all evaluated factors. Specifically, our method exhibits significant improvements in factual correctness compared to the baseline methods. Furthermore, in terms of groundedness and naturalness, our method showcases comparable or even better performance than other methods. These results indicate that our method achieves enhanced understanding ability and is capable of generating improved utterances as GM descriptions. However, there is still ample room for improvement in terms of groundedness and naturalness. The generated utterances may occasionally be overly verbose and lack the same level of vividness as those produced by real humans. This performance gap motivates further exploration of more effective methods for constructing advanced AI agents.
|
2307.12573#35
|
2307.12573#37
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#37
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
# 7 Conclusion This paper proposes a new dataset, task, and benchmark to enhance the understanding ability of AI agents in dealing with complex interactions involving multiple characters. Existing works in this field have limitations, particularly their reliance on forum-based data collection and their lack of consideration of complex and grounded semantics in real-time communications. To overcome these limitations, we formalize a new task named Multiple character and novel Object based interaction Estimation (MOE), providing a testbed for the understanding ability of agents and leading to further improvements in agents' factual correctness. We also introduce a dataset to support the MOE task, which is derived from real-time game logs in tabletop role-playing games (TRPGs) and provides
|
2307.12573#36
|
2307.12573#38
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#38
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
(Figure 5 compares real-human responses with template prompt, CSA label, and TBS prediction settings under GPT-3.5 and GPT-4, scored on factual correctness, groundedness, and naturalness.) Figure 5: Subjective evaluation by volunteers. With MOE labels or predictions from our method, LLMs generate responses that are closer to real-human responses in all three evaluation factors. a richer and more complex context capable of supporting MOE tasks. Additionally, we introduce a prompting benchmark designed specifically to refi
|
2307.12573#37
|
2307.12573#39
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#39
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
ne the interaction capabilities of AI agents in TRPGs. This benchmark focuses on understanding complex interactions and generating vibrant Game Master utterances. The three-stage generation process, which includes game checks and GM utterance generation, has been evaluated both objectively and subjectively. The results clearly indicate that this approach significantly enhances the quality of AI responses within the TRPG context. We hope that this work will serve as inspiration for the AI community to further explore and enhance the understanding of complex grounded interactions and advance the interaction ability of AI agents.
|
2307.12573#38
|
2307.12573#40
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#40
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
# 8 Limitations and Social Impacts While the use of an AI agent in a tabletop role-playing game (TRPG) could revolutionize the way these games are played, providing consistent and unbiased decisions, there are potential limitations and social impacts to consider. One key limitation is the AI's ability to simulate human creativity, empathy, and adaptability, which are all fundamental to the role of a game master. For instance, the AI may not fully comprehend nuanced player interactions or adapt the game based on the players' emotional state. Additionally, there could be social implications, such as the potential reduction in human interaction and shared storytelling, which are often crucial elements of TRPGs. For players, part of the joy of a TRPG is the shared human experience, the unpredictable responses, and the subtle non-verbal cues, which an AI might not replicate. The introduction of an AI game master could also result in job loss in professional game-mastering circles. Despite the AI's potential to provide a consistent and more accessible gaming experience, these human and social elements may be irreplaceable in a TRPG context.
|
2307.12573#39
|
2307.12573#41
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#41
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
# References [1] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020. [2] Jason Bulmahn. Pathfinder Roleplaying Game: Advanced Player's Guide. Paizo, 2010. [3] Chris Callison-Burch, Gaurav Singh Tomar, Lara Martin, Daphne Ippolito, Suma Bailis, and David Reitter. Dungeons and dragons as a dialog challenge for artifi
|
2307.12573#40
|
2307.12573#42
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#42
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
cial intelligence. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9379–9393, Abu Dhabi, United Arab Emirates, Dec. 2022. Association for Computational Linguistics. [4] Alexandra DeLucia, Aaron Mueller, Xiang Lisa Li, and João Sedoc. Decoding methods for neural narrative generation. arXiv preprint arXiv:2010.07375, 2020. [5] Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, and Zhifang Sui. A survey for in-context learning. arXiv preprint arXiv:2301.00234, 2022.
|
2307.12573#41
|
2307.12573#43
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#43
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
[6] Gary Gygax and Dave Arneson. dungeons & dragons, volume 19. Tactical Studies Rules Lake Geneva, WI, 1974. [7] Shane Lacy Hensley, Clint Black, et al. Savage worlds. Studio 2 Publishing, 2008. [8] Jie Huang and Kevin Chen-Chuan Chang. Towards reasoning in large language models: A survey. arXiv preprint arXiv:2212.10403, 2022.
|
2307.12573#42
|
2307.12573#44
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#44
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
[9] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. arXiv preprint arXiv:2205.11916, 2022. [10] Yuanzhi Liang, Qianyu Feng, Linchao Zhu, Li Hu, Pan Pan, and Yi Yang. Seeg: Semantic energized co-speech gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10473–10482, 2022. [11] Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig.
|
2307.12573#43
|
2307.12573#45
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#45
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 55(9):1–35, 2023. [12] Ruibo Liu, Ruixin Yang, Chenyan Jia, Ge Zhang, Denny Zhou, Andrew M Dai, Diyi Yang, and Soroush Vosoughi. Training socially aligned language models in simulated human society. arXiv preprint arXiv:2305.16960, 2023. [13] Annie Louis and Charles Sutton.
|
2307.12573#44
|
2307.12573#46
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#46
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
Deep dungeons and dragons: Learning character-action interactions from role-playing game transcripts. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 708–713, 2018. [14] Howard Phillips Lovecraft. The call of Cthulhu. Lulu.com, 2016. [15] Lara J Martin, Srijan Sood, and Mark O Riedl. Dungeons and dqns: Toward reinforcement learning agents that play tabletop roleplaying games.
|
2307.12573#45
|
2307.12573#47
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#47
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
In INT/WICED@AIIDE, 2018. [16] Pax Newman and Yudong Liu. Generating descriptive and rules-adhering spells for dungeons & dragons fifth edition. In Proceedings of the 9th Workshop on Games and Natural Language Processing within the 13th Language Resources and Evaluation Conference, pages 54–60, Marseille, France, June 2022. European Language Resources Association. [17] Joon Sung Park, Joseph C O'Brien, Carrie J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein.
|
2307.12573#46
|
2307.12573#48
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#48
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442, 2023. [18] Manasvi Sagarkar, John Wieting, Lifu Tu, and Kevin Gimpel. Quality signals in generated stories. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 192–202, 2018. [19] Wai Man Si, Prithviraj Ammanabrolu, and Mark Riedl.
|
2307.12573#47
|
2307.12573#49
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#49
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
Telling stories through multi-user dialogue by modeling character relations. In Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 269–275, Singapore and Online, July 2021. Association for Computational Linguistics. [20] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022. [21] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022. [22] Nathaniel Weir, Ryan Thomas, Randolph D'Amore, Kellie Hill, Benjamin Van Durme, and Harsh Jhamtani.
|
2307.12573#48
|
2307.12573#50
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#50
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
Ontologically faithful generation of non-player character dialogues. arXiv preprint arXiv:2212.10618, 2022. [23] Pieter Wolfert, Nicole Robinson, and Tony Belpaeme. A review of evaluation practices of gesture generation in embodied conversational agents. IEEE Transactions on Human-Machine Systems, 52(3):379–389, 2022. [24] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. A survey of large language models. arXiv preprint arXiv:2303.18223, 2023. [25] Pei Zhou, Andrew Zhu, Jennifer Hu, Jay Pujara, Xiang Ren, Chris Callison-Burch, Yejin Choi, and Prithviraj Ammanabrolu. An AI dungeon master's guide: Learning to converse and guide with intents and theory-of-mind in dungeons and dragons. arXiv preprint arXiv:2212.10060, 2022.
|
2307.12573#49
|
2307.12573#51
|
2307.12573
|
[
"2212.10060"
] |
2307.12573#51
|
Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
|
[26] Andrew Zhu, Karmanya Aggarwal, Alexander Feng, Lara J Martin, and Chris Callison-Burch. Fireball: A dataset of dungeons and dragons actual-play with structured game state information. arXiv preprint arXiv:2305.01528, 2023.
|
2307.12573#50
|
2307.12573
|
[
"2212.10060"
] |
|
2308.03762#0
|
GPT-4 Can't Reason
|
arXiv:2308.03762v2 [cs.CL] 10 Aug 2023 # GPT-4 Can't Reason (Position Paper) Konstantine Arkoudas Dyania Health August 11, 2023 # Abstract GPT-4 was released in March 2023 to wide acclaim, marking a very substantial improvement across the board over GPT-3.5 (OpenAI's previously best model, which had powered the initial release of ChatGPT). Despite the genuinely impressive improvement, however, there are good reasons to be highly skeptical of GPT-4's
|
2308.03762#1
|
2308.03762
|
[
"2308.03762"
] |
|
2308.03762#1
|
GPT-4 Can't Reason
|
ability to reason. This position paper discusses the nature of reasoning; criticizes the current formulation of reasoning problems in the NLP community and the way in which the reasoning performance of LLMs is currently evaluated; introduces a collection of 21 diverse reasoning problems; and performs a detailed qualitative analysis of GPT-4's performance on these problems. Based on the results of that analysis, this paper argues that, despite the occasional flashes of analytical brilliance, GPT-4 at present is utterly incapable of reasoning. # 1.1 Introduction In early January I wrote a commentary1 presenting an informal evaluation of ChatGPT across a broad range of subject areas: conventional NLU, folk physics, information retrieval, pragmatics, theory of mind, spatial inference, simple logical reasoning, and math. The key takeaways were that ChatGPT was a seminal breakthrough; that LLM-based systems are not mere stochastic parrots but build genuine abstractions and can exhibit creativity; that such systems will enable a large array of new and exciting applications; and that, despite all of the above, these systems are still severely limited when it comes to reasoning. GPT-4 was released a couple of months after that, delivering very substantial improvements across the board. I remain impressed and excited by the general capabilities and potential of LLMs, and I have little doubt that their performance will continue to improve in the near future. Nevertheless, there are increasing grounds for skepticism concerning their reasoning abilities. In this position paper I will argue that the best LLM at this time, GPT-4, is utterly incapable of reasoning, in spite of its sporadic displays of ingenuity. I will largely steer clear of the much broader (and more vague) debate about whether LLMs in general are capable of (consistently robust) reasoning, but a few brief remarks will help to set the stage and clarify why it makes sense to restrict attention to a specific LLM. On one side of that broader debate, rosy predictions by LLM enthusiasts rely excessively on ever-changing scaling 'laws' that rest on flimsy empirical evidence and on a host of questionable modeling assumptions, ill-understood concepts (such as 'emergent'
|
2308.03762#0
|
2308.03762#2
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#2
|
GPT-4 Can't Reason
|
LLM properties2), and a somewhat dogmatic belief that minimizing cross-entropy loss on next-token prediction over a huge corpus will deliver a general reasoning engine via the magic of transfer learning and the construction of generic higher-level representations. On the other side of the debate, while LLM skeptics have serious arguments to make, those arguments are mostly a priori and somewhat vague (for instance, that LLMs lack â a model of the worldâ ), and I do not think they settle the question. In my view, the most compelling a priori considerations against the plausibility of reliably robust LLM reasoning turn on computational complexity results. Reasoning is a (very) compu- tationally hard problem. In fact, in the general case (ï¬ rst-order or higher-order logic), it is algorithmically undecidable, i.e., every bit as unsolvable as the halting problem. Thus, by Churchâ s thesis, we cannot expect any algorithm, LLMs included, to solve arbitrary reasoning problems in a sound and complete way.3 But even â easierâ classes of reasoning problems4 typically have either exponential or at least nontrivial polynomial- 1A modiï¬ ed version of that is being published in the journal Philosophy & Technology. 2The notion of an emergent property is clear enough, at least at a high enough level. What is not clear is the relationship between such properties and LLM architectures, their basic conï¬ gurations (number of parameters, compute budget, dataset size, and so on), and more importantly, important tasks such as reasoning. 3Or with perfect precision and recall, to put itâ more looselyâ in ML-like terms. 4Of which there are many: propositional logic, the two-variable fragment of ï¬ rst-order logic, the Ackerman fragment, the guarded fragment, various quantiï¬ er-preï¬ x fragments, and so on. 1 time complexity proï¬ les. Problem classes that have linear-time inference algorithms, such as Horn clauses over literals, are rarely expressive enough. This tradeoff between generality and expressivity on the one hand and tractability on the other means that no LLM, no matter how large or how extensively and cleverly trained and tuned, will ever be able to crack an arbitrary reasoning problem.
|
2308.03762#1
|
2308.03762#3
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#3
|
GPT-4 Can't Reason
|
And this is consistent with the famous â no free lunchâ theorem of machine learning, which points to a similar inverse relationship between model generality and performance. But LLM advocates can make a couple of cogent counterpoints, while granting that there will never be an AI oracle that can essentially solve the halting problem. First, they can point out that even though a problem might have high worst-case asymptotic complexity, it might still be solvable well enough in practice. Unlike random instances, real-world instances of reasoning problems (and indeed real-world instances of most computationally hard problems) appear to have structure that allows clever algorithms to tackle them effectively.5 There are many examples here, from the simplex algorithm for linear programming and SAT solvers to term uniï¬ cation algorithms and even automatic theorem provers for full ï¬ rst-order logic. All of these problems are hard (having at least exponential-time worst-case complexity), yet somehow we have algorithms for them that seem to work successfully on a wide variety of inputs. Second, and perhaps more important, we need not aim for an oracle anyway. Humans are not oracles either, nor do they seem to follow any particular algorithm that captures any one speciï¬ c class of reasoning problems. The ability of humans to reason is much more ï¬
|
2308.03762#2
|
2308.03762#4
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#4
|
GPT-4 Can't Reason
|
uid and messy, but impressive nevertheless. Is it impossible to build something like an LLM-based system with the reasoning ability of a well-trained engineer of average intelligence (which perhaps can then become even more intelligent and better trained by an endless process of learning and improvement)? I donâ t think that building such a system can be ruled out on a priori grounds (and here I differ from hard-core AI skeptics). I think itâ s implausible, for a number of reasons,6 but ultimately this strikes me as an empirical question that must be decided on a case-by-case basis, by subjecting a speciï¬ c system to testing, i.e., by interrogating it, probing it, and analyzing its responses. And the case I will consider here is that of GPT-4, which appears, by all accounts, to be the most capable LLM at present. There are two questions that must be addressed before we proceed. First, we must agree on what reasoning is, and second, we must say something about methodology. The next section contains a brief discussion of reasoning, but for those who wish to skip that section and dive right into the problems, the upshot is that weâ ll focus on (a liberal conception of) deductive reasoning. Regarding methodology, just like the January piece, my evaluation here is not based on a corpus or set of corpora. Instead, I present a detailed qualitative analysis of GPT-4â s performance on 21 simple reasoning problems across a wide range of areas, most of which have been made up from scratch, while the rest (such as Wasonâ s selection task) have been manually tweaked so as to make them less recognizable to the model. This is done partly to avoid data contamination, which is a serious problem affecting corpus-based eval- uations. Given how little we know about the training regimen of ChatGPT, it is impossible to know for sure whether any existing dataset or problem has effectively been â seenâ by the model during its pretraining or subsequent alignment, whether weâ re talking about NLP datasets, medical licensing exams, Python program- 5Understanding that structure and rigorously characterizing its relationship with algorithm performance (e.g., via different problem parameterizations, such as clause/variable ratios in the case of SAT) is a key open problem in theoretical computer science, but that is another matter.
|
2308.03762#3
|
2308.03762#5
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#5
|
GPT-4 Can't Reason
|
6Humans do not seem to solve problems by predicting the most likely sequence of tokens to generate. They think, explore, experiment, engage in protracted conversation with the people who posed the problem (sometimes over weeks, months, or even years), reï¬ ne, generalize, come up with new concepts and terminology, prove results, make and refute conjectures, apply heuristics, execute algorithms, analyze and synthesize, and iterate. But how solutions are generated is one thing and what solutions are generated is another, and thatâ s why itâ s not incoherent to speak of a model whose reasoning performance is roughly at the same level as that of an average human engineer. Such a claim can be understood operationally, to mean that a given LLM is able to produce roughly the same solutions that we might reasonably expect an average human engineer to produce (though obviously on a very different time scale).
|
2308.03762#4
|
2308.03762#6
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#6
|
GPT-4 Can't Reason
|
2 ming problems, LSAT or bar-entrance exams, SAT or GRE tests, and so on.7 The qualiï¬ cation â effectivelyâ is important, because even though a speciï¬ c problem might not have been seen in its exact form (in a string- matching sense), an essentially equivalent variant with a different surface formulation might well have been. Hence, simple contamination tests based on substring checks, such as those carried out by OpenAI in their GPT-4 Technical Report [8] (posted in March 2023), are not sufï¬ cient to guarantee lack of contamination.8 The absence of a large corpus makes the discussion more qualitative rather than quantitative. However, the results are arguably more informative than a numeric metric computed over a corpus, for a number of reasons. First, because contamination can be ruled out conclusively; second, because the problems span a large gamut of areas; and third, because a qualitative discussion of a problem allows for greater depth of analysis and more context in which to interpret the results. By contrast, the only way to perform a truly informative quantitative evaluation is to come up with a brand new corpus that satisï¬ es all of the following criteria: (a) originality; (b) uniformly high quality; (c) sufï¬ ciently large size; and (d) diversity (not being limited to one type of task only). This is a very challenging undertaking. Even then, a few simple numeric metrics on a brand new dataset might not be particularly illuminating. Are the numbers measuring the right things? Do we even know the right things to measure? Is there an appropriate backdrop in which the numbers can be understood? For deeper insight, we need to put individual examples under a magnifying glass. This is particularly important because we need to scrutinize the explanations (â chains of thoughtâ ) gener- ated by a reasoner. Unfortunately, almost all reasoning corpora comprise either multiple-choice questions or binary classiï¬ cation problems (e.g., â Does sentence p2 follow from premise p1, yes or no?â ). Why? Mostly because it is easy to mechanically evaluate model performance on such datasets.
|
2308.03762#5
|
2308.03762#7
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#7
|
GPT-4 Can't Reason
|
But even in the absence of contamination, this type of test set runs the serious risk that the LLM will manage to pick the right answers by latching on to spurious statistical regularities, i.e., to arrive at the right answers for the wrong reasons [6, 10].9 Adversarial augmentation of an existing dataset might help, especially if we know what we are trying to guard against, but unless an adversarial version restores near-random performance, this can quickly devolve into a game of whac-a-mole, where we detect a new round of bogus regularities exploited by the model and must undertake a new round of adversarial interventions. Ultimately, there is really no proper way to assess the reasoning ability of a system unless we ask it to explain its output. This is an essential part of reasoning, which is not about producing the right answer by hook or by crook but about deriving the right answer for the right reasons. And rote metrics like ROUGE-L are not ï¬ t for purpose here. We need to roll up our sleeves and analyze LLM explanations and proof attempts manually. We also need to gauge their performance in a dialog setting (e.g., what happens when a reasoning error is pointed out to them?). This is the sort of analysis undertaken in this paper. I believe the results show unequivocally that GPT-4 cannot reason. The errors are too pervasive and too egregious. GPT-4 doesnâ t solve even one of the 21 problems discussed here. But much more concerning are the fundamentally ï¬ awed explanations and proof attempts it produces along the way. LLM believers will probably demur: But humans also make mistakes, and surely weâ re not prepared to say that humans canâ t reason just because they make mistakes? First, it is not accurate to say without qualiï¬ cation that â humans can reason,â certainly not in the sense that we can randomly pluck any person from the street and expect them to reliably perform normatively correct reasoning. Most neurobiologically normal humans 7According to the analysis carried out by the lm-contamination index, well-known NLP datasets such as Squad, CoNLL03, MNLI, and others, are indeed contaminated, while several others are at best suspicious.
|
2308.03762#6
|
2308.03762#8
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#8
|
GPT-4 Can't Reason
|
8In fact, the substring checks carried out by OpenAI were not even applied on the entire problem instance, only on 3 randomly selected substrings of 50 characters each. This is not enough to ensure disjointness for long (or even moderately long) problems, which are quite common in tests like the UBE (Uniform Bar Exam). 9Models have been shown to leverage the presence of certain cue words (especially negation words) and to formulate quick-and- dirty (i.e., unsound) heuristics such as lexical overlap, subsequence, and constituency [6]. Most of these results are from 2019 and revolve around BERT, but more recent work [9] has shown that while larger foundational models such as ChatGPT are more robust to input perturbations and OOD (out-of-distribution) samples, these continue to be challenges, suggesting that even ChatGPT-scale models learn unsound shortcuts.
|
2308.03762#7
|
2308.03762#9
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#9
|
GPT-4 Can't Reason
|
3 have the capacity to become proï¬ cient in reasoning, but actually attaining such proï¬ ciency takes signiï¬ cant training and discipline. Humans are known to be susceptible to a large assortment of cognitive biases, which can only be overcome by rigorous instruction. Focusing on the reasoning skills of untrained people is a bit like focusing on the singing skills of the general population. Everybody sings in the shower, but without formal training (or at least exceptional talent) the results are usually regrettable. Of course, even sophisticated human reasoners make mistakes, just like trained singers can hit false notes. But if a human made these mistakes, the ones reported in this article, then I would conclude without any hes- itation that they cannot reason. Even if they went on to list a large number of other examples demonstrating impeccable reasoning, I would suspect that other factors (such as rote memorization or cheating) were behind the performance discrepancy. For the mistakes reported here are not performance mistakes, the sort of innocu- ous errors that humans might makeâ and promptly correctâ when they are careless or tired. If a human made these mistakes, and made them consistently under repeated questioning, that would indicate without doubt that they donâ t have the necessary logical competence, that they lack fundamental concepts that are part and parcel of the fabric of reasoning, such as logical entailment and set membership. And I would certainly not entrust that person with generating reams of Python or Javascript code for an enterprise. Nor would I start organizing international conferences to investigate how their reasoning prowess might threaten humanity with extinction. # 1.2 What is Reasoning? Reasoning is not quite the same thing as intelligence, but itâ
|
2308.03762#8
|
2308.03762#10
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#10
|
GPT-4 Can't Reason
|
s a necessary ingredient for it. Broadly put, reasoning is the process of drawing and evaluating conclusions from a given body of information. More precisely, it is the process of making andâ more importantlyâ justifying arguments. An argument consists of a conclusion (the argumentâ s upshot, so to speak) and a set of premises from which the conclusion is derived. Premises represent information that is taken as given, if only provisionally, for the purposes of the argument. The conclusion and the premises are typically declarative sentences (expressed either in natural language or in the notation of a symbolic logic) that can be true or false, but they may also be represented by alternative notational devices, such as diagrams. We say that a set of premises S logically entails (or logically implies) a conclusion p iff p is true whenever all the sentences in S are true, in which case the argument is said to be valid. This means that itâ s logically impossible to have a state of affairs in which every element of S holds but p does not. This key logical relationship is a linchpin of human reasoning.10 10Here we understood premises and conclusions as syntactic objects (sentences or diagrams), but there are alternative approaches. For instance, a semanticist might think of premises and conclusions as propositions, abstract objects capable of being true or false. A sentence then expresses or represents a proposition. Propositions are handy theoretical entities for many reasons. For example, they can serve as the objects of psychological attitudes such as beliefs and desires.
|
2308.03762#9
|
2308.03762#11
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#11
|
GPT-4 Can't Reason
|
What do I mean when I claim to believe that Obama won the 2012 presidential election? Surely I donâ t believe a particular sentence, i.e., a speciï¬ c syntactic object like â Obama won the 2012 US presidential electionâ (I). Rather, I believe something about the way the world actually is. That something can be understood as a proposition, a unique entity that can expressed by many different equivalent sentences. Propositions can be cashed out in modal terms, as sets of possible worlds (or as â situationsâ in situation-theoretic semantics [2]). A possible world is a way in which things might have been, but described completely, down to the most minute detail (unlike situations, which can be thought of as partial speciï¬ cations of worlds). So the proposition that Obama won the 2012 US presidential election is identiï¬ ed with the set of all possible worlds in which Obama won that election. This set becomes the information content of sentences such as (I). Propositions can also serve to analyze fundamental semantic notions such as entailment. A set of premises {p1, . . . , pn} entails a conclusion p iff the intersection of the sets of possible words represented by all the pi is a superset of the set of worlds represented by p. This is another way of understanding the claim that the conclusion of a valid deductive argument does not introduce any information that is not already contained in the premises. Note, however, that while the possible-worlds approach to propositions is very powerful, it also suffers from severe defects, as it is notoriously coarse-grained, meaning that it cannot distinguish between propositions that we intuitively regard as quite distinct. This is perhaps easier to see in the case of mathematical truths, which, being necessary (true in all possible worlds), are collapsed into one and the same object, the set of all possible worlds (and dually, of course, all contradictions are identiï¬ ed with the empty set of worlds). As a result, the proposition that 1 + 1 = 2 and Fermatâ s theorem become identical, as they have the exact same information content. There have been attempts to address these issues (structured propositions and impossible
|
2308.03762#10
|
2308.03762#12
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#12
|
GPT-4 Can't Reason
|
4 Valid deductive arguments (whose conclusions are entailed by the premises) are said to be analytical (or sometimes tautological), insofar as, technically speaking, they convey no information.11 This idea is also sometimes expressed by calling such arguments non-ampliative, meaning that there is no information contained in the conclusion that is not already containedâ if only latentlyâ in the premises. Deduction is the process of making and justifying non-ampliative arguments. Deductive arguments are typically justiï¬ ed by proofs, which are sequences of inference steps, each of which applies an inference rule to a number of premises and/or results of previous steps and derives a new result. The last step derives the ï¬ nal conclusion of the proof. An inference rule may be low-level and easy to apply or higher-level and computationally expensive. But all inference rules are required to be sound (or truth-preserving), that is, they must ensure that if the inputs are true then so is the output. All mathematical proofs are deductive, and mathematical reasoning in general is predominantly deductive.12 The conventional view is that some arguments are ampliative, meaning that the conclusion is not quite entailed by the premises. In other words, it is possible for the premises to be true while the conclusion is false. These are typically subdivided into inductive and abductive arguments,13 although some authors view induction as a species of abduction, and even more authors view abduction as a species of induction.
|
2308.03762#11
|
2308.03762#13
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#13
|
GPT-4 Can't Reason
|
There is no rigorous deï¬ nition of either, but roughly, the premises of a good inductive argument make its conclusion likely, though never quite certain (in contrast to deduction, where the truth of the premises guarantees the truth of the conclusion). Induction can generate speciï¬ c conclusions from all kinds of premises (speciï¬ c or general), but often it proceeds from speciï¬ c individual observations o1, . . . , on to a more general hypothesis H that subsumes the individual oi in some sense (for instance, H may be a universally quantiï¬ ed sentence and the oi could be instances of that sentence). Much of what ML algorithms do can be viewed as inductive reasoning. For instance, a linear-regression algorithm might take as input n datapoints about car models, where each datapoint is of the form di = ((ci, hi, yi), mi) for i = 1, . . . , n, where ci is the number of cylinders for the ith car model, hi is the horsepower, yi is the model year, and the dependent variable mi is the mpg (miles per gallon). And it might produce as output a formula like m = w1 · c + w2 · h + w3 · y + b, which predicts the mpg of a car model from its number of cylinders, horsepower, and model year.14 Here w1, w2, w3, and b
|
2308.03762#12
|
2308.03762#14
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#14
|
GPT-4 Can't Reason
|
worlds being two of the most prominent), but the interested reader will have to consult the literature for more details. 11This can be made more precise using information-theoretic notions, at least in the case of propositional logic, where we have an inï¬ nite supply of formulas that are either atomic (propositional variables) or else Boolean combinations of formulas. Instead of imposing the usual Kolmogorov axioms on a probability measure deï¬ ned over a set of events (a Ï -ï¬ eld) from a sample space â ¦, we impose the same axioms (non-negativity, ï¬ nite additivity, and the axiom that assigns a measure of 1 to every tautologyâ the analogue of P(â ¦) = 1) on a probability measure deï¬ ned over the set of all formulas. Then truth and falsity become the extreme probabilities of 1 and 0, respectively. This allows us to associate a probability P(Ï ) with any sentence (event) Ï , and hence every sentence Ï automatically gets an information content in the usual way: IC(Ï ) = â log P(Ï ). To say that the information content of a valid deductive argument with premises {p1, . . . , pn} and conclusion p is zero is simply to say that the conditional p1 â § · · · â § pn â p is a tautology. By deï¬ nition, a tautology Ï has probability 1, and therefore IC(Ï ) = 0. 12At this point the reader might ask: If deductive arguments convey zero information, why bother with them? Indeed, if all mathematical proofs are proofs of tautologies, with zero information content, what is their point? The thinking is that arguments with no information content are not useful, so if all deductive arguments (including all mathematical results) have zero information content, then they are not useful. This is, in brief, the so-called â scandal of deductionâ (named by parity to the â scandal of induction,â i.e., Humeâ s problem of induction). There have not been any widely accepted resolutions of this ostensible paradox.
|
2308.03762#13
|
2308.03762#15
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#15
|
GPT-4 Can't Reason
|
But few of course doubt that mathematical results are actually informative and extend our knowledge. (Surely if we woke up tomorrow and read that someone proved P 6= NP, that would be tremendously informative.) Itâ s also clear that the word â informationâ has a number of informal meanings that are not captured by the canonical deï¬ nition of information content (as the negative logarithm of probability), and most efforts to resolve the â scandal of deductionâ have attempted to formalize distinct notions of informational gain that would render deductive arguments informative. 13Several other types of reasoning are often discussed in the literature, such as analogical reasoning (which includes, for instance, case-based reasoning), Bayesian reasoning, causal reasoning, and so on, but these are usually subsumed under one of the three main categories I have described, most often under induction. (But there is no consensus, for instance, some thinkers, from Aristotle to recent authors, have tried to assimilate analogical reasoning under deduction.) 14We are assuming of course that the car model whose mpg we are predicting was not included in the given data, otherwise there would be no prediction or generalization involved.
|
2308.03762#14
|
2308.03762#16
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#16
|
GPT-4 Can't Reason
|
5 are speciï¬ c numbers (weights) representing a hyperplane that minimizes the mean squared error for the input data (meaning that the hyperplane determined by these weights might not ï¬ t the n datapoints perfectly, but it does so better than the hyperplane determined by any other set of weights).15 The main distinguishing feature of abductive reasoning is a strong emphasis on explanation. Abduction consists mostly in making and justifying arguments that explain a set of facts. If one day I come home early from work and I see a plumberâ s van parked in my neighborsâ driveway, I might conclude that my neighbors are having some plumbing work done in their house. The premise here is â There is a plumbing van parked in my neighborsâ drivewayâ and the conclusion is â My neighbors are having plumbing work done in their house.â This is sometimes called â inference to the best explanation,â because the conclusion serves to explain the premise(s). This is also a form of ampliative reasoningâ the conclusion does not follow logically from the premises. There are many alternative explanations of a given set of facts or observations (perhaps a plumber parked there temporarily, or the neighbors bought the van, or the neighbors have a plumber friend who is making a social visit, and so on). A good abductive inference will yield a hypothesis that has more explanatory value than competing hypotheses. But how exactly to measure the quality of an abductive piece of reasoning is an open question.16 Note that it doesnâ t take a large leap of imagination to view induction as a form of abduction. Observing a large number of black (and only black) swans and then conjecturing that all swans are black could be seen as abductive reasoning, as the conclusion â x . swan(x) â color(x) = black would explain all the observed data. Linear regression can also be seen as the making of an abductive hypothesis, as can (much more generally) Maximum Likelihood Estimation, a principle that underlies many ML algorithms and is often associated with induction.
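To make the mpg example above concrete, here is a minimal Python sketch (an illustration of mine with made-up datapoints, not taken from the paper) of how a hypothesis of the form m = w1 · c + w2 · h + w3 · y + b is induced by least squares:

```python
import numpy as np

# Illustrative, made-up datapoints: (cylinders, horsepower, model year) -> mpg.
X = np.array([
    [4,  90, 80],
    [6, 150, 76],
    [8, 200, 73],
    [4, 110, 82],
    [6, 120, 78],
], dtype=float)
m = np.array([32.0, 20.0, 14.0, 30.0, 24.0])

# Append a column of ones so the intercept b is fitted along with w1, w2, w3.
X1 = np.hstack([X, np.ones((X.shape[0], 1))])

# Least-squares fit: the weights that minimize mean squared error on the data.
w1, w2, w3, b = np.linalg.lstsq(X1, m, rcond=None)[0]
print(f"induced hypothesis: m = {w1:.2f}*c + {w2:.2f}*h + {w3:.2f}*y + {b:.2f}")

# The induced formula can then be applied to an unseen car model (4 cyl, 100 hp, year 81).
print(w1 * 4 + w2 * 100 + w3 * 81 + b)
```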
|
2308.03762#15
|
2308.03762#17
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#17
|
GPT-4 Can't Reason
|
All of the above is received wisdom, but itâ s worth mentioning that there have been thinkers, called â de- ductivistsâ (ranging from philosophers such as Popper and Musgrave to statisticians such as Fisher), who contend that deduction is the only real form of reasoning there is, insofar as itâ s the only one for which we have a rigorous and properly understood formal notion of validity; and that other (ampliative) arguments are best understood as reconstructed deductions, typically as enthymemes (arguments that omit tacitly understood premises).
|
2308.03762#16
|
2308.03762#18
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#18
|
GPT-4 Can't Reason
|
I ï¬ nd that position congenial,17 but venturing into that discussion would take us too far aï¬ eld. For present purposes it sufï¬ ces to say that we will focus on deduction, because it is the type of reasoning that underpins most logico-mathematical thought and for which we have clear normative standards of evaluation. An important note: I view the discovery and justiï¬ cation of particular models (including counterexamples and countermodels in general) as part and parcel of reasoning. This is not a controversial view; some cognitive scientists view models and associated cognitive processes as the fundamental ingredients of human reasoning [11]. In addition, however, I view model-based reasoning as at least partly deductive, because even though the actual process of discovering models might not be a process of deduction18, its outcome is a claim (namely, that a given interpretation satisï¬ es a set of premises) that can be veriï¬ ed or falsiï¬ ed deductively, taking as premises the deï¬ nition of the model itself and possibly other general knowledge about the modelâ s domain. Indeed, I will consider even computation as a form of deduction, because a particular computation can be naturally regarded as a deductive derivation of a conclusion of the form f (e1, . . . , en) = v, where f (e1, . . . , en) is the application of an arbitrary function f to arbitrary argument expressions e1, . . . , en, ultimately yielding value 15The training of deep neural networks, too, works by trying to discover values for various weights that are â optimalâ for a given training dataset (in that they minimize loss), except that in their case the relationship between the inputs, outputs, and weights can be much more complicated (non-linear) and the training algorithm might not converge to the optimal weight values. 16Some desired properties of explanations are obvious. Truth is one of themâ a good explanation cannot be based on a false hypothesis. But other desired properties, such as parsimony and generality (explaining as much as possible while assuming as little as possible) are much harder to explicate. 17Even from a purely linguistic viewpoint, it doesnâ t seem appropriate to say that I have â concludedâ or â derivedâ or â inferredâ anything at all in the swan or in the plumber examples.
|
2308.03762#17
|
2308.03762#19
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#19
|
GPT-4 Can't Reason
|
I have simply made a tentative hypothesis (or conjecture), which might be refuted. 18In the same way that even the process of discovering deductions is not itself deductive, at least not entirely so. Both are funda- mentally search processes, though they are almost certainly informed and generally penetrated by deduction. 6 v as the result. The premises for the derivation consist of the deï¬ nition of f and possibly other auxiliary functions, along with the usual equational axioms (reï¬ exivity, symmetry, transitivity, and functional/relational congruence).19 # 1.3 Test Problems This section will start with the usual caveat: GPT-4 is a nondeterministic system that might produce different answers on different runs, even with the same parameter settings. All of the following exchanges with GPT-4 have been transcribed verbatim, and in my experience the errors discussed here tend to be robust, but itâ s conceivable that for a given example GPT-4 might generate a different output even in response to the exact same prompt.20
|
2308.03762#18
|
2308.03762#20
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#20
|
GPT-4 Can't Reason
|
# 1.3.1 Simple Arithmetic The ability to perform basic arithmetic is a necessary ingredient for reasoning. GPT-4 is still unable to reliably perform elementary arithmetic operations such as addition and multiplication. To ensure that GPT-4 isn't falling back on rote memorization, we can ask it to first select two random integers in a range of our choice and then perform the operation on the selected values: KA: Select two random numbers between 1381 and 1453 and multiply them together, reporting the result. GPT-4: Sure, let's select two random numbers in the range of 1381 to 1453. Let's say these numbers are 1405 and 1421. To get the product, we simply multiply these two numbers together: 1405 * 1421 = 1996025 Alas, the correct answer is 1405 · 1421 = 1996505.
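For reference, the product is trivial to verify outside the model; the following minimal Python sketch (not part of the original exchange) confirms the correct value:

```python
# Sanity check of the product GPT-4 was asked to compute; 1405 and 1421 are the
# numbers GPT-4 itself selected in the exchange above.
a, b = 1405, 1421
assert 1381 <= a <= 1453 and 1381 <= b <= 1453   # both factors lie in the requested range

product = a * b
print(product)                 # 1996505, not the 1996025 reported by GPT-4
assert product == 1996505
```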
|
2308.03762#19
|
2308.03762#21
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#21
|
GPT-4 Can't Reason
|
One response here from the LLM camp has been to concede that GPT-4 is not a Turing-complete framework and cannot perform general-purpose computation: It can't add, multiply or divide, it can't sort lists, it can't invert matrices or solve systems of differential equations, it can't compile arbitrary programs to machine code, it can't derive theorems, and so on ad infinitum. Nevertheless, the response goes, these computational 'gaps' can be filled by plug-ins, along the lines of the ReAct (short for 'Reasoning and Acting') [4] or MRKL (short for 'Miracle') [3] frameworks.
|
2308.03762#20
|
2308.03762#22
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#22
|
GPT-4 Can't Reason
|
It is this line of work that has led to systems such as LangChain and AutoGPT. The idea here is to use the LLM as a planner to recursively decompose a complex problem into a sequence of subproblems that can be dispatched to appropriate specialized agents via appropriate APIs. The problem with this approach is its circularity. Planning itself requires reasoning, so this is a Catch-22 situation: We can't solve reasoning by delegating to appropriate agents, because figuring out the delegation (how the problem should be decomposed, which agents to call and how, and how to compose the results) is itself computationally infeasible. It not only requires the ability to understand natural language, but also the ability to reason about preconditions and effects. And this is assuming a fixed collection of agents with clear-cut APIs.21 Even under these overly simplistic assumptions, planning is very computationally expensive (PSPACE-complete), and radical simplifications are needed to scale down the complexity even to the level of NP-completeness. Under more realistic assumptions, planning is performed under incomplete knowledge and it is mechanically undecidable. 19This viewpoint assumes a functional-programming stance, but computation can be readily reduced to deduction in any other style of programming (e.g., imperative) by an appropriate axiomatic formulation of the relevant semantics (e.g., operational semantics using stores). 20In addition, of course, different versions of GPT-4 might get deployed at any time. 21An unrealistic assumption given that the Internet is filled with an unbounded number of agents (millions of them, from completely arbitrary computer programs to smart-phone apps to travel-booking APIs to games and beyond) that provide an open-ended and constantly changing array of functionality.
|
2308.03762#21
|
2308.03762#23
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#23
|
GPT-4 Can't Reason
|
As we understand them today, expecting generic LLMs to tackle planning with any degree of success is naive. # 1.3.2 Simple Counting While concrete counting is not necessarily a reasoning activity,22 it is surely a requirement for any generally capable reasoning system. Here I give GPT-4 a propositional variable with 27 negation signs in front of it and ask it to count the number of negations. For a human this would be an easy task, especially because the negation signs are written in five blocks with five tildes each, followed by a final pair of negation signs. KA: How many times is p negated in the following formula: ∼ ∼ ∼ ∼ ∼ ∼ ∼ ∼ ∼ ∼ ∼ ∼ ∼ ∼ ∼ ∼ ∼ ∼ ∼ ∼ ∼ ∼ ∼ ∼ ∼ ∼ ∼ p? GPT-4: The formula you've given includes 28 negation symbols ('∼') in front of the propositional variable 'p'.
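The count itself is equally easy to verify mechanically; a minimal Python sketch (mine, not part of the original exchange), written with ASCII tildes:

```python
# The prompt's formula: five blocks of five negation signs plus a final pair,
# i.e., 27 tildes in front of p (rendered here with ASCII '~').
formula = "~" * 27 + "p"

num_negations = formula.count("~")
print(num_negations)           # 27, not the 28 (or 50, or 54) that GPT-4 reports

# An odd number of negations leaves a net negation: the formula is equivalent to ~p, not p.
print("net negation" if num_negations % 2 == 1 else "no net negation")
```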
|
2308.03762#22
|
2308.03762#24
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#24
|
GPT-4 Can't Reason
|
In logic, the negation symbol changes the truth value of a statement to its opposite. If a statement is true, its negation is false, and vice versa. Double negation, like '∼∼p', is equivalent to 'p', because negating a proposition twice returns it to its original truth value. Given this, to find the effective number of negations, we can simply divide the total number of negation symbols by 2. If there's a remainder, then there is a net negation. If not, then there isn't. 28 divided by 2 is 14, with no remainder, meaning there's no net negation. So effectively, the variable 'p' is not negated in the given formula. It's equivalent to just 'p'. A difference of one negation sign might not seem like a serious matter, until you realize that it makes all the difference in logical import,23 as GPT-4's own (unsolicited) explanation stresses. Repeating this experiment multiple times gave widely different results (including 50 and 54), even after explicitly telling GPT-4 to take its time and count carefully. Here is another example. I greet GPT-4 five times in a row and then I ask it a simple question: KA: Hi GPT-4: Hello! How can I assist you today? KA: Hello GPT-4: How can I assist you today? KA: Greetings GPT-4: Greetings! How can I assist you today? KA: Hi there. GPT-4: Hi there! How can I help you today? KA: Hullo! GPT-4: Hello! How can I assist you today? KA: You've been greeted a few times so far. Three of those greetings were actually from my girlfriend, the rest from me. How many times did I greet you?
|
2308.03762#23
|
2308.03762#25
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#25
|
GPT-4 Can't Reason
|
GPT-4: If your girlfriend greeted me three times and all the other greetings were from you, then you greeted me four times. This is assuming that all seven greetings ('Hi', 'Hello', 'Greetings', 'Hi there', 'Hullo!') were included in the count. 22By concrete counting I mean counting a number of specific object tokens instantiated in space and time, as in the coins in one's pocket or the number of lines in a text file.
|
2308.03762#24
|
2308.03762#26
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#26
|
GPT-4 Can't Reason
|
By contrast, abstract counting based on combinatorial principles, search procedures, and logical constraints (like the scheduling problem in Section 1.3.9) is indeed a reasoning activity. 23In the same way that the numbers 100000 and 1000000 only differ in one zero, but if we are talking about your bank balance that one zero makes a huge difference. In this case the arithmetic is correct (7 − 3 = 4) but the count (and ultimately the resulting answer)
|
2308.03762#25
|
2308.03762#27
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#27
|
GPT-4 Can't Reason
|
is hallucinated. # 1.3.3 (Medical) Common Sense In the present setting we may regard commonsensical arguments as straightforward enthymematic deductions of conclusions from given information plus unstated premises that constitute tacit, generally accepted background knowledge. In this particular case, such common-sense knowledge would be propositions like 'A person is alive until they die, after which they do not become alive again.' KA: Mable's heart rate at 9 AM was 75 bpm and her blood pressure at 7 PM was 120/80. She died at 11 PM. Was she alive at noon?
|
2308.03762#26
|
2308.03762#28
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#28
|
GPT-4 Can't Reason
|
GPT-4: Based on the information provided, it's impossible to definitively say whether Mable was alive at noon. The data points provided (heart rate at 9 AM and blood pressure at 7 PM) both indicate that she was alive at those times. However, without additional information about her condition between 7 PM and 11 PM, or at noon specifically, we can't make a certain determination about her status at noon. Remember, medical information can be complex and nuanced. Understanding a person's health status often requires a comprehensive examination and should be performed by a healthcare professional. If this is a hypothetical question, keep in mind that real-life situations would require more complete information. No comment is needed here, though the gratuitous (but by now de rigueur) piece of legalese issuing the vague disclaimer at the end ('this-is-a-complex-issue-and-make-sure-to-consult-with-a-professional') seems ironic. # 1.3.4 Elementary Logic If P(x) implies Q(x) and Q(a) does not hold then we can infer, by modus tollens, that P(a) does not hold either (because if it did then Q(a) would too). This is as elementary a tautology as can be, yet GPT-4 is perfectly willing to produce a countermodel: KA: Find a model in which P(x) implies Q(x), Q(a) does not hold, and P(a) holds. GPT-4: To find a model in which P(x) implies Q(x), Q(a) does not hold, and P(a) holds, let's consider the following interpretations: 1. P(x): x is an even number 2. Q(x): x is a positive number 3. a: −2 Now, let's check if these interpretations satisfy the given conditions: 1. P(x) implies Q(x): If x is an even number, then x is a positive number. This statement is not universally true, as we can have negative even numbers. However, it does not preclude the existence of a model with the other given conditions.
|
2308.03762#27
|
2308.03762#29
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#29
|
GPT-4 Can't Reason
|
2. Q(a) does not hold: Q(−2) means −2 is a positive number, which is false. So, this condition is satisfied. 3. P(a) holds: P(−2) means −2 is an even number, which is true. So, this condition is also satisfied. Thus, the model with the interpretations P(x): x is an even number, Q(x): x is a positive number, and a: −2 satisfies the conditions P(x) implies Q(x), Q(a) does not hold, and P(a) holds. Observe that GPT-4 recognizes that P(x) does not in fact imply Q(x) under the interpretation it gave, but it nevertheless dismisses this inconvenient consideration because 'it does not preclude the existence of a model with the other given conditions,' as if the satisfaction of the other conditions was enough (it isn't; a countermodel must satisfy all of the given premises while falsifying the conclusion). Moreover, just a few sentences later GPT-4 claims that P(x) does imply Q(x) under the given interpretation, contradicting its own earlier claim. We will see that internal inconsistency is a recurrent theme. When we ask it to elaborate on why it thinks that P(x) implies Q(x), GPT-4's response is revealing: KA: Can you explain why this model satisfies the first constraint (namely, that P(x) implies Q(x))? GPT-4: You are correct that my previous explanation was not clear enough for the first constraint. I apologize for that. In the given model, P(x) does not necessarily imply Q(x). Let's reconsider the model and find one that satisfies all the constraints. 1. P(x): x is a multiple of 4 2. Q(x): x is an even number 3. a: 4 Now let's check if these interpretations satisfy the given conditions: 1. P(x) implies Q(x): If x is a multiple of 4, then x is an even number. This statement is universally true because every multiple of 4 is also a multiple of 2, making it an even number. 2. Q(a) does not hold:
|
2308.03762#28
|
2308.03762#30
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#30
|
GPT-4 Can't Reason
|
Q(4) means 4 is an even number, which is true. So, this condition is not satisfied. To satisfy all three conditions, we can modify the interpretations as follows: 1. P(x): x is a positive multiple of 4 2. Q(x): x is a positive even number 3. a: −4 Now let's check if these interpretations satisfy the given conditions: 1. P(x) implies Q(x): If x is a positive multiple of 4, then x is a positive even number. This statement is universally true because every positive multiple of 4 is also a multiple of 2, making it a positive even number. 2. Q(a) does not hold: Q(−4) means −4 is a positive even number, which is false. So, this condition is satisfied. 3. P(a) holds: P(−4) means −4 is a positive multiple of 4, which is false. So, this condition is not satisfied. It seems that finding a model that satisfies all three conditions is challenging. I must correct my previous claim: there may not be a model that satisfies all three conditions, P(x) implies Q(x), Q(a) does not hold, and P(a) holds, simultaneously. Thus, after multiple unsuccessful attempts to find a countermodel, GPT-4 acknowledges that such a countermodel might not exist. But, remarkably, when GPT-4 is asked to settle the matter one way or the other, it jumps back to the position that a countermodel does exist after all.
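The non-existence of such a countermodel can also be confirmed mechanically; the brute-force sketch below (my own illustration, not from the paper) enumerates all interpretations of P and Q over a tiny domain and finds none meeting all three conditions:

```python
from itertools import product

# Brute-force search over every interpretation of the unary predicates P and Q
# on a small finite domain, looking for one in which (P(x) implies Q(x)) holds
# for all x, Q(a) fails, and P(a) holds.  Modus tollens says none can exist.
domain = [0, 1, 2]
a = 0

countermodels = []
for P_bits in product([False, True], repeat=len(domain)):
    for Q_bits in product([False, True], repeat=len(domain)):
        P = dict(zip(domain, P_bits))
        Q = dict(zip(domain, Q_bits))
        implication_holds = all((not P[x]) or Q[x] for x in domain)
        if implication_holds and not Q[a] and P[a]:
            countermodels.append((P, Q))

print(countermodels)   # [] -- no countermodel, exactly as modus tollens dictates
```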
|
2308.03762#29
|
2308.03762#31
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#31
|
GPT-4 Can't Reason
|
KA: Can you prove that one exists or that one does not exist? GPT-4: I apologize for the confusion earlier. Let me provide a formal proof to show that a model that satisfies all three conditions P(x) implies Q(x), Q(a) does not hold, and P(a) holds does indeed exist. Let's consider a model with the following interpretations: 1. P(x): x is an element of set A 2. Q(x): x is an element of set B 3. a: a specific element, a Now, let's define the sets A and B as follows: Set A = {a}. Set B = {x | x is an element of A and x ≠ a}. Now, let's check if these interpretations satisfy the given conditions: 1. P(x) implies Q(x): If x is an element of set A, then x is an element of set B. In this case, since the only element in set A is a, and set B is defined such that it contains all elements of A except a, this implication is vacuously true. This is because there are no other elements in set A for which the implication would be false.
|
2308.03762#30
|
2308.03762#32
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#32
|
GPT-4 Can't Reason
|
2. Q(a) does not hold: Q(a) means a is an element of set B. By definition, set B does not include a, so this condition is satisfied. 3. P(a) holds: P(a) means a is an element of set A. Since set A is defined to contain only a, this condition is also satisfied. Thus, the model with the interpretations P(x): x is an element of set A, Q(x): x is an element of set B, and a: a specific element, satisfies the conditions P(x) implies Q(x), Q(a) does not hold, and P(a) holds.
|
2308.03762#31
|
2308.03762#33
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#33
|
GPT-4 Can't Reason
|
The response verges on nonsensical. GPT-4 has defined B (in a roundabout way) as the empty set ∅, so that Q is always false. This means that P(x) cannot possibly imply Q(x) as long as there is a single value of x for which P(x) holds, and GPT-4 itself has provided such a value: x = a. Yet somehow GPT-4 claims that P(x) implies Q(x). KA: Can you again explain why P(x) implies Q(x) in this model? GPT-4: You are correct. My previous explanation for P(x) implies Q(x) was not accurate. I apologize for the confusion. I will now provide a correct model that satisfies all three conditions. Let's consider a model with the following interpretations: 1. P(x): x is divisible by 3 2. Q(x): x is divisible by 2 3. a: 3 Now, let's check if these interpretations satisfy the given conditions: 1. P(x) implies Q(x): If x is divisible by 3, then x is divisible by 2. This statement is not universally true. However, we can create a domain D that contains only one element, a = 3, which is divisible by 3 but not by 2. By restricting the domain, the implication P(x) implies Q(x) becomes vacuously true within the domain D because there are no elements in D for which the implication would be false.
|
2308.03762#32
|
2308.03762#34
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#34
|
GPT-4 Can't Reason
|
2. Q(a) does not hold: Q(3) means 3 is divisible by 2. Since 3 is not divisible by 2, this condition is satisfied. 3. P(a) holds: P(3) means 3 is divisible by 3. Since 3 is divisible by 3, this condition is also satisfied. Thus, the model with the interpretations P(x): x is divisible by 3, Q(x): x is divisible by 2, and a: 3, and the domain D = 3 satisfies the conditions P(x) implies Q(x), Q(a) does not hold, and P(a) holds. # 1.3.5 Simple Quantifier Semantics Here we give GPT-4 two easy problems to test its understanding of quantifiers. Here is the first problem: KA: Consider the following three sentences: 1. [forall x . P(x) ==> Q(x)] 2. [exists x . P(x)] 3. [exists x . ∼Q(x)] Either disprove or prove the following claim: These three sentences are jointly satisfiable. GPT-4: The three sentences are not jointly satisfiable. Here's why: 1. [forall x . P(x) ==> Q(x)] says that for all x, if P(x) is true then Q(x) is true. 2. [exists x . P(x)] says that there exists at least one x such that P(x) is true. 3. [exists x . ∼Q(x)] says that there exists at least one x such that Q(x) is not true. If we consider sentence 2, there exists at least one x such that P(x) is true. According to sentence 1, for this x (or these x's), Q(x) must also be true. However, sentence 3 states that there exists at least one x for which Q(x) is not true. Given sentences 1 and 2, this cannot be the case - there cannot exist an x for which P(x) is true and yet Q(x) is not true. Therefore, the three sentences are not jointly satisfiable. The sentences are clearly jointly satisfi
|
2308.03762#33
|
2308.03762#35
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#35
|
GPT-4 Can't Reason
|
able, a trivial model being the domain {a1, a2} with P (a1), Q(a1), ¬P (a2), and ¬Q(a2). The second problem concerns this biconditional: [∀x . P(x) ⇔ Q(x)] ⇔ [(∀x . P(x)) ⇔ (∀x . Q(x))] (1.1) The left-to-right implication holds, but the right-to-left direction fails. Counterexamples are easy to find, for example, take the domain to be integers, P (x) to mean x is even and Q(x) to mean x is odd. Then the equivalence on the right-hand side of (1.1) holds, but clearly it's not true that every integer is even iff it is odd. KA: Prove or disprove the following: (forall x . P(x) <==> Q(x)) holds if and only if the following biconditional holds: (forall x . P(x)) <==> (forall x . Q(x)). GPT-4: We will prove this statement by showing that both directions of the main biconditional hold.
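Both claims in this paragraph are easy to machine-check on small finite domains. The sketch below (plain Python; the two-element domain and the finite slice of the integers are assumptions made for illustration) verifies the trivial model for the three sentences and reproduces the even/odd counterexample to the right-to-left direction of (1.1):

```python
# Claim 1: the two-element model {a1, a2} with P(a1), Q(a1), not P(a2), not Q(a2)
# satisfies all three sentences from the first problem.
domain = ["a1", "a2"]
P = {"a1": True, "a2": False}
Q = {"a1": True, "a2": False}

s1 = all((not P[x]) or Q[x] for x in domain)  # forall x . P(x) ==> Q(x)
s2 = any(P[x] for x in domain)                # exists x . P(x)
s3 = any(not Q[x] for x in domain)            # exists x . ~Q(x)
print(s1 and s2 and s3)                       # True: the three sentences are jointly satisfiable

# Claim 2: even/odd is a counterexample to the right-to-left direction of (1.1),
# checked here on a finite slice of the integers.
ints = range(-5, 6)
even = lambda x: x % 2 == 0
odd = lambda x: x % 2 != 0

lhs = all(even(x) == odd(x) for x in ints)                     # forall x . P(x) <==> Q(x)
rhs = all(even(x) for x in ints) == all(odd(x) for x in ints)  # (forall x . P(x)) <==> (forall x . Q(x))
print(lhs, rhs)                                                # False True: the RHS holds, the LHS fails
```

The first check prints True, confirming joint satisfiability; the second prints False True, i.e. the right-hand equivalence of (1.1) holds while the universally quantified biconditional on the left fails.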
|
2308.03762#34
|
2308.03762#36
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#36
|
GPT-4 Can't Reason
|
• (forall x . P(x) <==> Q(x)) implies (forall x . P(x)) <==> (forall x . Q(x)) Assume that for all x, P(x) <==> Q(x) holds. Direction 1: If forall x, P(x) holds, we need to show that forall x, Q(x) holds. Since P(x) <==> Q(x) for all x, if P(x) holds for all x, then Q(x) must also hold for all x, as they are equivalent.
|
2308.03762#35
|
2308.03762#37
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#37
|
GPT-4 Can't Reason
|
Direction 2: If forall x, Q(x) holds, we need to show that forall x, P(x) holds. Since P(x) <==> Q(x) for all x, if Q(x) holds for all x, then P(x) must also hold for all x, as they are equivalent. Thus, we have shown that (forall x . P(x)) <==> (forall x . Q(x)) holds when (forall x . P(x) <==> Q(x)) holds. • (forall x . P(x)) <==> (forall x . Q(x)) implies (forall x . P(x) <==> Q(x)) Assume that (forall x . P(x)) <==> (forall x . Q(x)) holds. We need to show that for all x, P(x) <==> Q(x). Let x be an arbitrary element from the domain of discourse. We will consider the two directions of the biconditional P(x) <==> Q(x): Direction 1: If P(x) holds, we need to show that Q(x) holds. If P(x) holds for our arbitrary x, then it must be the case that forall x, P(x) holds. Since (forall x . P(x)) <==> (forall x . Q(x)) holds, it follows that forall x, Q(x) holds. Therefore, Q(x) must hold for our arbitrary x as well.
|
2308.03762#36
|
2308.03762#38
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#38
|
GPT-4 Can't Reason
|
Direction 2: If Q(x) holds, we need to show that P(x) holds. If Q(x) holds for our arbitrary x, then it must be the case that forall x, Q(x) holds. Since (forall x . P(x)) <==> (forall x . Q(x)) holds, it follows that forall x, P(x) holds. Therefore, P(x) must hold for our arbitrary x as well. Thus, we have shown that for all x, P(x) <==> Q(x) holds when (forall x . P(x)) <==> (forall x . Q(x)) holds. In conclusion, we have shown that (forall x . P(x) <==> Q(x)) holds if and only if (forall x . P(x)) <==> (forall x . Q(x)) holds. Notice how pernicious the fallacy is here: "If P(x) holds for our arbitrary x, then it must be the case that forall x, P(x) holds." It sounds plausible, but of course it is quite wrong. KA: Can you elaborate on your claim "If P(x) holds for our arbitrary x, then it must be the case that forall x, P(x) holds"? GPT-4:
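Editorial aside on the quoted fallacy: a concrete check makes plain that a predicate can hold at a particular element without holding universally, so T(a) and ¬(forall x . T(x)) are jointly consistent. A minimal sketch in Python (the two-element domain and the predicate are illustrative assumptions):

```python
# T holds at the particular element 0, yet "forall x . T(x)" fails on the domain {0, 1},
# so T(0) and not-(forall x . T(x)) hold together.
domain = [0, 1]
T = lambda x: x == 0

holds_at_particular = T(0)
holds_universally = all(T(x) for x in domain)

print(holds_at_particular)                            # True
print(holds_universally)                              # False
print(holds_at_particular and not holds_universally)  # True: jointly consistent
```

This is exactly the gap the commentary after GPT-4's second attempt points to.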
|
2308.03762#37
|
2308.03762#39
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#39
|
GPT-4 Can't Reason
|
You are correct to question that claim. It was a mistake in my response. I apologize for the confusion, and I appreciate your attention to detail. Here is the corrected proof. (forall x . P(x) <==> Q(x)) implies (forall x . P(x)) <==> (forall x . Q(x)) Assume that for all x, P(x) <==> Q(x) holds. Direction 1: If forall x, P(x) holds, we need to show that forall x, Q(x) holds. Since P(x) <==> Q(x) for all x, if P(x) holds for all x, then Q(x) must also hold for all x, as they are equivalent.
|
2308.03762#38
|
2308.03762#40
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#40
|
GPT-4 Can't Reason
|
Direction 2: If forall x, Q(x) holds, we need to show that forall x, P(x) holds. Since P(x) <==> Q(x) for all x, if Q(x) holds for all x, then P(x) must also hold for all x, as they are equivalent. Thus, we have shown that (forall x . P(x)) <==> (forall x . Q(x)) holds when (forall x . P(x) <==> Q(x)) holds. (forall x . P(x)) <==> (forall x . Q(x)) implies (forall x . P(x) <==> Q(x)) Assume that (forall x . P(x)) <==> (forall x . Q(x)) holds. We need to show that for all x, P(x) <==> Q(x). Let x be an arbitrary element from the domain of discourse. We will consider the two directions of the biconditional P(x) <==> Q(x):
|
2308.03762#39
|
2308.03762#41
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#41
|
GPT-4 Can't Reason
|
Direction 1: If P(x) holds, we need to show that Q(x) holds. If P(x) holds for our arbitrary x, we cannot directly conclude that forall x, P(x) holds. However, we can proceed as follows: Since (forall x . P(x)) <==> (forall x . Q(x)) holds, either both sides of the biconditional are true or both sides are false. If forall x, P(x) holds, then by the biconditional, forall x, Q(x) must hold as well, and thus Q(x) holds for our arbitrary x. If it is not the case that forall x, P(x) holds, then there must be some x where P(x) does not hold. But, for our arbitrary x, P(x) holds, which means our arbitrary x cannot be one of those instances. In this case, it is not guaranteed that Q(x) holds for our arbitrary x.
|
2308.03762#40
|
2308.03762#42
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#42
|
GPT-4 Can't Reason
|
Direction 2: If Q(x) holds, we need to show that P(x) holds. Similarly, if Q(x) holds for our arbitrary x, we cannot directly conclude that forall x, Q(x) holds. We can use the same approach as in Direction 1. Since (forall x . P(x)) <==> (forall x . Q(x)) holds, either both sides of the biconditional are true or both sides are false. If forall x, Q(x) holds, then by the biconditional, forall x, P(x) must hold as well, and thus P(x) holds for our arbitrary x. If it is not the case that forall x, Q(x) holds, then there must be some x where Q(x) does not hold. But, for our arbitrary x, Q(x) holds, which means our arbitrary x cannot be one of those instances. In this case, it is not guaranteed that P(x) holds for our arbitrary x. As we cannot guarantee that P(x) <==> Q(x) for our arbitrary KA: Please continue. GPT-4: My apologies for the premature ending in my previous response. Let me provide an alternative approach to the proof for the second part. (forall x . P(x)) <==> (forall x . Q(x)) implies (forall x . P(x) <==> Q(x)) Assume that (forall x . P(x)) holds. We need to show that for all x, P(x) <==> Q(x). Let x be an arbitrary element from the domain of discourse. We will consider the two directions of the biconditional P(x) <==> Q(x): Direction 1: If P(x) holds, we need to show that Q(x) holds. We will use proof by contradiction. Assume that Q(x) does not hold for our arbitrary x. Now, since Q(x) does not hold, it cannot be the case that forall x, Q(x) holds. Since (forall x . P(x)) <==> (forall x . Q(x)) holds, this implies that it cannot be the case that forall x, P(x) holds. However, we assumed that P(x) holds for our arbitrary x, which leads to a contradiction. Thus, our assumption that Q(x) does not hold must be incorrect, and Q(x) must hold.
|
2308.03762#41
|
2308.03762#43
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#43
|
GPT-4 Can't Reason
|
Direction 2: If Q(x) holds, we need to show that P(x) holds. We will use proof by contradiction again. Assume that P(x) does not hold for our arbitrary x. Now, since P(x) does not hold, it cannot be the case that forall x, P(x) holds. Since (forall x . P(x)) <==> (forall x . Q(x)) holds, this implies that it cannot be the case that forall x, Q(x) holds. However, we assumed that Q(x) holds for our arbitrary x, which leads to a contradiction. Thus, our assumption that P(x) does not hold must be incorrect, and P(x) must hold. In both directions, we have shown that P(x) <==> Q(x) holds for our arbitrary x. Since x was arbitrary, this holds for all x. Therefore, (forall x . P(x) <==> Q(x)) holds when (forall x . P(x)) <==> (forall x . Q(x)) holds. In conclusion, we have shown that (forall x . P(x) <==> Q(x)) holds if and only if (forall x . P(x)) <==> (forall x . Q(x)) holds. While the reasoning appears to proceed along the right general lines, it is, again, insidiously wrong. The reductio ad absurdum given for the two directions hinges on the fallacy that if a property T is assumed to hold for a particular but arbitrarily chosen x, then we must have ∀x . T(x), leading to the erroneous conclusion that T(x) and ¬∀x . T(x) are jointly inconsistent. It is as if GPT-4 knows what needs to be done but does not know how to do it properly, and instead of saying so, it proceeds to generate a bogus but plausible-looking
|
2308.03762#42
|
2308.03762#44
|
2308.03762
|
[
"2308.03762"
] |
2308.03762#44
|
GPT-4 Can't Reason
|
"proof." KA: Please clarify your claim that "However, we assumed that P(x) holds for our arbitrary x, which leads to a contradiction". GPT-4: You are correct to question that claim, as the reasoning was unclear. I apologize for the confusion. Let me provide a clearer explanation for the second part of the proof. (forall x . P(x)) <==> (forall x . Q(x)) implies (forall x . P(x) <==> Q(x)) Assume that (forall x . P(x)) <==> (forall x . Q(x)) holds. We need to show that for all x, P(x) <==> Q(x). Let x be an arbitrary element from the domain of discourse. We will consider the two directions of the biconditional P(x) <==> Q(x): Direction 1: If P(x) holds, we need to show that Q(x) holds. If P(x) holds for our arbitrary x, we can make the following argument: If it is not the case that forall x, P(x) holds, then there must exist some x for which P(x) does not hold. However, if forall x, P(x) does hold, then by the given biconditional (forall x . P(x)) <==> (forall x . Q(x)) forall x, Q(x) must also hold. Since P(x) holds for our arbitrary x, either it holds for all x or there exists some other x where P(x) does not hold. In either case, we can't deduce anything about Q(x) for our arbitrary x. Direction 2: If Q(x) holds, we need to show that P(x) holds. If Q(x) holds for our arbitrary x, we can make a similar argument as in Direction 1: If it is not the case that forall x, Q(x) holds, then there must exist some x for which Q(x) does not hold. However, if forall x, Q(x) does hold, then by the given biconditional (forall x . P(x)) <==> (forall x . Q(x)), forall x, P(x) must also hold. Since Q(x) holds for our arbitrary x, either it holds for all x or there exists some other x where Q(x) does not hold. In either case, we can'
|
2308.03762#43
|
2308.03762#45
|
2308.03762
|
[
"2308.03762"
] |