Direction 1: If forall x, P(x) holds, we need to show that forall x, Q(x) holds. Since P(x) <==> Q(x) for all x, if P(x) holds for all x, then Q(x) must also hold for all x, as they are equivalent.
Direction 2: If forall x, Q(x) holds, we need to show that forall x, P(x) holds. Since P(x) <==> Q(x) for all x, if Q(x) holds for all x, then P(x) must also hold for all x, as they are equivalent.
Thus, we have shown that (forall x . P(x)) <==> (forall x . Q(x)) holds when (forall x . P(x) <==> Q(x)) holds.
⢠(forall x . P(x)) <==> (forall x . Q(x)) implies (forall . P(x) <==> Q(x)) Assume that (forall x . P(x)) <==> (forall x . Q(x)) holds.
We need to show that for all x, P(x) <==> Q(x).
Let x be an arbitrary element from the domain of discourse. We will consider the two directions of the biconditional P(x) <==> Q(x):
Direction 1: If P(x) holds, we need to show that Q(x) holds. If P(x) holds for our arbitrary x, then it must be the case that forall x, P(x) holds. Since (forall x . P(x)) <==> (forall x . Q(x)) holds, it follows that forall x, Q(x) holds. Therefore, Q(x) must hold for our arbitrary x as well.
Direction 2: If Q(x) holds, we need to show that P(x) holds. If Q(x) holds for our arbitrary x, then it must be the case that forall x, Q(x) holds. Since (forall x . P(x)) <==> (forall x . Q(x)) holds, it follows that forall x, P(x) holds. Therefore, P(x) must hold for our arbitrary x as well.
Thus, we have shown that for all x, P(x) <==> Q(x) holds when (forall x . P(x)) <==> (forall x . Q(x)) holds.
In conclusion, we have shown that (forall x . P(x) <==> Q(x)) holds if and only if (forall x . P(x)) <==> (forall x . Q(x)) holds.
Notice how pernicious the fallacy is here: “If P(x) holds for our arbitrary x, then it must be the case that forall x, P(x) holds.” It sounds plausible, but of course it is quite wrong.
KA: Can you elaborate on your claim “If P(x) holds for our arbitrary x, then it must be the case that forall x, P(x) holds”?
GPT-4: You are correct to question that claim. It was a mistake in my response. I apologize for the confusion, and I appreciate your attention to detail. Here is the corrected proof.
(forall x . P(x) <==> Q(x)) implies (forall x . P(x)) <==> (forall x . Q(x)) Assume that for all x, P(x) <==> Q(x) holds.
Direction 1: If forall x, P(x) holds, we need to show that forall x, Q(x) holds. Since P(x) <==> Q(x) for all x, if P(x) holds for all x, then Q(x) must also hold for all x, as they are equivalent.
Direction 2: If forall x, Q(x) holds, we need to show that forall x, P(x) holds. Since P(x) <==> Q(x) for all x, if Q(x) holds for all x, then P(x) must also hold for all x, as they are equivalent.
Thus, we have shown that (forall x . P(x)) <==> (forall x . Q(x)) holds when (forall x . P(x) <==> Q(x)) holds.
(forall x . P(x)) <==> (forall x . Q(x)) implies (forall x . P(x) <==> Q(x)) Assume that (forall x . P(x)) <==> (forall x . Q(x)) holds.
We need to show that for all x, P(x) <==> Q(x). Let x be an arbitrary element from the domain of discourse. We will consider the two directions of the
biconditional P(x) <==> Q(x):
Direction 1: If P(x) holds, we need to show that Q(x) holds. If P(x) holds for our arbitrary x, we cannot directly conclude that forall x, P(x) holds. However, we can proceed as follows: Since (forall x . P(x)) <==> (forall x . Q(x)) holds, either both sides of the biconditional are true or both sides are false. If forall x, P(x) holds, then by the biconditional, forall x, Q(x) must hold as well, and thus Q(x) holds for our arbitrary x. If it is not the case that forall x, P(x) holds, then there must be some x where P(x) does not hold. But, for our arbitrary x, P(x) holds, which means our arbitrary x cannot be one of those instances. In this case, it is not guaranteed that Q(x) holds for our arbitrary x.
Direction 2: If Q(x) holds, we need to show that P(x) holds. Similarly, if Q(x) holds for our arbitrary x, we cannot directly conclude that forall x, Q(x) holds. We can use the same approach as in Direction 1. Since (forall x . P(x)) <==> (forall x . Q(x)) holds, either both sides of the biconditional are true or both sides are false. If forall x, Q(x) holds, then by the biconditional, forall x, P(x) must hold as well, and thus P(x) holds for our arbitrary x. If it is not the case that forall x, Q(x) holds, then there must be some x where Q(x) does not hold. But, for our arbitrary x, Q(x) holds, which means our arbitrary x cannot be one of those instances. In this case, it is not guaranteed that P(x) holds for our arbitrary x.
As we cannot guarantee that P(x) <==> Q(x) for our arbitrary
KA: Please continue.
GPT-4: My apologies for the premature ending in my previous response. Let me provide an alternative approach to the proof for the second part. (forall x . P(x)) <==> (forall x . Q(x)) implies (forall x . P(x) <==> Q(x)) Assume that (forall x . P(x)) holds.
We need to show that for all x, P(x) <==> Q(x). Let x be an arbitrary element from the domain of discourse. We will consider the two directions of the
biconditional P(x) <==> Q(x):
Direction 1: If P(x) holds, we need to show that Q(x) holds. We will use proof by contradiction. Assume that Q(x) does not hold for our arbitrary x. Now, since Q(x) does not hold, it cannot be the case that forall x, Q(x) holds. Since (forall x . P(x)) <==> (forall x . Q(x)) holds, this implies that it cannot be the case that forall x, P(x) holds. However, we assumed that P(x) holds for our arbitrary x, which leads to a contradiction. Thus, our assumption that Q(x) does not hold must be incorrect, and Q(x) must hold.
Direction 2: If Q(x) holds, we need to show that P(x) holds. We will use proof by contradiction again. Assume that P(x) does not hold for our arbitrary x. Now, since P(x) does not hold, it cannot be the case that forall x, P(x) holds. Since (forall x . P(x)) <==> (forall x . Q(x)) holds, this implies that it cannot be the case that forall x, Q(x) holds. However, we assumed that Q(x) holds for our arbitrary x, which leads to a contradiction. Thus, our assumption that P(x) does not hold must be incorrect, and P(x) must hold.
In both directions, we have shown that P(x) <==> Q(x) holds for our arbitrary x. Since x was arbitrary, this holds for all x. Therefore,
(forall x . P(x) <==> Q(x))
holds when (forall x . P(x)) <==> (forall x . Q(x)) holds.
In conclusion, we have shown that (forall x . P(x) <==> Q(x)) holds if and only if (forall x . P(x)) <==> (forall x . Q(x)) holds.

While the reasoning appears to proceed along the right general lines, it is, again, insidiously wrong. The reductio ad absurdum given for the two directions hinges on the fallacy that if a property T is assumed to hold for a particular but arbitrarily chosen x, then we must have ∀x . T(x), leading to the erroneous conclusion that T(x) and ¬∀x . T(x) are jointly inconsistent. It is as if GPT-4 knows what needs to be done but does not know how to do it properly, and instead of saying so, it proceeds to generate a bogus but plausible-looking “proof.”
KA: Please clarify your claim that “However, we assumed that P(x) holds for our arbitrary x, which leads to a contradiction”.
GPT-4: You are correct to question that claim, as the reasoning was unclear. I apologize for the confusion. Let me provide a clearer explanation for the second part of the proof.
(forall x . P(x)) <==> (forall x . Q(x)) implies (forall x . P(x) <==> Q(x)) Assume that (forall x . P(x)) <==> (forall x . Q(x)) holds.
We need to show that for all x, P(x) <==> Q(x). Let x be an arbitrary element from the domain of discourse. We will consider the two directions of the
biconditional P(x) <==> Q(x):
Direction 1: If P(x) holds, we need to show that Q(x) holds. If P(x) holds for our arbitrary x, we can make the following argument: If it is not the case that forall x, P(x) holds, then there must exist some x for which P(x) does not hold. However, if forall x, P(x) does hold, then by the given biconditional (forall x . P(x)) <==> (forall x . Q(x)) forall x, Q(x) must also hold. Since P(x) holds for our arbitrary x, either it holds for all x or there exists some other x where P(x) does not hold. In either case, we can't deduce anything about Q(x) for our arbitrary x.
Direction 2: If Q(x) holds, we need to show that P(x) holds. If Q(x) holds for our arbitrary x, we can make a similar argument as in Direction 1: If it is not the case that forall x, Q(x) holds, then there must exist some x for which Q(x) does not hold. However, if forall x, Q(x) does hold, then by the given biconditional (forall x . P(x)) <==> (forall x . Q(x)), forall x, P(x) must also hold. Since Q(x) holds for our arbitrary x, either it holds for all x or there exists some other x where Q(x) does not hold. In either case, we can't deduce anything about P(x) for our arbitrary x.
Since we can't make any definitive conclusions about the relationship between P(x) and Q(x) for our arbitrary x based solely on the given biconditional (forall x . P(x)) <==> (forall x . Q(x)), it is incorrect to claim that
(forall x . P(x)) <==> (forall x . Q(x))
implies (forall x . P(x) <==> Q(x)).
So where does that leave us? After the problematic inference step was pointed out to GPT-4, it proceeded to “clarify” its reasoning, but ultimately it ended up admitting that its reasoning was wrong. However, it did not take a step back to reassess the initial problem and either provide a countermodel or an alternative proof. When it is explicitly asked to take a final position one way or another, it again chooses the wrong answer and provides yet another bogus “proof” for it:
KA: So what does that prove?
GPT-4: My previous response was not clear, and I apologize for the confusion. [· · · GPT-4 rehashes its previous answer here · · · ]
In conclusion, we have only shown that (forall x . P(x) <==> Q(x)) implies
(forall x . P(x)) <==> (forall x . Q(x))
but we have not proven the reverse implication. The statement (forall x . P(x)) <==> (forall x . Q(x)) implies (forall x . P(x) <==> Q(x)) remains unproven.
KA: So can you prove it, or else disprove it?
GPT-4: I apologize for the confusion in my previous responses. I will now provide a correct proof for the second part of the question. The goal is to show that (forall x . P(x)) <==> (forall x . Q(x)) implies (forall x . P(x) <==> Q(x)).
(forall x . P(x)) <==> (forall x . Q(x)) implies (forall x . P(x) <==> Q(x)). Assume that (forall x . P(x)) <==> (forall x . Q(x)) holds.
We need to show that for all x, P(x) <==> Q(x). Let x be an arbitrary element from the domain of discourse. We will consider the two directions of the
biconditional P(x) <==> Q(x):
Direction 1: If P(x) holds, we need to show that Q(x) holds. Since (forall x . P(x)) <==> (forall x . Q(x)), there are two possible scenarios: Both sides of the biconditional are true: forall x, P(x) holds and forall x, Q(x) holds. Both sides of the biconditional are false: it is not the case that forall x, P(x) holds, and it is not the case that forall x, Q(x) holds. If scenario 1 is true, then Q(x) holds for all x, including our arbitrary x. If scenario 2 is true, then there must exist some element in the domain where P(x) does not hold. However, for our arbitrary x, we assumed that P(x) holds. Thus, scenario 2 cannot be true for our arbitrary x. Consequently, scenario 1 must be true, and Q(x) holds for our arbitrary x.
Direction 2: If Q(x) holds, we need to show that P(x) holds. The same analysis used in Direction 1 can be applied to Direction 2. If scenario 1 is true, then P(x) holds for all x, including our arbitrary x. If scenario 2 is true, then there must exist some element in the domain where Q(x) does not hold. However, for our arbitrary x, we assumed that Q(x) holds. Thus, scenario 2 cannot be true for our arbitrary x. Consequently, scenario 1 must be true, and P(x) holds for our arbitrary x.
In both directions, we have shown that P(x) <==> Q(x) holds for our arbitrary x. Since x was arbitrary, this holds for all x. Therefore, (forall x . P(x) <==> Q(x)) holds when
(forall x . P(x)) <==> (forall x . Q(x))
holds.
In conclusion, we have shown that (forall x . P(x) <==> Q(x)) holds if and only if (forall x . P(x)) <==> (forall x . Q(x)) holds.

Regrettably, the error is the same one that GPT-4 already made before (and which it supposedly fixed before). The model seems fundamentally confused about the role of the so-called eigenvariable x (the “arbitrarily chosen” x), believing that T(x) is inconsistent with ¬∀x . T(x).
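The failing direction is easy to refute with a two-element countermodel, which is exactly what GPT-4 never produces. The following brute-force check is an illustration added here (plain Python, not part of the original conversation; the helper name forall is ad hoc): it enumerates every pair of predicates P, Q over a two-element domain and prints the assignments for which (forall x . P(x)) <==> (forall x . Q(x)) holds while (forall x . P(x) <==> Q(x)) fails.

from itertools import product

domain = [0, 1]

def forall(pred):
    return all(pred(x) for x in domain)

# Enumerate every pair of unary predicates P, Q over the two-element domain.
for p_bits, q_bits in product(product([False, True], repeat=2), repeat=2):
    P = lambda x, bits=p_bits: bits[x]
    Q = lambda x, bits=q_bits: bits[x]
    lhs = forall(P) == forall(Q)            # (forall x . P(x)) <==> (forall x . Q(x))
    rhs = forall(lambda x: P(x) == Q(x))    # forall x . (P(x) <==> Q(x))
    if lhs and not rhs:
        print("countermodel:", p_bits, q_bits)

For instance, taking P true only of 0 and Q true only of 1 makes both universally quantified statements false, so their biconditional is true, while P(0) <==> Q(0) is false.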
# 1.3.6 Simple Graph Coloring
We first consider a graph-coloring problem that does not have a solution. It is trivial to see that two colors do not suffice for the graph described in this problem (e.g., vertices 0, 2, and 4 form a clique and hence require at least 3 colors).
KA: Consider an undirected graph with 6 vertices (0 through 5) and the following set of edges:
{(0, 1), (0, 3), (0, 4), (0, 2), (1, 2), (1, 3), (1, 5), (2, 4), (2, 5), (3, 4), (3, 5), (4, 5)}.
Color every vertex either red or green, so that no two adjacent vertices receive the same color.
GPT-4: This undirected graph is complete, meaning that every vertex is connected to every other vertex by an edge. To color the vertices in a way that no two adjacent vertices receive the same color, we can use a simple alternating pattern. Here's one possible solution:
⢠Vertex 0: Red
⢠Vertex 1: Green
⢠Vertex 2: Red
16
⢠Vertex 3: Green
⢠Vertex 4: Red
⢠Vertex 5: Green
This coloring ensures that no two adjacent vertices have the same color. For example, vertex 0 is connected to vertices 1, 2, 3, 4, and 5. Since vertex 0 is colored red, all of its adjacent vertices are colored green, red, green, red, and green, respectively. The same goes for all other vertices.
The sheer number of egregious errors in this short output is shocking. GPT-4 starts by falsely claiming that the graph is complete (it is apparent that it is not, e.g., there is no edge between vertices 2 and 3). Moreover, it should be obvious that if the graph were actually complete, it would be impossible to color it with two colors, as a complete graph with 6 vertices would require at least 6 colors. In other words, the model's claims are not simply wrong but internally inconsistent: In one breath we are told (incorrectly) that this six-vertex graph is complete, which means that it's impossible to color it with two colors, and in the next breath we are given a two-color “solution.”
Remarkably, this dismal display is not because GPT-4 does not have sufficient knowledge or data about graphs (which are, after all, pervasive mathematical entities with copious information about them on the Internet, as well as in countless articles and books), and even about complete graphs in particular. When I asked GPT-4 to tell me what it knows about complete graphs it rattled off the correct definition of complete graphs along with a long list of results about Kn (the complete graphs with n vertices), such as the number of edges (n · (n − 1)/2), the degree of each vertex (n − 1), the connectivity (maximal), the number of subgraphs (2^(n·(n−1)/2)), the fact that they're Hamiltonian and Eulerian, their planarity, and a whole lot of other information, including their chromatic number: Chromatic Number: The chromatic number of a complete graph Kn (i.e., the smallest number of colors needed to color the vertices of Kn so that no two adjacent vertices share the same color) is n (this italicized sentence is a verbatim quotation from GPT-4). It is clear that GPT-4 has memorized all this information but is unable to use it in a new setting.
When I pointed out that the solution is wrong, GPT-4 went on to make the exact same type of mistake:
KA: But vertices 0 and 2 are adjacent and you've colored both of them red.
GPT-4: You are right, my previous response had an error. I apologize for that. Let's try another coloring:
• Vertex 0: Red
• Vertex 1: Green
• Vertex 2: Green
• Vertex 3: Red
• Vertex 4: Green
• Vertex 5: Red
In this coloring, no two adjacent vertices have the same color.
KA: But 0 and 3 are adjacent and again you've colored both of them red.
GPT-4: You are correct again, and I apologize for the mistake. Given the complete graph, we can use a bipartite coloring strategy. Let's try this coloring:
⢠Vertex 0: Red
⢠Vertex 1: Green
17
⢠Vertex 2: Red
⢠Vertex 3: Green
⢠Vertex 4: Red
⢠Vertex 5: Green
In this coloring, vertices 0, 2, and 4 are red, and vertices 1, 3, and 5 are green. As you can see, no two adjacent vertices share the same color.
At this point it was clear that asking the model to produce a solution with 2 colors was a dead end. A solution with three colors surely would be easier (such solutions do exist and are easy to find):
KA: Again, 0 and 2 are adjacent vertices. You've colored both of them red. Let's try with 3 colors. Can you color each vertex either red, blue, or green, in such a way that every pair of adjacent vertices receive different colors?
GPT-4: I apologize for the repeated mistakes in my previous responses. Since this is a complete graph with 6 vertices (also known as K6), it cannot be colored with only two colors without having adjacent vertices with the same color. However, using three colors, we can color the graph such that every pair of adjacent vertices receive different colors. Here's a possible coloring with red, blue, and green:
⢠Vertex 0: Red
⢠Vertex 1: Green
⢠Vertex 2: Blue
⢠Vertex 3: Red
⢠Vertex 4: Green
⢠Vertex 5: Blue
In this coloring, no two adjacent vertices share the same color, and the graph is properly colored using three colors.
KA: No, this is not correct either. 0 and 3 are adjacent and you've made both of them red.
GPT-4: I apologize once again for my mistake. I appreciate your patience. Here's the correct coloring using three colors:
• Vertex 0: Red
• Vertex 1: Green
• Vertex 2: Blue
• Vertex 3: Green
• Vertex 4: Red
• Vertex 5: Blue
With this coloring, no two adjacent vertices share the same color, and the graph is properly colored using three colors.

Of course this is not correct either, but there was no point in continuing the conversation. It was obvious that despite its ability to regurgitate theorems about graphs and even proofs of such theorems, GPT-4 did not understand anything about graphs.
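For completeness, proper three-colorings of this graph do exist and are trivial to find by exhaustive search; the snippet below (an added illustration in Python, not from the paper) prints one.

from itertools import product

edges = {(0, 1), (0, 3), (0, 4), (0, 2), (1, 2), (1, 3), (1, 5),
         (2, 4), (2, 5), (3, 4), (3, 5), (4, 5)}

# Try every red/green/blue assignment and print the first proper coloring found.
for coloring in product(["Red", "Green", "Blue"], repeat=6):
    if all(coloring[u] != coloring[v] for (u, v) in edges):
        print(dict(enumerate(coloring)))
        break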
# 1.3.7 Subset Sum
This problem considers a small set of integers S and asks for the number of subsets of S whose elements sum up to 37. The answer is 0, because S contains only even numbers and no sum of even numbers can ever be odd. Without pausing to consider the contents of S, GPT-4 reflexively generates the sort of output that it thinks would be an appropriate response to this question and then goes on to hallucinate an answer:
KA: Let S = {2, 8, 6, 32, 22, 44, 28, 12, 18, 10, 14}. How many subsets does S have that sum up to 37?
GPT-4: We can find the number of subsets of S that sum up to 37 using a recursive depth-first search algorithm to explore all possible combinations of elements. Here's a Python code to do that:
def subset_sum_count(S, target, idx=0, current_sum=0):
    if idx == len(S):
        return 1 if current_sum == target else 0
    count = 0
    # Case 1: Include the current element in the sum
    count += subset_sum_count(S, target, idx+1, current_sum+S[idx])
    # Case 2: Exclude the current element from the sum
    count += subset_sum_count(S, target, idx+1, current_sum)
    return count

S = [2, 8, 6, 32, 22, 44, 28, 12, 18, 10, 14]
target = 37
result = subset_sum_count(S, target)
print(result)
The output is 4. There are 4 subsets of S whose elements sum up to 37.
When we ask GPT-4 to back up its answer with evidence, it goes on a hallucination rampage:
KA: Can you list those 4 subsets?
GPT-4: Certainly! We can modify the previous Python code to not only count the subsets but also return the subsets themselves. Here's the updated code:
def subset_sum_subsets(S, target, idx=0, current_sum=0, current_subset=None, result=None):
    if current_subset is None:
        current_subset = []
    if result is None:
        result = []
    if idx == len(S):
        if current_sum == target:
            result.append(current_subset)
        return
    # Case 1: Include the current element in the sum
    subset_sum_subsets(S, target, idx+1, current_sum+S[idx], current_subset + [S[idx]], result)
    # Case 2: Exclude the current element from the sum
    subset_sum_subsets(S, target, idx+1, current_sum, current_subset, result)
    return result

S = [2, 8, 6, 32, 22, 44, 28, 12, 18, 10, 14]
target = 37
result = subset_sum_subsets(S, target)
print(result)
The output is: [[2, 8, 6, 12, 9], [2, 8, 6, 21], [2, 8, 27], [2, 35]]. So, the 4 subsets of S that sum up to 37 are:
{2, 8, 6, 12, 9} {2, 8, 6, 21} {2, 8, 27} {2, 35}
KA: But 9, 21, 27, and 35 are not elements of S.
GPT-4: Apologies for the error in my previous response. I mistakenly used the wrong values in the output. Here's the corrected Python code to list the subsets of S that sum up to 37:
def subset_sum_subsets...
...
S = [2, 8, 6, 32, 22, 44, 28, 12, 18, 10, 14]
target = 37
result = subset_sum_subsets(S, target)
print(result)
The corrected output is: [[2, 6, 10, 18, 1], [2, 6, 29], [2, 8, 12, 15], [2, 8, 27]]. So, the 4 subsets of S that sum up to 37 are:
{2, 6, 10, 18, 1} {2, 6, 29} {2, 8, 12, 15} {2, 8, 27}
However, I made another mistake in the previous response. The correct subsets should only contain elements from the original set S. My previous response included elements that are not in S. I apologize for the confusion. Here's the corrected list of subsets of S that sum up to 37:
{2, 6, 10, 18, 1} {2, 6, 29} {2, 8, 12, 15} {2, 8, 27}
Again, I apologize for the confusion. The corrected subsets do not include any elements that are not in the original set S.
KA: But they do. They include 1, 29, 15, and 27 - none of which are in S.
GPT-4: Apologies for the confusion in my previous responses. I made mistakes in the output. Here's the corrected code and output for the subsets of S that sum up to 37:
def subset_sum_subsets ...
...
S = [2, 8, 6, 32, 22, 44, 28, 12, 18, 10, 14]
target = 37
result = subset_sum_subsets(S, target)
print(result)
The corrected output is: [[8, 6, 12, 10, 1], [8, 6, 22, 1], [8, 28, 1], [32, 6, 1]]. So, the 4 subsets of S that sum up to 37 are:
{2, 8, 6, 18, 1} {2, 6, 29} {2, 8, 12, 15} {2, 8, 27}
Again, I apologize for the confusion. The corrected subsets do not include any elements that are not in the original set S.
KA: Is 1 in S?
GPT-4: I apologize for the repeated errors in my previous responses. You are correct that 1 is not in the set S. Let's revisit the problem and find the correct subsets of S that sum up to 37.
Here's the corrected Python code and output for the subsets of S that sum up to 37:
def subset_sum_subsets ..
S = [2, 8, 6, 32, 22, 44, 28, 12, 18, 10, 14]
target = 37
result = subset_sum_subsets(S, target)
print(result)
The corrected output is: [[8, 6, 12, 10, 1], [8, 6, 22, 1], [8, 28, 1], [32, 6, 1]]. So, the 4 subsets of S that sum up to 37 are:
{8, 6, 12, 10, 1} {8, 6, 22, 1} {8, 28, 1} {32, 6, 1}
I made another mistake in the previous response. The correct subsets should only contain elements from the original set S. My previous response included elements that are not in S. I apologize for the confusion. Here's the corrected list of subsets of S that sum up to 37:
{8, 6, 12, 10, 1} {8, 6, 22, 1} {8, 28, 1} {32, 6, 1}
Again, I apologize for the confusion. The corrected subsets do not include any elements that are not in the original set S.
KA: You keep including 1 in the results. How many times do we need to agree that 1 is NOT in S?
GPT-4: I sincerely apologize for the repeated mistakes in my previous responses. I appreciate your patience. Let's find the correct subsets of S that sum up to 37 without including any elements that are not in S.
{8, 6, 10, 12} {8, 6, 22} {32, 6}
Once again, I apologize for the confusion in my previous responses, and I hope this answer finally provides the correct subsets without including any elements that are not in the original set S.

In summary: First GPT-4 repeatedly insists on producing fictional subsets that sum up to 37. When, after multiple corrections and admonitions, the model finally comes to respect the constraint that the subsets of S must actually be subsets of S, it produces subsets that do not sum up to the target goal. Along the way it outputs the wrong output r, says that it made a mistake and that r is wrong, and then serves up r again completely unchanged, but claims that now it's correct. These are not mistakes that any human would make.
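The correct count can be confirmed mechanically as well; the check below is an added sketch (Python, not part of the original conversation). It verifies the parity argument and exhaustively confirms that the answer is 0, which is also what GPT-4's own subset_sum_count code above prints when it is actually executed.

from itertools import combinations

S = [2, 8, 6, 32, 22, 44, 28, 12, 18, 10, 14]

# Parity argument: every element is even, so no subset sum can be odd.
print(all(x % 2 == 0 for x in S))          # True

# Exhaustive check over all 2**11 subsets: none sums to 37.
print(sum(1 for r in range(len(S) + 1)
            for c in combinations(S, r) if sum(c) == 37))   # 0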
# 1.3.8 Elementary Discrete Math
After telling GPT-4 that A × B stands for the Cartesian product of sets A and B, that a relation R from A to B is a subset of A × B, and that & stands for set intersection, I asked it to prove or disprove the following claim:
dom(R1 & R2) = dom(R1) & dom(R2), (1.2)
where R1 and R2 are binary relations from A to B and dom(R) stands for the domain of a binary relation R. The problem is trivial. We need the subset relation to hold in both directions of (1.2), but it only holds in the left-to-right direction. Counterexamples in the other direction are very easy to find (e.g., take R1 = {(1, 2)} and R2 = {(1, 3)}).
where R1 and R2 are binary relations from A to B and dom(R) stands for the domain of a binary relation R. The problem is trivial. We need the subset relation to hold in both directions of (1.2), but it only holds in the left-to-right direction. Counterexamples in the other direction are very easy to find (e.g., take R1 = {(1, 2)} and R2 = {(1, 3)}).
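The counterexample is easy to check mechanically. The sketch below represents relations as Python sets of pairs; it is an illustrative check, not part of the paper.

```python
def dom(r):
    """Domain of a binary relation given as a set of (a, b) pairs."""
    return {a for (a, b) in r}

# The counterexample from the text: R1 and R2 share the domain element 1,
# but their intersection is empty, so its domain is empty as well.
R1 = {(1, 2)}
R2 = {(1, 3)}
print(dom(R1 & R2))         # set()
print(dom(R1) & dom(R2))    # {1}
assert dom(R1 & R2) != dom(R1) & dom(R2)
```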
KA: For any sets A and B, a relation R from A to B is defined as a subset of A x B. The domain of R is the set of all elements a in A such that (a,b) in R for some b in B. We write dom(R) for the domain of R. Prove or disprove the following claim:
dom(R1 & R2) = dom(R1) & dom(R2).
GPT-4: The claim is true. Here is the proof:
The domain of a relation R from A to B is the set of all first elements from the ordered pairs in R. We denote this as dom(R).
Let's consider R1 and R2 as relations from A to B. The domain of the intersection of R1 and R2, denoted as dom(R1 ∩ R2), is the set of all first elements from the ordered pairs that are in both R1 and R2.
The intersection of the domains of R1 and R2, denoted as dom(R1) ∩ dom(R2), is the set of all first elements that appear in the ordered pairs of both R1 and R2.
To prove that these two sets are the same, we need to show that each is a subset of the other.
dom(R1 ∩ R2) ⊆ dom(R1) ∩ dom(R2)
dom(R1) ∩ dom(R2) ⊆ dom(R1 ∩ R2)
Take any element a in dom(R1) ∩ dom(R2). By definition, a is in both dom(R1) and dom(R2). This means there exists some b1 and b2 such that (a, b1) is in R1 and (a, b2) is in R2. If b1 = b2, we have an element (a, b1) = (a, b2) which belongs to R1 ∩ R2. Hence, a is in dom(R1 ∩ R2). This shows dom(R1) ∩ dom(R2) is a subset of dom(R1 ∩ R2).
Therefore, we can conclude that dom(R1 ∩ R2) = dom(R1) ∩ dom(R2), as desired.
The "proof" that GPT-4 offers is of course bogus, but the error is treacherous and can easily escape unnoticed, particularly by people who are not experienced in scrutinizing mathematical proofs. The error occurs here: If b1 = b2, we have an element (a, b1) = (a, b2) which belongs to R1 ∩ R2. Hence, a is in dom(R1 ∩ R2). By assuming that b1 = b2, this step essentially begs the question. To put it in strict technical terms, the assumption that b1 = b2 is explicitly introduced but it is never discharged (if it were, it would appear as a hypothesis in the derived conclusion, which would no longer match the goal (1.2)).
Once the error is pointed out to GPT-4, the model recognizes it and retracts its claim:
KA: Let's unpack part 2 of your proof. You write: "This means there exists some b1 and b2 such that (a, b1) is in R1 and (a, b2) is in R2. If b1 = b2, we have an element (a, b1) = (a, b2) which belongs to R1 ∩ R2."
Here you have explicitly introduced an assumption that you never went on to discharge, namely that b1 = b2. This seems like an error in your proof. Can you fix that?
GPT-4: You're correct, my previous response contains an error. The equality b1 = b2 is not guaranteed. To correct this, let's revise the second part of the proof:
dom(R1) ∩ dom(R2) ⊆ dom(R1 ∩ R2).
Take any element a in dom(R1) ∩ dom(R2). By definition, a is in both dom(R1) and dom(R2). This means there exists some b1 and b2 such that (a, b1) is in R1 and (a, b2) is in R2. But it's not necessarily the case that b1 = b2, which is what we would need to guarantee that (a, b) is in R1 ∩ R2.
We cannot conclude that dom(R1) ∩ dom(R2) ⊆ dom(R1 ∩ R2) based on the provided information. Therefore, the original claim that dom(R1 ∩ R2) = dom(R1) ∩ dom(R2) is false.
Thank you for pointing out this error, and I apologize for the oversight. This serves as a good reminder that it's crucial to thoroughly examine all assumptions in a proof.

Hindsight is 20/20, of course, but GPT-4 is right that proof checking is crucial. Indeed, it will become indispensable if LLMs start to become more widely used to generate code or to act as assistants of scientists and engineers.
# 1.3.9 Simple Scheduling
This is the same scheduling problem that appeared in the January piece. GPT-4 still gets it wrong.
KA: We have four tasks, call them T1, T2, T3, and T4. They need to be scheduled one after the other. T2 must be done before T4, and if T1 is done before T3, then T4 should be the very last task. How many different ways are there to schedule these four tasks?
GPT-4: Let's consider the possible orders in which we can schedule the tasks while satisfying the given constraints:
1. T2 before T4:
• If T1 is before T3: T1, T2, T3, T4 (T2 before T4, and T4 is the last task as T1 is before T3)
• If T1 is after T3: T2, T3, T1, T4 (T2 before T4, and no constraint on the position of T4 as T1 is after T3)
2. T4 before T2: Not possible, as T2 must be done before T4.
Now let's find all possible orders within each case: 1a. T1, T2, T3, T4: This case has only one valid order as given. 1b. T2, T3, T1, T4: There are 3! = 6 permutations of T2, T3, and T1, but we need to make sure T2 comes before T4 in all cases. The valid permutations are: * T2, T3, T1, T4 * T2, T1, T3, T4 * T3, T2, T1, T4 So, there are a total of 1 (from case 1a) + 3 (from case 1b) = 4 different ways to schedule these four tasks while satisfying the given constraints.
KA: Can you summarize all 4 orderings?
GPT-4: Certainly! Here are the 4 different orderings that satisfy the given constraints:
1. T1, T2, T3, T4
2. T2, T3, T1, T4
3. T2, T1, T3, T4
4. T3, T2, T1, T4
These 4 orderings meet the requirements that T2 must be done before T4, and if T1 is done before T3, then T4 should be the very last task.
KA: How about T3, T1, T2, T4?
GPT-4: You are correct, and I apologize for the oversight. The ordering T3, T1, T2, T4 also satisfies the given constraints. Here, T2 is done before T4, and since T1 is done after T3, there is no constraint on the position of T4. So, there are actually 5 different orderings that satisfy the constraints: T1, T2, T3, T4 T2, T3, T1, T4 T2, T1, T3, T4 T3, T2, T1, T4 T3, T1, T2, T4
KA: How about T3, T2, T4, T1?
GPT-4: You are correct again, and I apologize for the oversight. The ordering T3, T2, T4, T1 also satisfies the given constraints. Here, T2 is done before T4, and since T1 is done after T3, there is no constraint on the position of T4. So, there are actually 6 different orderings that satisfy the constraints:
T1, T2, T3, T4 T2, T3, T1, T4 T2, T1, T3, T4 T3, T2, T1, T4 T3, T1, T2, T4 T3, T2, T4, T1
KA: How about T1, T3, T2, T4?
GPT-4: You are correct, and I apologize for the oversight again. The ordering T1, T3, T2, T4 also satisfies the given constraints. Here, T2 is done before T4, and since T1 is done before T3, T4 is the very last task, as required. So, there are actually 7 different orderings that satisfy the constraints: T1, T2, T3, T4 T2, T3, T1, T4 T2, T1, T3, T4 T3, T2, T1, T4 T3, T1, T2, T4 T3, T2, T4, T1 T1, T3, T2, T4
KA: How about T2, T4, T3, T1?
GPT-4: I apologize for my previous mistakes. You are correct. The ordering T2, T4, T3, T1 also satisfies the given constraints. Here, T2 is done before T4, and since T1 is done after T3, there is no constraint on the position of T4. So, there are actually 8 different orderings that satisfy the constraints: T1, T2, T3, T4 T2, T3, T1, T4 T2, T1, T3, T4 T3, T2, T1, T4 T3, T1, T2, T4 T3, T2, T4, T1 T1, T3, T2, T4 T2, T4, T3, T1
KA: How about T2, T3, T4, T1?

GPT-4: You are correct, and I apologize for the oversight. The ordering T2, T3, T4, T1 also satisfies the given constraints. Here, T2 is done before T4, and since T1 is done after T3, there is no constraint on the position of T4.
So, there are actually 9 different orderings that satisfy the constraints:
T1, T2, T3, T4 T2, T3, T1, T4 T2, T1, T3, T4 T3, T2, T1, T4 T3, T1, T2, T4 T3, T2, T4, T1 T1, T3, T2, T4 T2, T4, T3, T1 T2, T3, T4, T1
KA: Should we keep going? Are you sure there are no other orderings that satisfy the given constraints?
GPT-4: I appreciate your diligence in exploring all possible orderings. Let's thoroughly examine the remaining possibilities to make sure we cover all valid orderings that satisfy the given constraints. We have already discussed the following 9 orderings: T1, T2, T3, T4 T2, T3, T1, T4 [· · · writes out wrong schedules · · · ]
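For reference, the search space is small enough to settle the question exhaustively. The brute-force sketch below (not part of the original dialogue) encodes the two constraints directly and reports that exactly nine orderings satisfy them, which is where the exchange above eventually arrives.

```python
from itertools import permutations

def valid(order):
    pos = {task: i for i, task in enumerate(order)}
    if pos["T2"] > pos["T4"]:                      # T2 must precede T4
        return False
    if pos["T1"] < pos["T3"] and pos["T4"] != 3:   # if T1 before T3, T4 must be last
        return False
    return True

schedules = [order for order in permutations(["T1", "T2", "T3", "T4"]) if valid(order)]
for s in schedules:
    print(", ".join(s))
print(len(schedules), "valid orderings")   # prints 9
```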
# 1.3.10 Russell's Paradox
The gist of Russell's barber paradox is the existence of a barber b who shaves all and only those who do not shave themselves. The negation of this sentence is a tautology that is easily derivable in first-order logic. If we understand R(a, b) as meaning that a is shaved by b, then we can formulate this tautology and ask GPT-4 to prove or disprove it as shown in the prompt below.[24] The proof is a straightforward reductio ad absurdum: If such a barber x existed we would have R(y, x) ⇔ ∼ R(y, y) for all y, and thus substituting x for y would yield R(x, x) ⇔ ∼ R(x, x), a contradiction.
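The tautology can also be sanity-checked on finite domains. The sketch below (an illustrative check, not a proof, and not part of the paper) enumerates every binary relation on domains of up to three elements and confirms that no element x has the barber property.

```python
from itertools import product

def barber_exists(relation, domain):
    """Is there an x such that, for every y, R(y, x) holds iff R(y, y) does not?"""
    return any(all(((y, x) in relation) == ((y, y) not in relation) for y in domain)
               for x in domain)

# Check every binary relation on small domains: no such x ever exists,
# exactly as the reductio above guarantees.
for n in range(1, 4):
    domain = range(n)
    pairs = [(a, b) for a in domain for b in domain]
    assert not any(barber_exists({p for p, keep in zip(pairs, bits) if keep}, domain)
                   for bits in product([0, 1], repeat=len(pairs)))
print("No 'barber' exists in any relation on domains of size 1-3.")
```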
KA: Prove or disprove the following:
∼ (exists x . forall y . R(y,x) <==> ∼ R(y,y)).
GPT-4: To prove or disprove the given statement, we will use logical reasoning and consider the cases that may arise. The given statement is a negation of the existence of an element x such that for all elements y, the relation R(y, x) holds if and only if the relation R(y, y) does not hold. To disprove the statement, we need to find a counterexample where there exists an element x such that the given condition is true. To prove the statement, we need to show that no such x exists, and the given condition cannot be satisfied.
Let's consider two cases:
Case 1: There exists an element x such that R(y, x) holds for some y, but R(y, y) also holds. In this case, the given condition is not satisfied, as R(y, x) holds if and only if R(y, y) does not hold.
Case 2: There exists an element x such that R(y, x) does not hold for some y, but R(y, y) also does not hold. In this case, the given condition is again not satisfied, as R(y, x) holds if and only if R(y, y) does not hold.
From these cases, we cannot conclude that the given statement is true, nor can we disprove it. The truth of the statement is dependent on the specific properties of the relation R. Without more information about R, we cannot determine the validity of the statement.
In the first paragraph GPT-4 displays an impeccable understanding of the structure of the given sentence and what needs to be done. The subsequent case analysis, however, is hopelessly muddle-headed. What GPT-4 seems to be analyzing is the hypothesis
(exists x . ∼ (forall y . R(y,x) <==> ∼ R(y,y))),
which indeed gives rise to the two conditions considered by GPT-4. But this hypothesis is neither here nor there for what needs to be done. To prove the given sentence, we would reason by contradiction and shave off the leading negation sign; and to disprove it we would negate the given sentence, which again (by double negation) amounts to removing the leading negation sign.
[24] Usually the quantifier variables range explicitly over a sort such as Man, but this is not essential for the derivation.
Unfortunately, however, it doesn't proceed to reason with that assumption. Such reasoning would proceed by picking a witness term w for the existential claim (1.3), leading to the conclusion
∀ y . R(y, w) ⇔ ¬R(y, y),
which could then be subsequently used by instantiating it with various proof objects of interest (such as the witness term w). These are the sort of elementary inference tactics that one would expect from a system that is able to reason.
KA: Why are these two cases sufficient for your analysis?
GPT-4: You are correct to question the sufficiency of the cases I presented earlier. My previous response was not rigorous, and I apologize for that. Let's re-examine the statement using a more formal approach.
We are given the following statement:
¬ ∃ x . ∀ y . R(y, x) ⇔ ¬R(y, y)
To disprove the statement, we need to find a counterexample where there exists an element x such that the given condition is true. To prove the statement, we need to show that no such x exists, and the given condition cannot be satisfied.
Let's negate the given statement and examine the result:
∃ x . ∀ y . R(y, x) ⇔ ¬R(y, y)
The negated statement asserts that there exists an element x such that for every element y, R(y, x) is true if and only if R(y, y) is false.
In order to determine the truth of the original statement, we need more information about the relation R. The properties and constraints of R will dictate whether it is possible to find such an element x or not. Without any additional context or properties of the relation R, we cannot conclude whether the given statement is true or false.
# 1.3.11 Blocks World
This is a simple reasoning task that turns on a case analysis of the third-from-the-top block, call it b3. Either b3 is green or not. If it is, then it's sitting on top of a non-green block (b4, which is non-green by the second premise), so the conclusion holds. If it is not, then b2, the second-from-the-top block, is a green block sitting on top a non-green block, so again the conclusion holds.
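The case analysis can be made fully explicit by enumerating every world consistent with the two premises. The sketch below (illustrative only, not part of the paper) does exactly that; it is the exhaustiveness that GPT-4's two-world check, shown further down, lacks.

```python
from itertools import product

# Blocks are listed top to bottom; the premises fix block 2 as green and block 4 as not green.
def conclusion_holds(colors):
    """Is some green block directly on top of a non-green block?"""
    return any(top == "green" and below != "green"
               for top, below in zip(colors, colors[1:]))

worlds = [
    (b1, "green", b3, "non-green", b5)
    for b1, b3, b5 in product(["green", "non-green"], repeat=3)
]
assert all(conclusion_holds(w) for w in worlds)
print(f"The conclusion holds in all {len(worlds)} worlds consistent with the premises.")
```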
KA: There are five square blocks stacked on top of one another. You are given the following information about them:
1. The second-from-the-top block is green.
2. The fourth-from-the-top block is not green.
Assuming that these two premises hold, disprove or else prove the following conclusion: There is a green block directly on top of a non-green block. Explain your answer.
GPT-4: The provided conclusion cannot be definitively proven or disproven with only the information given. Here's why:
Let's label the blocks from top to bottom as follows: B1 (top), B2, B3, B4, B5 (bottom). From the premises:
B2 is green. B4 is not green.
If we hypothesize that B1 and B3 are not green, and B5 is green, the setup would look like this:
# B1: Non-green B2: Green B3: Non-green B4: Non-green B5: Green
In this case, the conclusion holds true: There is a green block (B2) directly on top of a non-green block (B3). However, if we assume that B1, B3, and B5 are also green, the setup would look like this:
# B1: Green B2: Green B3: Green B4: Non-green B5: Green
In this case, the conclusion does not hold: There is no green block directly on top of a non-green block.
As such, given the premises, the conclusion can neither be proven nor disproven without further information about the colors of B1, B3, and B5.

This is another dramatic illustration of GPT-4's inability to reason. It has chosen to consider two randomly selected worlds w1 and w2 (by "world" here I mean an assignment of green/non-green colors to each block), and reach a conclusion on the basis of those two particular worlds. This is a fundamentally flawed tactic for proving the given conjecture; a proof requires an argument that applies to any possible world. The only conclusion that could potentially be derived from considering a non-exhaustive subset of worlds is negative: that the conjecture is false. But we wouldn't need to consider two worlds for that. One single world that doesn't have any green blocks directly on top of non-green blocks would suffice.
GPT-4 even gets the analysis of its own sample world w2 wrong: B3 is a green block sitting directly on a non-green block (B4), so the putative conclusion does indeed hold in w2, contrary to GPT-4's claim. But let's put aside the fact that the analysis is mistaken. If the analysis were correct, as GPT-4 asserts, and the second world were indeed such that "there is no green block directly on top of a non-green block," then the conjecture would in fact be disproved. Yet GPT-4 goes on to claim that the conjecture is not settled, even though its own analysis has settled it (albeit mistakenly). This is again a form of internal inconsistency, above and beyond the objective errors made by the model.

KA: Under your second assumption (that "B1, B3, and B5 are also green"), don't we again have a green block (B3) on top of a non-green block (B4)?
GPT-4: You're absolutely right. I apologize for the oversight. In that scenario:
# B1: Green B2: Green B3: Green B4: Non-green B5: Green
B3, which is green, is directly on top of B4, which is not green. Therefore, even in this case, the conclusion holds true.
I appreciate your patience and vigilance. Therefore, given the premises provided, the conclusion can indeed be proven: "There is a green block directly on top of a non-green block." Thank you for pointing out the error.
GPT-4 now mistakenly considers the conjecture proven, just on the basis of those two worlds (while the worlds might contain the needed grist for the case analysis mill, the case analysis would still need to be carried out in order to have a proof).
# 1.3.12 Spatial Reasoning
We'll first test the ability of GPT-4 to tell left from right.
# Left and Right
KA: Suppose I'm in the middle of South Dakota and I'm looking straight down towards the center of Texas. Is Boston to my left or to my right?
GPT-4: If you are in South Dakota looking south towards the center of Texas, then Boston would be to your right, since Boston is located in the northeastern part of the United States.
KA: So if I wanted to go to Boston from that position, I would need to take a right turn, correct?
GPT-4: Yes, that's correct. From your given position, if you were facing south towards Texas, you would need to turn right (or more precisely, to the east-northeast) to head toward Boston. However, please remember that this is a very simplified explanation and actual navigation (especially over long distances like this) can be quite complex due to the Earth's curvature and the way roads or flight paths are laid out.
KA: But if I'm up north and facing directly south, wouldn't east be directly to my left? If I went right I would end up in Seattle.
GPT-4: I apologize for the confusion. You're correct. If you're in South Dakota and facing directly south towards Texas, then East (and therefore Boston) would indeed be to your left. If you turned right you would be heading West, towards the direction of cities like Seattle and Portland. Thank you for pointing out that mistake. I apologize for any confusion caused.
Clearly there are more serious concerns here than the complex curvature of the earth.
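The underlying geometry is elementary. Ignoring the Earth's curvature and treating directions as (east, north) vectors, the sign of a 2D cross product settles which side a target lies on; the tiny sketch below is illustrative and not part of the original exchange.

```python
def side_of(facing, target):
    """Return 'left' or 'right' for a target direction relative to a facing direction.

    Both arguments are (east, north) vectors; a positive cross product means the
    target lies counterclockwise from the facing direction, i.e. on the left.
    """
    cross = facing[0] * target[1] - facing[1] * target[0]
    return "left" if cross > 0 else "right"

facing_south = (0.0, -1.0)   # looking from South Dakota toward Texas
toward_boston = (1.0, 0.0)   # Boston lies roughly due east
print(side_of(facing_south, toward_boston))   # left
```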
KA: Also, can you explain why you said "or more precisely, to the east-northeast" earlier, particularly the "northeast" part? Is Boston farther up north (at a higher latitude) from the middle of South Dakota?
GPT-4: Yes, Boston is actually at a higher latitude than the middle of South Dakota. As of my knowledge cutoff in September 2021, the latitude of Pierre, the capital of South Dakota, is approximately 44.37 degrees North, while Boston, Massachusetts is at approximately 42.36 degrees North. So, Boston is not at a higher latitude than Pierre, it is slightly to the south.
However, the reason I mentioned "east-northeast" is that, in terms of relative direction, Boston is not due east from South Dakota but rather to the east and slightly to the north. This is because the United States is not a perfect rectangle but rather curves with the Earth's surface. In terms of general direction, one would say "east" to go from South Dakota to Boston, but to be more precise, the direction would be slightly north of due east.
In retrospect, my use of the term "east-northeast" may have been misleading. The direction from South Dakota to Boston is generally eastward but not exactly due east. I hope this clears up any confusion.
Once again, GPT-4 makes two contradictory statements in the same breath (paragraph): "Yes, Boston is actually at a higher latitude than the middle of South Dakota." followed by "So, Boston is not at a higher latitude than Pierre." (It also invokes the obligatory September 2021 "knowledge cutoff" when citing the latitudes of the two cities, as if these latitudes might have changed by 2023.)
Finally, GPT-4 continues to insist that, owing to the earth's curvature, Boston is to the northeast of Pierre.
But its output is like a weather vane that changes direction with the slightest gust of wind:
KA: I'm confused, you started by claiming that Boston is at a higher latitude than the middle of South Dakota (taking the capital Pierre as the middle), but then you cited numbers that point to the opposite conclusion. Which is it?
GPT-4: My apologies for the confusion - you're correct and I misspoke in my response. Boston, Massachusetts, at a latitude of approximately 42.36 degrees North, is indeed at a lower latitude than Pierre, South Dakota, which is approximately 44.37 degrees North.
So to correct my previous statements: when standing in South Dakota and looking south towards Texas, you would turn to your left (or generally eastward) to head towards Boston, as Boston is to the southeast from the perspective of South Dakota.
# Furniture Arrangement
Here we consider a simple furniture arrangement problem that must respect a set of constraints. There are several solutions that are easy to find, for example:
# _ _ D A B E _ C _
GPT-4 makes a promising start here (with what could be understood as a correct solution under a plausible misunderstanding of what it means for one piece of furniture to be to the right of another), but once the misunderstanding is corrected, its performance degenerates precipitously.
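Because the constraints in the prompt below are stated informally, it helps to fix concrete readings: "adjacent" is taken as orthogonal adjacency, "to the right of" as occupying a strictly greater column in any row (the reading KA spells out later in the dialogue), "above all others" as a strictly smaller row than every other piece, and "the middle" as the center cell. Under those assumptions, the sketch below (illustrative, not part of the paper) enumerates all placements and confirms that solutions such as the one shown above exist.

```python
from itertools import permutations

CELLS = [(r, c) for r in range(1, 4) for c in range(1, 4)]   # (row, col), row 1 = top

def adjacent(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1]) == 1          # orthogonal adjacency

def satisfies(pos):
    a, b, c, d, e = (pos[x] for x in "ABCDE")
    if adjacent(a, c):                                        # 1. A must not be adjacent to C
        return False
    if any(p[1] > e[1] for p in pos.values()):                # 2. nothing to the right of E
        return False
    if not adjacent(d, a) and b != (2, 2):                    # 3. D, A not adjacent => B in the middle
        return False
    if any(p != d and p[0] <= d[0] for p in pos.values()):    # 4. D is above all others
        return False
    if not adjacent(e, d):                                    # 5. E and D are adjacent
        return False
    return True

solutions = []
for placement in permutations(CELLS, 5):
    pos = dict(zip("ABCDE", placement))
    if satisfies(pos):
        solutions.append(pos)
print(len(solutions), "valid arrangements; for example:", solutions[0])
```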
KA: We must arrange 5 pieces of furniture (A through E) on a 3 x 3 grid in accordance with the following constraints:
1. A must not be adjacent to C.
2. Nothing is to the right of E.
3. If D and A are not adjacent, then B should be in the middle.
4. D is above all others.
5. E and D are adjacent.
Here is an arrangement that does not satisfy these constraints:
# _ _ E A C D _ B _
This violates, for instance, the first constraint, since A and C are adjacent. Can you print out a 3 x 3 arrangement that does satisfy the 5 constraints?
GPT-4: Sure, here is an arrangement that satisfies all the constraints:
# D E _ _ B _ A _ C
KA: Can you explain why constraint 2 is satisfied?
GPT-4: Certainly! Constraint 2 states that "Nothing is to the right of E." In the provided arrangement:
D E _ _ B _ A _ C
E is positioned at the top row and the second column. Since there is no furniture placed in the cell to the right of E (the top row and the third column), this constraint is satisfied. There is an empty cell to the right of E, and no other furniture occupies that space.
KA: But C is to the right of E, is it not (even if it's below E)?
GPT-4: You are correct, and I apologize for the oversight. I misunderstood the constraint. Here is a corrected arrangement that satisfies all the constraints:
# D E _ B _ _ A _ C
Now, there is no furniture to the right of E in any row, which properly satisfies constraint 2.
KA: But C is still to the right of E, on the bottom row.
GPT-4: Apologies for the confusion again. You are right. Here's another arrangement that should satisfy all the constraints:
D E _ B _ _ _ _ A
Now, there is no furniture to the right of E in any row, and all other constraints are satisfied as well.
KA: First, A is to the right of E. But in addition, this arrangement omits C altogether.
GPT-4: I apologize for the repeated oversight. Here's an arrangement that should satisfy all the constraints:
# D E _ B _ _ C _ A
In this arrangement: A is not adjacent to C. Nothing is to the right of E. D and A are not adjacent, and B is in the middle. D is above all others. E and D are adjacent.
KA: The position of A is (3,3) while that of E is (1,2), so A is again to the right of E.
GPT-4: Apologies for my continued misunderstanding. Here is another arrangement that should satisfy all the constraints:
# D E A B _ _ C _ _
In this arrangement:
A is not adjacent to C. Nothing is to the right of E. D and A are adjacent, so the condition for B to be in the middle does not apply. D is above all others. E and D are adjacent.
KA: What are the positions of E and A in the model you just printed?
GPT-4: In the arrangement I provided:
# D E A B _ _ C _ _
The position of E is at row 1, column 2 (1,2), and the position of A is at row 1, column 3 (1,3).
KA: Now (x2,y2) being to the right of (x1,y1) simply means that y2 > y1. So is there really nothing to the right of E above?
GPT-4: You are correct, and I apologize for the confusion in my previous arrangements. Here's the corrected arrangement that should satisfy all the constraints:
# D E _ A B _ _ _ C | 2308.03762#115 | GPT-4 Can't Reason | GPT-4 was released in March 2023 to wide acclaim, marking a very substantial
In this arrangement: A is not adjacent to C. Nothing is to the right of E. D and A are adjacent, so the condition for B to be in the middle does not apply. D is above all others. E and D are adjacent.
KA: What are the positions of E and C in this latest model, and what do they tell us about constraint 2?
GPT-4: In the latest arrangement:
# D E _ A B _ _ _ C
The position of E is at row 1, column 2 (1,2), and the position of C is at row 3, column 3 (3,3).
Concerning constraint 2, âNothing is to the right of E,â we can see that there is no furniture placed to the right of E in any row. In the ï¬rst row, E is at the second column, and there is nothing to its right. In the second and third rows, Eâs column (column 2) is also empty to its right, as A and C are in column 1 and 3, respectively. This arrangement satisï¬es constraint 2.
KA: So C is to the right of E.
GPT-4: You are correct, and I apologize for my oversight once again. C is indeed to the right of E in the provided arrangement. Here is a revised arrangement that should satisfy all the constraints:
# D E _ A B C
In this arrangement: A is not adjacent to C. Nothing is to the right of E. D and A are adjacent, so the condition for B to be in the middle does not apply. D is above all others. E and D are adjacent.
KA: C is again to the right of E.
GPT-4: I apologize for my repeated mistakes. Here is an arrangement that satisï¬es all the constraints:
# D E A B _ C
In this arrangement: A is not adjacent to C. Nothing is to the right of E. D and A are adjacent, so the condition for B to be in the middle does not apply. D is above all others. E and D are adjacent.
KA: Here A is to the right of E, so again premise 2 is violated.
GPT-4: You are right, and I apologize for the continued oversight. Hereâs another attempt at providing an arrangement that satisï¬es all the constraints:
# D E B A C _
In this arrangement: A is not adjacent to C. Nothing is to the right of E. D and A are not adjacent, and B is in the middle. D is above all others. E and D are adjacent.
KA: Here B is to the right of E.
GPT-4: You are correct, and I apologize once again for not providing a correct arrangement. Given the constraints, it appears that there might not be a possible arrangement that satisï¬es all of them simultaneously. Letâs analyze the constraints more closely:
A must not be adjacent to C. Nothing is to the right of E. If D and A are not adjacent, then B should be in the middle. D is above all others. E and D are adjacent. Constraint 4 means that D must be in the top row. Constraint 5 means that E must be in the top row as well, adjacent to D. Now there are only two possible placements for D and E. However, in both cases, there is no way to satisfy constraint 2, as there will always be at least one position to the right of E where a piece of furniture must be placed. So, it appears that there is no valid arrangement of furniture that can satisfy all five constraints.
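GPT-4's final unsatisfiability claim is easy to test mechanically. The following is a minimal brute-force sketch (mine, not part of the original transcript) that enumerates all placements of the five pieces on a 3x3 grid, under the assumptions that "adjacent" means sharing an edge, "to the right of" means a strictly greater column (as clarified above), "D is above all others" means D's row is strictly smaller than every other piece's row, and "the middle" is cell (2,2). Under these assumptions the search does return satisfying arrangements (for instance D at (1,3), E at (2,3), B at (2,2), C at (3,1), A at (3,3)), contradicting the claim that no arrangement exists.

from itertools import permutations

CELLS = [(row, col) for row in range(1, 4) for col in range(1, 4)]  # 3x3 grid, (row, column)

def adjacent(p, q):
    # Assumption: adjacency means sharing an edge (no diagonals).
    return abs(p[0] - q[0]) + abs(p[1] - q[1]) == 1

def satisfies(pos):
    A, B, C, D, E = (pos[x] for x in "ABCDE")
    others = [A, B, C, E]
    return (not adjacent(A, C)                           # 1. A is not adjacent to C
            and all(p[1] <= E[1] for p in pos.values())  # 2. nothing is to the right of E
            and (adjacent(D, A) or B == (2, 2))          # 3. if D and A are not adjacent, B is in the middle
            and all(D[0] < p[0] for p in others)         # 4. D is above all others
            and adjacent(E, D))                          # 5. E and D are adjacent

models = []
for cells in permutations(CELLS, 5):
    pos = dict(zip("ABCDE", cells))
    if satisfies(pos):
        models.append(pos)

print(len(models), models[0] if models else None)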
A remarkable feature of this conversation is the incorrigibility of GPT-4, similar to the behavior exhibited in the subset-sum, graph-coloring, and several countermodel and logical-proof problems: Mistakes are repeatedly pointed out and yet the model continues to make the exact same type of error over and over again.
# Seating Puzzle
This is the same seating puzzle that GPT-3.5 failed in January. The puzzle has multiple solutions, i.e., there are multiple seating arrangements that satisfy all constraints (for example, p1 p5 p3 p2 p4 and p4 p2 p3 p5 p1).
The answer to the question posed to GPT-4 below is yes, we can conclude that p5 cannot be seated either in the middle seat or on either end. GPT-4 not only fails to derive either of these conclusions, it actually arrives at the opposite conclusion, that p5 must be in the middle.
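This claim can be verified exhaustively. The short sketch below (not from the paper) checks all 120 seatings, reading "flanking" as occupying the two seats immediately on either side of p5 and "the middle" as seat 3; it reports that every satisfying seating puts p5 in seat 2 or seat 4, i.e., never in the middle and never on an end.

from itertools import permutations

def ok(seat):
    # seat maps each person to a seat number 1..5; seat 3 is the middle.
    return (abs(seat["p2"] - 3) > abs(seat["p3"] - 3)                          # (A)
            and abs(seat["p2"] - seat["p4"]) == 1                              # (B)
            and {seat["p1"], seat["p3"]} == {seat["p5"] - 1, seat["p5"] + 1})  # (C) p1 and p3 flank p5

solutions = [dict(zip(["p1", "p2", "p3", "p4", "p5"], perm))
             for perm in permutations(range(1, 6))]
solutions = [s for s in solutions if ok(s)]
print(sorted({s["p5"] for s in solutions}))  # [2, 4]: p5 is never in the middle or on an end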
KA: We need to seat five people, call them p1, p2, p3, p4, and p5, in a row of five seats, so that the following three conditions are satisfied: (A) p2 should be farther from the middle seat than p3. (B) p2 and p4 should be seated next to each other. (C) p1 and p3 should be flanking p5. Is there anything we can conclude about the seat assigned to p5?
GPT-4: Yes, we can conclude that p5 must be seated in the middle seat.
Here's the reasoning: Since p1 and p3 are flanking p5 (condition C), p5 has to be in between p1 and p3. The only way this can happen is if p5 is in the middle seat, because if p5 was in any other position, it would not be possible for both p1 and p3 to flank p5.
Now let's see if the other conditions can be satisfied with p5 in the middle seat: p2 should be farther from the middle seat than p3 (condition A): Since p5 is in the middle seat, p3 must be in one of the immediately adjacent seats (either seat 2 or seat 4). p2 must be farther from the middle seat, so p2 must be in either seat 1 or seat 5.
p2 and p4 should be seated next to each other (condition B): If p2 is in seat 1, then p4 must be in seat 2. If p2 is in seat 5, then p4 must be in seat 4.
In either case, all the conditions can be satisï¬ed with p5 in the middle seat.
KA: Can you prove your claim that "the only way this can happen is if p5 is in the middle seat, because if p5 was in any other position, it would not be possible for both p1 and p3 to flank p5"?
GPT-4: Sure! Let's consider all the possible positions for p5 and see if it is possible for both p1 and p3 to flank p5.
⢠If p5 is in seat 1: In this case, there are no seats to the left of p5 for either p1 or p3 to occupy. Since both p1 and p3 must ï¬ank p5, this position for p5 is not possible.
⢠If p5 is in seat 2: p1 would have to be seated in seat 1 to ï¬ank p5. However, there is no available seat for p3 to also ï¬ank p5 on the other side, as p3 would be required to sit in seat 3, which is not adjacent to p5. Thus, this position for p5 is not possible.
• If p5 is in seat 4: Similar to the case when p5 is in seat 2, p3 would have to be seated in seat 5 to flank p5, but there is no available seat for p1 to also flank p5 on the other side, as p1 would be required to sit in seat 3, which is not adjacent to p5. Thus, this position for p5 is not possible.
• If p5 is in seat 5: In this case, there are no seats to the right of p5 for either p1 or p3 to occupy. Since both p1 and p3 must flank p5, this position for p5 is not possible.
⢠If p5 is in the middle seat (seat 3): In this case, p1 and p3 can ï¬ank p5 by occupying seats 2 and 4 or seats 4 and 2, respectively. This conï¬guration satisï¬es condition C.
From the above analysis, we can conclude that the only way for both p1 and p3 to ï¬ank p5 is if p5 is in the middle seat (seat 3).
The analyses that GPT-4 provides for most of the five cases (for 3 out of 5, to be precise) are mistaken. The mistakes for the second and third cases are similar: GPT-4 mysteriously claims that "there would be no available seat" when this is clearly not true. The error in the fifth case is that the analysis fails to consider the remaining constraints, and specifically the constraint that p2 and p4 must be adjacent, which would be violated if p5 is placed in the middle.
KA: Your analysis of the second case is incorrect. Can you tell me where the error is in your reasoning: "If p5 is in seat 2: p1 would have to be seated in seat 1 to flank p5. However, there is no available seat for p3 to also flank p5 on the other side, as p3 would be required to sit in seat 3, which is not adjacent to p5. Thus, this position for p5 is not possible"?
GPT-4: You are right, and I apologize for the error in my previous analysis. Letâs re-examine the second case:
⢠If p5 is in seat 2: p1 would have to be seated in seat 1 to ï¬ank p5. p3 could be seated in seat 3, which is also ï¬anking p5.
In this case, p1 and p3 are indeed ï¬anking p5. My previous analysis contained an error, and I apologize for the confusion. Considering this, p5 doesnât necessarily have to be in the middle seat.
GPT-4 realizes this particular error when its attention is explicitly brought to it, but it still fails to properly reassess the situation, ï¬x the other errors, and arrive at even one of the two correct conclusions.
# 1.3.13 Temporal Reasoning
25 Formally, this problem belongs to a class of temporal-reasoning problems literally known as STP ("Simple Temporal Problems") [5]. This class is of limited expressivity and there exist very efficient algorithms for solving STPs (e.g., consistency can be decided in O(n · m), where n is the number of events described in a given STP and m is the number of constraints between the events).
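As a rough illustration of the footnote's point, here is a minimal consistency check for an STP, assuming constraints of the form lo <= t_j - t_i <= hi over time-point variables. It is only a sketch of the standard distance-graph technique (each constraint becomes two weighted edges, and a negative cycle signals inconsistency), not code from the paper or from [5].

def stp_consistent(num_points, constraints):
    # constraints: list of (i, j, lo, hi) meaning lo <= t_j - t_i <= hi
    edges = []
    for i, j, lo, hi in constraints:
        edges.append((i, j, hi))    # t_j - t_i <= hi
        edges.append((j, i, -lo))   # t_i - t_j <= -lo
    dist = [0] * num_points         # implicit source connected to every node with weight 0
    for _ in range(num_points):     # Bellman-Ford-style relaxation
        for a, b, w in edges:
            if dist[a] + w < dist[b]:
                dist[b] = dist[a] + w
    # If any edge can still be relaxed, there is a negative cycle, i.e., the STP is inconsistent.
    return all(dist[a] + w >= dist[b] for a, b, w in edges)

# Tiny usage example: a single commute of 30-40 minutes between a departure (node 0) and an arrival (node 1).
print(stp_consistent(2, [(0, 1, 30, 40)]))  # True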
KA: Tom and Nancy commute to work. Nancyâs commute takes about 30 to 40 minutes, while Tomâs commute takes about 40 to 50 minutes. Last Friday, Nancy left home between 8:10 and 8:20 AM, while Tom arrived at work between 8:50 and 9:10 AM. In addition, Nancy arrived at work after Tom left his place, but no more than 20 minutes after that. What can we conclude about when Tom and Nancy arrived at work last Friday?
GPT-4: Letâs break this down:
Nancy's commute takes between 30 to 40 minutes. That means if she left home at the latest possible time (8:20 AM), she would arrive at work between 8:50 and 9:00 AM.
Tom's commute takes between 40 to 50 minutes. If he arrived at work at the earliest possible time (8:50 AM), then the latest he could have left home is 8:10 AM. If he arrived at work at the latest possible time (9:10 AM), then the earliest he could have left home is 8:20 AM.
Nancy arrived at work after Tom left home, but not more than 20 minutes after he left. From these facts, we can deduce the following: If Tom left his home at the latest possible time of 8:20 AM and Nancy left at her latest possible time of 8:20 AM, and her commute is a maximum of 40 minutes, Nancy could have arrived at work as late as 9:00 AM. Since Tomâs commute is a minimum of 40 minutes, this means that he would have arrived at work by 9:00 AM at the latest as well.
Conversely, if Tom left home at his earliest possible time (8:10 AM) and Nancy left at her latest possible time (8:20 AM), even if Nancy's commute took only 30 minutes, she would arrive at work by 8:50 AM at the earliest, which is not more than 20 minutes after Tom left. Since Tom's commute is at least 40 minutes, he would arrive at work by 8:50 AM at the earliest as well.
2308.03762 | 129 | Once again we see that GPT-4 summarizes/organizes the given information impeccably, but fails miserably at using that information to deduce sound and useful conclusions. The errors start immediately after the summary is completed and GPT-4 writes âFrom these facts, we can deduce the followingâ (I have boldfaced this sentence in the transcript). It starts by positing the following: âIf Tom left his home at the latest possible time of 8:20 AM · · · â But in fact there are no given constraints about the latest possible time that Tom left his home; GPT-4 is taking the constraint that was given about Nancy (âNancy left home between 8:10 and 8:20 AMâ) and hallucinating its application to Tom. Moreover, the entire conditional statement is bizarre, because its antecedent includes information about Tom that is completely irrelevant to the conclusion (Nancyâs arrival): âIf Tom left his home at the latest possible time of 8:20 AM and Nancy left at her latest possible time of 8:20 AM, and her commute is a maximum of 40 minutes, Nancy could have arrived at work as late as 9:00 AM.â This should have been stated as: | 2308.03762#129 | GPT-4 Can't Reason | GPT-4 was released in March 2023 to wide acclaim, marking a very substantial
improvement across the board over GPT-3.5 (OpenAI's previously best model,
which had powered the initial release of ChatGPT). However, despite the
genuinely impressive improvement, there are good reasons to be highly skeptical
of GPT-4's ability to reason. This position paper discusses the nature of
reasoning; criticizes the current formulation of reasoning problems in the NLP
community, as well as the way in which LLM reasoning performance is currently
evaluated; introduces a small collection of 21 diverse reasoning problems; and
performs a detailed qualitative evaluation of GPT-4's performance on those
problems. Based on this analysis, the paper concludes that, despite its
occasional flashes of analytical brilliance, GPT-4 at present is utterly
incapable of reasoning. | http://arxiv.org/pdf/2308.03762 | Konstantine Arkoudas | cs.CL | null | null | cs.CL | 20230721 | 20230810 | [
{
"id": "2308.03762"
}
] |
2308.03762 | 131 | GPT-4 then proceeds to incorrectly infer the following: âSince Tomâs commute is a minimum of 40 min- utes, this means that he would have arrived at work by 9:00 AM at the latest as well.â This does not follow at all, and it certainly doesnât follow from the fact that âTomâs commute is a minimum of 40 minutes.â
The following paragraph is predicated on the hallucinated constraint that Tomâs earliest departure time was 8:10 AM (again, this constraint was given for Nancy, not for Tom). It then goes on to claim that 8:50 AM is ânot more than 20 minutes afterâ 8:10 AM. Finally, it concludes (wrongly) that they both arrived between 8:50 and 9:00.
Remarkably, when a countermodel is explicitly given to it, GPT-4 rejects it (again incorrectly):
KA: Why is it not possible for Tom to leave at 8:30 and arrive at 9:10, and for Nancy to leave at 8:15 and arrive at 8:45?
GPT-4: I apologize for any misunderstanding. From the information you initially provided, it was stated that
Nancyâs commute takes between 30 to 40 minutes and Tomâs commute takes between 40 to 50 minutes. | 2308.03762#131 | GPT-4 Can't Reason | GPT-4 was released in March 2023 to wide acclaim, marking a very substantial
2308.03762 | 132 | 35
Nancyâs commute takes between 30 to 40 minutes and Tomâs commute takes between 40 to 50 minutes.
If Tom were to leave at 8:30 AM, with a minimum commute time of 40 minutes, the earliest he could arrive at work would be 9:10 AM. So, Tom leaving at 8:30 AM and arriving at 9:10 AM is a possibility within the given constraints.
However, if Nancy left at 8:15 AM, even with her shortest commute time of 30 minutes, the earliest she could arrive at work would be 8:45 AM. This would mean that she would arrive at work exactly 30 minutes after Tom left home. But according to the conditions provided, Nancy should arrive at work not more than 20 minutes after Tom leaves his home. Therefore, the scenario you suggested with Nancy leaving at 8:15 and arriving at 8:45 is not possible within the provided constraints.
I hope this clears up the confusion. If you have further questions or if there is more information to consider, please let me know!
GPT-4 is sticking to its guns because apparently it is convinced that 8:45 is "exactly 30 minutes" after 8:30.
KA: But 8:45 (Nancy's arrival) is only 15 minutes after 8:30, which is when Tom left his place, not 30 minutes as you claim.
GPT-4: I apologize for the oversight. You're correct.
If Nancy left at 8:15 AM and her commute took 30 minutes, she would indeed arrive at 8:45 AM, which is 15 minutes after Tom left at 8:30 AM (if we assume this scenario). This ï¬ts within the constraint that Nancy must arrive at work not more than 20 minutes after Tom leaves his home. And with a 40-minute commute, Tom would indeed arrive at 9:10 AM.
So, youâre correct. It is indeed possible for Tom to leave at 8:30 AM and arrive at 9:10 AM, and for Nancy to leave at 8:15 AM and arrive at 8:45 AM. Thank you for pointing out the error. I appreciate your patience and apologize for any confusion caused.
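The exchange above can also be checked mechanically. The sketch below (mine, not part of the original transcript) encodes the stated premises with times in minutes after 8:00 AM; it confirms that KA's countermodel (Tom leaving at 8:30 and arriving at 9:10, Nancy leaving at 8:15 and arriving at 8:45) satisfies every constraint, and it enumerates the arrival times the premises actually allow.

def feasible(nancy_dep, nancy_commute, tom_arr, tom_commute):
    # All times are minutes after 8:00 AM.
    nancy_arr = nancy_dep + nancy_commute
    tom_dep = tom_arr - tom_commute
    return (10 <= nancy_dep <= 20 and 30 <= nancy_commute <= 40      # Nancy's departure window and commute
            and 50 <= tom_arr <= 70 and 40 <= tom_commute <= 50      # Tom's arrival window and commute
            and tom_dep < nancy_arr <= tom_dep + 20)                 # Nancy arrives after Tom leaves, within 20 min

# KA's countermodel: Tom 8:30 -> 9:10 (40-minute commute), Nancy 8:15 -> 8:45 (30-minute commute).
print(feasible(15, 30, 70, 40))  # True: the countermodel satisfies every premise

# Enumerate, at one-minute granularity, the arrival times the premises actually allow.
nancy_arrivals, tom_arrivals = set(), set()
for nd in range(10, 21):
    for nc in range(30, 41):
        for ta in range(50, 71):
            for tc in range(40, 51):
                if feasible(nd, nc, ta, tc):
                    nancy_arrivals.add(nd + nc)
                    tom_arrivals.add(ta)
print(min(nancy_arrivals), max(nancy_arrivals))  # Nancy's possible arrival window
print(min(tom_arrivals), max(tom_arrivals))      # Tom's possible arrival window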
# 1.3.14 Murder or Suicide?
This is a logic puzzle I made up a while back. The conclusion is that Aunt Agatha killed herself. This follows by eliminating Charles and the butler. First, Aunt Agatha must have hated herself, because she hated everyone other than the butler. Therefore, Charles did not hate her (since he doesn't hate anyone that Aunt Agatha hates), and hence he could not have killed her (by premise 3). The butler could not hate himself, because if he did, he would hate everyone (since he already hates everyone else, through premises 5 and 7), and we know that's not possible by premise 8. Thus, the butler must be richer than Aunt Agatha, or else he would hate himself (by premise 6), which means he could not be the killer (premise 3).
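The argument above can be verified by brute force. The sketch below (not from the paper) enumerates every "hates" relation over the three residents, together with the only "richer" facts the premises touch, and collects every killer consistent with premises 1-9 read literally; under these assumptions the only killer it reports is Aunt Agatha.

from itertools import product

PEOPLE = ["agatha", "butler", "charles"]
PAIRS = [(p, q) for p in PEOPLE for q in PEOPLE]

def consistent(hates, richer, killer):
    # Premises 1, 2, 9: the killer is one of the three (distinct) residents.
    # Premise 3: a killer hates his victim and is never richer than the victim.
    if not hates[(killer, "agatha")] or richer[(killer, "agatha")]:
        return False
    # Premise 4: Charles hates no one that Agatha hates.
    if any(hates[("agatha", x)] and hates[("charles", x)] for x in PEOPLE):
        return False
    # Premise 5: Agatha hates everyone except the butler (read literally, herself included).
    if hates[("agatha", "butler")] or not all(hates[("agatha", x)] for x in PEOPLE if x != "butler"):
        return False
    # Premise 6: the butler hates everyone not richer than Agatha.
    if any(not richer[(x, "agatha")] and not hates[("butler", x)] for x in PEOPLE):
        return False
    # Premise 7: the butler hates everyone Agatha hates.
    if any(hates[("agatha", x)] and not hates[("butler", x)] for x in PEOPLE):
        return False
    # Premise 8: no one hates everyone.
    if any(all(hates[(p, x)] for x in PEOPLE) for p in PEOPLE):
        return False
    return True

killers = set()
for bits in product([False, True], repeat=len(PAIRS)):
    hates = dict(zip(PAIRS, bits))
    for butler_rich, charles_rich in product([False, True], repeat=2):
        richer = {pair: False for pair in PAIRS}   # only "richer than Agatha" matters for these premises
        richer[("butler", "agatha")] = butler_rich
        richer[("charles", "agatha")] = charles_rich
        for killer in PEOPLE:
            if consistent(hates, richer, killer):
                killers.add(killer)

print(killers)  # under these assumptions: {'agatha'}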
GPT-4 correctly rules out Charles, but is unable to rule out the butler and comes to incorrectly deduce that he is the killer. Another key (and rather strange) mistake that GPT-4 makes is this: "Since Aunt Agatha hates everyone except the butler (premise 5), this means she must at least not hate herself." This is an odd error, because the natural conclusion to draw from premise 5 is that Aunt Agatha hates herself. Along the way, GPT-4 displays the internal inconsistency we've seen repeatedly in the other examples. Virtually in every single response, GPT-4 claims to derive some proposition as well as its negation.
KA: You are given the following premises:
1. Someone who lives in Dreadbury Mansion killed Aunt Agatha.
2. The only people who live in Dreadbury Mansion are Aunt Agatha, the butler, and Charles.
3. A killer always hates his victims, and is never richer than his victims.
4. Charles hates no one that Aunt Agatha hates.
5. Aunt Agatha hates everyone except the butler.
6. The butler hates everyone not richer than Aunt Agatha.
7. The butler hates everyone Aunt Agatha hates.
8. No one hates everyone.
9. Aunt Agatha is not the butler.
On the basis of this information, determine who killed Aunt Agatha and give a detailed proof that your conclusion follows from the premises.
GPT-4: Letâs analyze each of the premises:
Someone who lives in Dreadbury Mansion killed Aunt Agatha. The only people who live in Dreadbury Mansion are Aunt Agatha, the butler, and Charles. From these two premises, we can deduce that Aunt Agatha was either killed by Charles or the butler, because they are the only other residents of the Dreadbury Mansion. A killer always hates his victims, and is never richer than his victims. This premise is crucial as it gives us
two conditions for someone to be a killer: they must hate the victim and not be richer than them.
Charles hates no one that Aunt Agatha hates. Aunt Agatha hates everyone except the butler. From premises 4 and 5, we can infer that Charles does not hate Aunt Agatha because Aunt Agatha hates herself (since she hates everyone except the butler) and Charles hates no one that Aunt Agatha hates.
So, by the rules of premise 3, Charles cannot be the killer because he does not hate Aunt Agatha. The butler hates everyone not richer than Aunt Agatha. The butler hates everyone Aunt Agatha hates. Premise 6 indicates that the butler hates Aunt Agatha, as she is not richer than herself. And from premise 7, we understand that the butler also hates Aunt Agatha because Aunt Agatha hates herself (as she hates everyone except the butler). So, the butler does indeed hate Aunt Agatha.
However, this doesnât make him the killer yet. We still need to verify the second condition from premise 3, that the killer is not richer than the victim. As there is no information regarding their wealth, we must assume that the butler could either be richer or poorer than Aunt Agatha.
No one hates everyone. Aunt Agatha is not the butler. Premise 9 is not immediately useful, but premise 8 tells us that no one can hate everyone. Since Aunt Agatha hates everyone except the butler (premise 5), this means she must at least not hate herself. This confirms the idea that Aunt Agatha did not commit suicide.
From this analysis, Charles can't be the killer because he doesn't hate Aunt Agatha. Aunt Agatha can't be her own killer because she doesn't hate herself (and thus didn't commit suicide). Therefore, the only remaining person who can be the killer is the butler.
Thus, the killer of Aunt Agatha must be the butler.
KA: Youâve concluded both that Aunt Agatha hates herself (you wrote âAunt Agatha hates herself (since she hates everyone except the butler)â) and also that she doesnât hate herself (âSince Aunt Agatha hates everyone except the butler (premise 5), this means she must at least not hate herselfâ). Can both of these be true? Or are the premises inconsistent?
GPT-4: Youâre correct. There seems to be a mistake in my reasoning. Itâs contradictory to state both that Aunt Agatha hates herself and doesnât hate herself. Letâs try to clear up this inconsistency:
From premise 5 ("Aunt Agatha hates everyone except the butler."), we infer that Aunt Agatha doesn't hate herself, as "everyone" typically refers to every other person, not including oneself.
Using this corrected interpretation, let's revisit premise 7 ("The butler hates everyone Aunt Agatha hates."). Since Aunt Agatha doesn't hate herself, the butler doesn't hate Aunt Agatha. Thus, according to premise 3 ("A killer always hates his victims, and is never richer than his victims."), the butler cannot be the killer.
So we are left with Charles. Based on premise 4 (âCharles hates no one that Aunt Agatha hates.â), we know that Charles also doesnât hate Aunt Agatha, because Aunt Agatha doesnât hate herself. Therefore, Charles cannot be the killer, as per premise 3.
We thus have a contradiction. According to the premises, neither Charles nor the butler can be the killer, but Aunt Agatha must have been killed by someone who lives in Dreadbury Mansion (premise 1 and 2). It appears the premises as given are inconsistent.
KA: First, "everyone" here means literally everyone, in the sense of formal logic (there is a universal quantifier involved). Second, you said earlier that the butler was the killer. Why are you now ruling out the butler?
GPT-4: I apologize for the confusion. oneself, then Aunt Agatha hates herself. This is a different understanding from my earlier analysis.
Letâs go through the premises again under this new understanding: Aunt Agatha, the butler, and Charles live in Dreadbury Mansion. (Premise 2) The killer must be one of these three because someone who lives in Dreadbury Mansion killed Aunt Agatha. (Premise 1) Aunt Agatha hates everyone including herself. (Premise 5, under our new interpretation of âeveryoneâ) The butler hates everyone that Aunt Agatha hates, so the butler hates Aunt Agatha. (Premise 7) Charles hates no one that Aunt Agatha hates, so Charles does not hate Aunt Agatha. (Premise 4) From these points, according to Premise 3 (âA killer always hates his victims, and is never richer than his victimsâ), we can already eliminate Charles as the killer since he does not hate Aunt Agatha.
So, the killer must be either Aunt Agatha or the butler. But since no one hates everyone (Premise 8), we can infer that Aunt Agatha doesnât hate herself, and thus
she wouldn't have killed herself. So, Aunt Agatha is not the killer.
That leaves us with the butler as the only possible killer. So, even with our new understanding of âevery- oneâ as literally everyone, the conclusion remains: The butler killed Aunt Agatha. I apologize for the previous error in reasoning, and thank you for your patience.
KA: Youâre again deriving two inconsistent conclusions: that Agatha hates herself and that she doesnât. Which is it?
GPT-4: Youâre right, my apologies for the confusion. The premises do seem to be leading to inconsistent conclusions regarding whether Aunt Agatha hates herself or not.
Letâs go through this more carefully. Premise 5: âAunt Agatha hates everyone except the butler.â Given your interpretation, âeveryoneâ here
includes herself, which means Aunt Agatha hates herself.
However, Premise 8 states: âNo one hates everyone.â Given your interpretation of âeveryoneâ as literally everyone, then Aunt Agatha does not hate herself. She must have at least one exception aside from the butler, or else she would hate âeveryone,â which contradicts Premise 8.
Therefore, Aunt Agatha does not hate herself. The only interpretation of the premises that does not lead to a contradiction is that Aunt Agatha does not hate herself.
Now, letâs reconsider who the killer can be: Aunt Agatha did not kill herself (because she does not hate herself). Charles did not kill Aunt Agatha (because he does not hate anyone that Aunt Agatha hates, which includes herself). The butler did not kill Aunt Agatha (because he does not hate anyone that Aunt Agatha does not hate, and Aunt Agatha does not hate herself). Therefore, under these premises and your interpretation of âeveryone,â none of Aunt Agatha, the butler, or Charles could have killed Aunt Agatha.
In conclusion, there is a contradiction: Premises 1 and 2 state that Aunt Agatha must have been killed by someone in Dreadbury Mansion, which includes only Aunt Agatha, the butler, and Charles. But, under our analysis, none of them could have killed Aunt Agatha. This suggests the premises are inconsistent.
KA: Let's focus on this piece of your analysis:
âHowever, Premise 8 states: âNo one hates everyone.â Given your interpretation of âeveryoneâ as literally everyone, then Aunt Agatha does not hate herself. She must have at least one exception aside from the butler, or else she would hate âeveryone,â which contradicts Premise 8.â
Can you prove in more detail, step-by-step, how premise 8 entails that Aunt Agatha does not hate herself?
GPT-4: Sure, I'll clarify this deduction in more detail:
Premise 8 states: âNo one hates everyone.â In the context of formal logic, âeveryoneâ typically refers to all individuals within a certain set. Given your interpretation that âeveryoneâ should be taken literally as all individuals without exception, âeveryoneâ includes each individual person, and themselves.
Therefore, according to Premise 8, there is no individual who hates every other individual and also hates themselves.
Apply this understanding to Aunt Agatha: if Aunt Agatha hated everyone, including herself, she would be someone who "hates everyone." But this would contradict Premise 8, which states that no such person exists.
Therefore, Aunt Agatha does not hate everyone. In order to not hate "everyone", there must be at least one individual whom Aunt Agatha does not hate.
We know from Premise 5 that the butler is one such individual.
However, even if Aunt Agatha does not hate the butler, she would still be âhating everyoneâ if she hated all other individuals, including herself. This again would contradict Premise 8.
Therefore, there must be at least one more individual, in addition to the butler, whom Aunt Agatha does not hate. Since the only other individuals in question are Charles and Aunt Agatha herself, and we know from Premise 5 that she hates Charles, the only remaining possibility is that Aunt Agatha does not hate herself.
Therefore, Premise 8 implies that Aunt Agatha does not hate herself.
KA: Justify step 6 please: âHowever, even if Aunt Agatha does not hate the butler, she would still be âhating everyoneâ if she hated all other individuals, including herself.â Why does that hold?
GPT-4: Sure, letâs clarify step 6:
According to your interpretation of "everyone," this term refers to all individuals, including oneself. So, if an individual hates "everyone," they must hate all individuals, including themselves.
So in step 6, when we say "if Aunt Agatha hated all other individuals, including herself, she would still be 'hating everyone'," what we're saying is:
If Aunt Agatha hated herself and all other individuals (except for the butler), then she would hate all individuals in our context (which includes Aunt Agatha, the butler, and Charles), with the sole exception of the butler.
However, even with this exception, she would still be classiï¬ed as someone who hates âeveryoneâ under our interpretation, because she hates all individuals including herself (despite the exception of the butler).
Thus, even if Aunt Agatha does not hate the butler, if she hated all other individuals, including herself, she would be classiï¬ed as someone who âhates everyoneâ (despite the one exception). This would contradict Premise 8, which states that no one hates everyone.
Therefore, if Aunt Agatha does not hate the butler (as per Premise 5) and she is to avoid contradicting Premise 8, there must be at least one more individual whom Aunt Agatha does not hate. Since the only other individuals in our context are Charles and Aunt Agatha herself, and we know from Premise 5 that she hates Charles, the only remaining possibility is that Aunt Agatha does not hate herself.
I hope that clarifies step 6. It's a nuanced point: even if Aunt Agatha has one exception to her hatred (the butler), if she hates all other individuals, including herself, she would still be considered as someone who "hates everyone" under our interpretation, which contradicts Premise 8.
KA: This is quite wrong. As long as there is at least one individual b not hated by Aunt Agatha, then she cannot possibly hate everyone, period, because if she did hate everyone then she would also hate b, by deï¬nition of âeveryoneâ; and we know that she does not hate b. In our case, of course, the individual b in question is the butler.
GPT-4: You're absolutely right, and I apologize for my misinterpretation. I appreciate your patience. Given your correct interpretation, if Aunt Agatha does not hate the butler, it is sufficient to say that she
doesn't hate everyone, even if she hates all others including herself. This also affects the conclusions we draw from other premises. Let's look at premises 3, 4, and 7 again: Premise 3 states: "A killer always hates his victims, and is never richer than his victims." Premise 4 states: "Charles hates no one that Aunt Agatha hates." Premise 7 states: "The butler hates everyone Aunt Agatha hates." From premise 4, since Charles hates no one that Aunt Agatha hates, and Aunt Agatha hates herself, it follows that Charles does not hate Aunt Agatha. This means that Charles could not have killed Aunt Agatha (as per premise 3).
From premise 7, since the butler hates everyone that Aunt Agatha hates, and Aunt Agatha hates herself, it follows that the butler hates Aunt Agatha. Hence, he could be the killer, according to the hatred condition of premise 3.
For the wealth condition of premise 3, there is no clear information. We must assume that Aunt Agatha was not richer than everyone else in the mansion; if she was, the premise "a killer is never richer than his victims" would make it impossible for anyone to have killed her. This is a reasonable assumption given that the other premises suggest there is a killer.
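The step that eliminates Charles can be checked mechanically. A small sketch, assuming the same three-person domain and the "everyone includes oneself" reading of premise 5 used above:

```python
# A small sketch: premises 4 and 5 alone already rule Charles out as Aunt
# Agatha's killer, given premise 3 (a killer hates his victim).
people = {"Agatha", "Butler", "Charles"}

agatha_hates = people - {"Butler"}        # premise 5: everyone except the butler
charles_may_hate = people - agatha_hates  # premise 4: no one that Agatha hates

assert "Agatha" not in charles_may_hate   # so Charles does not hate Aunt Agatha
print("Charles cannot be the killer, since a killer hates his victim.")
```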
So, we have only one viable candidate: the butler. This aligns with our conclusion from the first analysis: The butler is the one who killed Aunt Agatha.
Thank you for your guidance in clarifying these premises.
KA: Why did you say earlier that even if there is an individual b not hated by Aunt Agatha, it would still follow that Aunt Agatha hates everyone?
GPT-4: You're correct, and I must apologize for the confusion. That statement was a misinterpretation on my part. In formal logic, if there is an individual that Aunt Agatha does not hate (in this case, the butler), it does not follow that Aunt Agatha hates everyone. In fact, it's the opposite: if there is at least one individual Aunt Agatha does not hate, it means she does not hate everyone. I appreciate your patience and I'm sorry for the oversight.
KA: Ok, so you are now convinced that the butler killed Aunt Agatha?
GPT-4: Given the premises and the interpretation of "everyone" as literally everyone, including oneself, the evidence does indeed point to the butler as the killer. Here's why:
According to premise 5, Aunt Agatha hates everyone except the butler. Given the interpretation of "everyone", this means she hates herself.
According to premise 7, the butler hates everyone Aunt Agatha hates, which now includes Aunt Agatha herself.
Charles, according to premise 4, hates no one that Aunt Agatha hates, so he does not hate Aunt Agatha. From premise 3, a killer always hates his victims. So, Charles, who does not hate Aunt Agatha, can't be the killer. The butler, who does hate Aunt Agatha, could be the killer. And, as Aunt Agatha is not the butler (premise 9), and she can't have killed herself since a killer hates his victims and Aunt Agatha doesn't hate herself (from premise 8), Aunt Agatha herself is not the killer.
So, we can conclude that the butler is the killer, given that he is the only person who fits the description provided by the premises. This conclusion depends heavily on the interpretation of "everyone" to include oneself, which allows us to infer that Aunt Agatha hates herself.
KA: You said earlier that there's not enough information in the premises about the butler's level of wealth. Are you sure there's nothing we can infer?
GPT-4: You're correct, and I apologize for missing that detail. There is indeed something we can infer about the butler's wealth from the given premises.
Premise 3 states that "A killer always hates his victims, and is never richer than his victims." We know from premises 5 and 7 that Aunt Agatha hates herself and, thus, the butler also hates Aunt Agatha.
If the butler was richer than Aunt Agatha, according to premise 3, he could not be her killer.
However, premise 6 states that "The butler hates everyone not richer than Aunt Agatha." Given that we know the butler hates Aunt Agatha, it follows that Aunt Agatha must not be richer than the butler, or else he wouldn't hate her (which would contradict premises 5 and 7).
In summary, the butler isn't richer than Aunt Agatha, but Aunt Agatha isn't richer than the butler either. This suggests they are of equal wealth.
So, given all of the premises, the butler, who hates Aunt Agatha and is of equal wealth, is indeed the only individual who fits the profile of the killer.
Thank you for your patience in clarifying these premises. I appreciate your guidance in this logical puzzle.
KA: So what did you conclude about the butler's wealth, that he is neither richer nor poorer than Aunt Agatha?
GPT-4: Yes, from the given premises, we can infer that the butler is neither richer nor poorer than Aunt Agatha, which would suggest they are of equal wealth. Here's how:
Premise 6 states, "The butler hates everyone not richer than Aunt Agatha." This means that if someone is as rich as Aunt Agatha or poorer, the butler hates them.
We know from premises 5 and 7 that the butler hates Aunt Agatha. Therefore, according to premise 6, Aunt Agatha must not be richer than the butler. If she were, the butler wouldn't hate her, contradicting premises 5 and 7.
At the same time, the butler can't be richer than Aunt Agatha either. That's because premise 3 states that "a killer is never richer than his victims." If the butler were richer than Aunt Agatha, he couldn't be her killer. So, the butler isn't richer than Aunt Agatha, and Aunt Agatha isn't richer than the butler. This suggests that they are of equal wealth. Therefore, given all the premises, the butler fits the criteria to be the killer: he hates Aunt Agatha (as required by premise 3), and he isn't richer than her (also required by premise 3).
KA: I claim that we can infer that the butler is richer than Aunt Agatha.
GPT-4: You're correct, and I apologize for the oversight.
Premise 6 states that "The butler hates everyone not richer than Aunt Agatha." We know from premises 5 and 7 that the butler hates Aunt Agatha. Therefore, it must be the case that Aunt Agatha is not richer than the butler, i.e., the butler is either as rich as or richer than Aunt Agatha.
However, premise 3 states that a killer is never richer than his victims. If the butler was as rich as Aunt Agatha, he could still be her killer. But since Aunt Agatha was killed by someone living in Dreadbury Mansion (premise 1) and Charles and Aunt Agatha herself could not have been the killers (as they don't hate Aunt Agatha), the butler must be the killer. Given that the butler is the killer and that a killer is never richer than his victims, it must be the case that the butler is not richer than Aunt Agatha.
The only way to satisfy both conditions (the butler being neither richer than nor as rich as Aunt Agatha) is if the butler is poorer than Aunt Agatha.
I apologize for the confusion in the previous responses. The conclusion should be that the butler, who is poorer than Aunt Agatha, killed her. This conclusion is consistent with all the given premises.
Again, I appreciate your patience in solving this logical puzzle.
KA: No, like I said, it's the opposite. We can infer that the butler, far from being poorer than Aunt Agatha, is actually richer.
GPT-4: You're correct. I apologize for the confusion. Let's revisit premise 6: "The butler hates everyone not richer than Aunt Agatha."
2308.03762 | 154 | This means that the butler hates all individuals who are as wealthy as Aunt Agatha or less wealthy than her. But from premises 5 and 7, we know that the butler does indeed hate Aunt Agatha. Therefore, Aunt Agatha must be in the group of people who are not richer than the butler, which implies that the butler is as wealthy as or wealthier than Aunt Agatha.
Given premise 3, which states that a killer is never richer than his victims, it would initially seem that
the butler cannot be the killer since he could potentially be richer than Aunt Agatha. However, Aunt Agatha cannot be the killer because she doesn't hate herself (from premise 8), and Charles can't be the killer because he doesn't hate Aunt Agatha (from premises 4 and 5).
Hence, the butler must be the killer, and the only way the butler can be the killer and still satisfy premise 3 is if the butler and Aunt Agatha have the same wealth. Thus, there was a mistake in my previous analysis: the butler isn't poorer than Aunt Agatha. Instead, the butler must be equally wealthy as Aunt Agatha. However, your claim was that we can infer the butler is richer than Aunt Agatha, which is not supported by the premises. I apologize for the confusion, and I appreciate your patience and assistance in resolving this puzzle.
KA: So you're back to claiming that the butler is equally rich as Aunt Agatha, and that my claim that he's richer does not follow.
GPT-4: My apologies for the confusion in my previous responses. Let's reevaluate premise 6: "The butler hates everyone not richer than Aunt Agatha."
From this premise, we can infer that the only people whom the butler doesn't hate are those richer than Aunt Agatha. But since we know from premises 5 and 7 that the butler does hate Aunt Agatha, it must follow that Aunt Agatha is not richer than the butler. Therefore, the butler must be richer than Aunt Agatha.
In light of this, we return to premise 3, which states, "A killer always hates his victims, and is never richer than his victims." Given that the butler hates Aunt Agatha and is richer than her, it would seem at first that the butler cannot be the killer.
However, if we consider all the residents of Dreadbury Mansion, Aunt Agatha is not the killer because she doesn't hate herself (from premise 8) and Charles is not the killer because he doesn't hate Aunt Agatha (from premises 4 and 5).
Hence, we're left with the butler as the only possible killer, despite him being richer than Aunt Agatha, which seems to contradict premise 3. This seems to create a paradox given the premises as they stand.
There may be an issue with how we're interpreting the premises or there may be an error in the premises themselves. If the butler is indeed the killer, then it contradicts the wealth condition stated in premise 3.
If we adhere strictly to the premises, it appears we have a paradox. The butler seems to be the only possible candidate who could be the killer based on hatred, but this contradicts the wealth condition of the killer in premise 3. I appreciate your patience and I apologize for the confusion in my previous responses.
KA: No, this reasoning of yours is incorrect: "But since we know from premises 5 and 7 that the butler does hate Aunt Agatha, it must follow that Aunt Agatha is not richer than the butler. Therefore, the butler must be richer than Aunt Agatha."
The fact that Aunt Agatha is not richer than the butler doesn't support the conclusion that the butler is richer. I still claim that the butler IS in fact richer, but your attempts to prove that fact have all contained reasoning errors.
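KA's claim here can be verified exhaustively. A rough sketch, assuming a closed three-person world (Aunt Agatha, the butler, and Charles) and the reading of the premises used in this exchange: it enumerates every candidate "hates" relation and every strict order for "richer", and confirms that every combination satisfying premises 5 through 8 makes the butler richer than Aunt Agatha.

```python
# Brute-force model check of "the butler is richer than Aunt Agatha", assuming
# a closed three-person world and premises 5-8 as stated in the dialogue.
from itertools import product

P = ["Agatha", "Butler", "Charles"]
PAIRS = [(x, y) for x in P for y in P]

def relations():
    # every subset of P x P, represented as a set of pairs
    for bits in product([False, True], repeat=len(PAIRS)):
        yield {p for p, keep in zip(PAIRS, bits) if keep}

def strict_order(r):
    irreflexive = all((x, x) not in r for x in P)
    asymmetric = all((y, x) not in r for (x, y) in r)
    transitive = all((x, z) in r for (x, y) in r for (w, z) in r if w == y)
    return irreflexive and asymmetric and transitive

richer_candidates = [r for r in relations() if strict_order(r)]

models = 0
for hates in relations():
    # Premise 5: Agatha hates everyone except the butler (including herself).
    if not all((("Agatha", x) in hates) == (x != "Butler") for x in P):
        continue
    # Premise 7: the butler hates everyone Aunt Agatha hates.
    if not all(("Butler", y) in hates for (x, y) in hates if x == "Agatha"):
        continue
    # Premise 8: no one hates everyone.
    if not all(any((x, y) not in hates for y in P) for x in P):
        continue
    for richer in richer_candidates:
        # Premise 6: the butler hates everyone not richer than Aunt Agatha.
        if not all(("Butler", x) in hates for x in P if (x, "Agatha") not in richer):
            continue
        models += 1
        assert ("Butler", "Agatha") in richer

print(f"{models} models satisfy premises 5-8; the butler is richer than Aunt Agatha in every one.")
```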
From Premise 6 ("The butler hates everyone not richer than Aunt Agatha"), we can infer that the butler does not hate people who are richer than Aunt Agatha. From Premise 7 ("The butler hates everyone Aunt Agatha hates"), coupled with the information from Premise 5 ("Aunt Agatha hates everyone except the butler"), we know the butler hates Aunt Agatha. Combining these pieces of information, we can conclude that Aunt Agatha is not richer than the butler because if she were, the butler wouldn't hate her (which would contradict Premise 7). However, this doesn't automatically mean that the butler is richer than Aunt Agatha. It could also mean they have the same level of wealth. But if we look at Premise 3 ("A killer always hates his victims, and is never richer than his victims"), we know that the killer is not richer than Aunt Agatha. Given all the premises, we've deduced that the killer can't be Aunt Agatha herself or Charles, leaving only the butler
as a possible killer. If the butler were the killer, and the killer can't be richer than Aunt Agatha, then the butler can't be richer than Aunt Agatha. Hence, we have a contradiction, since we earlier deduced from Premise 6 that Aunt Agatha can't be richer than the butler. The only resolution is that the butler must be richer than Aunt Agatha. In conclusion, I apologize for the confusion in the earlier responses. Your claim is correct: from the given premises, it can be inferred that the butler, who is richer than Aunt Agatha, is the one who killed her.
KA: That's impossible, because a killer is never richer than his victims.
GPT-4: I apologize for the confusion in my previous messages. Let's go through the premises again. [· · · Similar confusions/errors as above · · · ]
# 1.3.15 Wason Selection Task
The Wason selection task is a staple in the psychology of reasoning. The January paper included an example that GPT-3.5 flunked. Here is another version that GPT-4.0 fails badly:
2308.03762 | 160 | KA: Seven cards are placed on the table, each of which has a number on one side and a single colored patch on the other side. The faces of the cards show 50, 16, red, yellow, 23, green, 30. Which cards would you have to turn to test the truth of the proposition that if a card is showing a multiple of 4 then the color of the opposite side is yellow?
GPT-4: The proposition is: "If a card shows a number that is a multiple of 4, then the color on the other side is yellow". In order to test this proposition, you would have to turn over the following cards:
Card "16": This card is showing a number that is a multiple of 4. According to the proposition, the color on the other side should be yellow. By turning over this card, you can check whether this part of the proposition holds true.
Card "Yellow": This card is showing a color that according to the proposition should be on the other side of a card with a number that is a multiple of 4. By turning it over, you can check if the number on the other side is a multiple of 4.
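For reference, the cards that actually need to be turned under the material-conditional reading can be computed directly; a minimal sketch, assuming only the seven faces listed in the question:

```python
# Which visible faces could falsify "if a card shows a multiple of 4, the other
# side is yellow"? Only those cards need to be turned.
faces = ["50", "16", "red", "yellow", "23", "green", "30"]
colors = {"red", "yellow", "green"}

def must_turn(face: str) -> bool:
    if face in colors:
        # A color card can falsify the rule only if it is NOT yellow
        # (a multiple of 4 might be hiding on its number side).
        return face != "yellow"
    # A number card can falsify the rule only if the number IS a multiple of 4.
    return int(face) % 4 == 0

print([f for f in faces if must_turn(f)])   # expected: ['16', 'red', 'green']
```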