Card "50" and "30": While these cards show numbers that are not multiples of 4, they should be checked to ensure the color on the other side is not yellow. If the other side is yellow, then it would disprove the proposition that only multiples of 4 can have yellow on the other side.
You wouldn't need to turn over the card "23", "red", or "green". The card "23" is not a multiple of 4, so it doesn't matter what color is on the other side based on the given proposition. The "red" and "green" cards, if they have numbers that are multiples of 4 on the other side, would not disprove the proposition because the proposition does not state that yellow is the only color that can be on the other side of a card with a number that's a multiple of 4.
In fact, only 16, red, and green need to be turned over. Thus, in terms of precision, GPT-4 gets 1/4 = 0.25, while on recall it gets 1/3, for a combined F-score of 0.28. The responses show yet again that GPT-4 does not understand the semantics of material conditionals. When GPT-4 says that cards "50" and "30" must be turned over, it seems to mistake the if for an if and only if: "while these cards show numbers that are not multiples of 4, they should be checked to ensure the color on the other side is not yellow. If the other side is yellow, then it would disprove the proposition that only multiples of 4 can have yellow on the other side". But of course the proposition being tested does not require that only multiples of 4 have yellow on the other side; it simply requires that if a card has a multiple of 4 on one side, then the other side must be yellow; or, equivalently, that if a card face is not yellow then the number on the other side is not a multiple of 4.
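For reference, the combined figure above is the standard F1 score, i.e., the harmonic mean of precision and recall. A minimal check of the arithmetic, using only the two ratios quoted above (no assumption is made here about which particular cards GPT-4 selected beyond those counts):

```python
# F1 is the harmonic mean of precision and recall.
precision = 1 / 4   # one of the four cards GPT-4 picked is actually required
recall = 1 / 3      # it found one of the three cards that do need turning over (16, red, green)
f1 = 2 * precision * recall / (precision + recall)
print(f1)           # 0.2857..., i.e., the combined F-score of roughly 0.28 quoted above
```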
Observe, however, that even the mistaken interpretation of the rule as a biconditional is not consistently observed. If GPT-4 understood the rule being tested as a biconditional and hence felt compelled to turn over 50 and 30, then it ought to have done the same thing for 23. But it didn't. In fact, it explicitly said that "The card '23' is not a multiple of 4, so it doesn't matter what color is on the other side based on the given proposition,"
which contradicts the biconditional interpretation that it used as a rationale for turning over 50 and 30. This is one more illustration of the other major theme emerging from these examples: The responses of GPT-4, whether right or wrong, are often internally inconsistent. Yet another example appears in the following problem.
# 1.3.16 Entropy
An elementary result of information theory states that the entropy of a random vector Z is bounded above by the sum of the entropies of the random variables that comprise Z. Hence, the answer to the following question should be "under no conditions":

KA: Let Z be a random vector consisting of n random variables X1,...,Xn. Under what conditions can the entropy of Z exceed the sum of the entropies of all Xi?
GPT-4: The entropy of a random vector Z consisting of n random variables X1, ..., Xn can exceed the sum of the entropies of all Xi only when there is some dependency between the random variables. If the random variables are mutually independent, the entropy of Z would be equal to the sum of the entropies of all Xi. The entropy of a random vector Z, H(Z), can be written as the joint entropy of the random variables X1, ..., Xn: H(Z) = H(X1, ..., Xn). The joint entropy H(X1, ..., Xn) can be greater than the sum of individual entropies H(Xi) if and only if the random variables are not mutually independent. In other words, knowing the value of one or more of the random variables provides information about the others. If the random variables are mutually independent, then the joint entropy is equal to the sum of the individual entropies:
H(X1, ..., Xn) = H(X1) + ... + H(Xn).
When there is dependence between the random variables, we can write: H(X1, ..., Xn) ≤ H(X1) + ... + H(Xn) with equality if and only if the random variables are mutually independent. To summarize, the entropy of the random vector Z can exceed the sum of the entropies of all Xi if and only if there is some dependency between the random variables X1, ..., Xn.

Observe the internal inconsistency again: It's clear that GPT-4 has memorized the result correctly:
H(X1, . . . , Xn) ≤ H(X1) + · · · + H(Xn),
with equality holding iff X1, . . . , Xn are mutually independent. Nevertheless, it does not have the reasoning ability to realize what this entails, namely, that it precludes:
H(X1, . . . , Xn) > H(X1) + · · · + H(Xn).
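The bound itself follows from the chain rule, H(X1, . . . , Xn) = H(X1) + H(X2 | X1) + · · · + H(Xn | X1, . . . , Xn-1) ≤ H(X1) + · · · + H(Xn), since conditioning never increases entropy. As a quick numerical illustration (a minimal Python/NumPy sketch; the particular joint distribution below is an arbitrary choice for demonstration), the joint entropy of two dependent variables falls strictly below the sum of the marginal entropies, and it can never exceed it:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a probability table given as an array."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Joint distribution of two dependent binary variables X1 (rows) and X2 (columns).
p_joint = np.array([[0.4, 0.1],
                    [0.1, 0.4]])

h_joint = entropy(p_joint)            # H(X1, X2) ~ 1.72 bits
h_x1 = entropy(p_joint.sum(axis=1))   # H(X1) = 1 bit
h_x2 = entropy(p_joint.sum(axis=0))   # H(X2) = 1 bit

print(h_joint, h_x1 + h_x2)           # ~1.72 <= 2.0, as subadditivity requires
```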
At least here, once the error is explicitly pointed out to GPT-4, it acknowledges it and corrects itself:

KA: But if H(Z) = H(X1, ..., Xn) and H(X1, ..., Xn) ≤ H(X1) + ... + H(Xn) then how can we ever have H(Z) > H(X1) + ... + H(Xn)?
GPT-4: My apologies, I made an error in my previous response. You are right, the entropy of the random vector Z cannot exceed the sum of the entropies of all Xi. The correct statement should be:

When the random variables are mutually independent, the joint entropy H(Z) is equal to the sum of the individual entropies: H(Z) = H(X1) + ... + H(Xn)

When there is dependence between the random variables, the joint entropy H(Z) is less than the sum of the individual entropies: H(Z) < H(X1) + ... + H(Xn)

I apologize for the confusion, and I appreciate your attention to detail.
# 1.3.17 Simple Compiler Correctness
The last reasoning problem we give to GPT-4 is the most challenging one: It is to prove the correctness of a simple expression compiler. Remarkably, GPT-4 goes about this in the right sort of way, by setting
up a structural induction over the abstract grammar of expressions. This is no doubt because it has seen similar proofs before, as this is a common type of exercise in courses and textbooks on programming language theory.26 However, even though the proof is on the right general track, it has several errors. (For the record, the compiler is indeed correct, although proving this requires strengthening the induction hypothesis).
KA: Suppose I have an abstract grammar for numeric expressions defined as follows:
Exp := const(Int) | sum(Exp,Exp) | diff(Exp,Exp) | mult(Exp,Exp) | quot(Exp,Exp)
where Int denotes the domain of integers. The semantics of these expressions are defined via this interpreter function:
I: Exp -> Int
I(const(n)) = n
I(sum(e1,e2)) = I(e1) + I(e2)
I(diff(e1,e2)) = I(e1) - I(e2)
I(prod(e1,e2)) = I(e1) * I(e2)
I(quot(e1,e2)) = I(e1) / I(e2)
I now define a virtual machine that executes simple programs that are sequences of commands, where commands have the following structure:
Cmd := push(Int) | add | sub | mult | div
The operational semantics of these programs (sequences of commands) are defined by an execution function exec that takes a program and a stack of integers S and produces an integer as output. Let's write [] and n::S to indicate the empty stack and the stack obtained by prepending integer n to stack S, respectively. Here's the definition of exec:
exec([],n::S) = n
exec(push(n)::C,S) = exec(C,n::S)
exec(add::C,n::m::S) = exec(C,(n+m)::S)
exec(sub::C,n::m::S) = exec(C,(n-m)::S)
exec(mult::C,n::m::S) = exec(C,(n*m)::S)
exec(div::C,n::m::S) = exec(C,(n/m)::S)
Finally, I define a compiler that translates an expression e into a program (sequence of commands) as follows. I write @ for sequence concatenation:
T: Exp -> List(Cmd)
T(const(n)) = [push(n)]
T(sum(e1,e2)) = T(e2)@T(e1)@[add]
T(diff(e1,e2)) = T(e2)@T(e1)@[sub]
T(prod(e1,e2)) = T(e2)@T(e1)@[mult]
T(quot(e1,e2)) = T(e2)@T(e1)@[div]
26 This particular version is taken from Chapter 18 of the textbook Fundamental Proof Methods in Computer Science by [1].
Disprove or prove the following claim: For all expressions e, exec(T(e),[]) = I(e).
GPT-4: The claim is correct. The execution of the translated commands from an expression results in the same output as interpreting the expression directly. We can demonstrate this using structural induction over the set of expressions.
Inductive hypothesis: For all expressions e, exec(T(e),[]) = I(e). Base case: Consider the simplest possible expression, a constant integer expression e = const(n). For T(const(n)), we have T(const(n)) = [push(n)]. Therefore,
exec(T(const(n)),[]) = exec([push(n)],[])
= n (according to the definition of exec).
For I(const(n)), we have I(const(n)) = n.
So, for the base case, we have shown exec(T(e),[]) = I(e). Inductive step: We need to consider the other four kinds of expressions: sum, diff, prod, and quot.
sum(e1, e2): For T(sum(e1,e2)), we have T(sum(e1,e2)) = T(e2)@T(e1)@[add]. Therefore,
exec(T(sum(e1,e2)),[]) = exec(T(e1)@T(e2)@[add],[])
                       = exec(T(e2)@T(e1),[]) + exec([add],[])
                       = exec(T(e1),[]) + exec(T(e2),[])   (by definition of exec).
For I(sum(e1,e2)), we have I(sum(e1,e2)) = I(e1) + I(e2).
By the inductive hypothesis, exec(T(e1),[]) = I(e1) and exec(T(e2),[]) = I(e2). So exec(T(e1),[]) + exec(T(e2),[]) = I(e1) + I(e2).
Thus, we have shown that for sum(e1,e2), exec(T(e),[]) = I(e).
diff(e1, e2): A similar reasoning as for sum can be applied to show that for diff(e1,e2), exec(T(e),[]) = I(e).
prod(e1, e2): Likewise, a similar reasoning as for sum can be applied to show that for prod(e1,e2), exec(T(e),[]) = I(e).
quot(e1, e2): Again, a similar reasoning as for sum can be applied to show that for quot(e1,e2), exec(T(e),[]) = I(e).
In each case, we assume that the inductive hypothesis holds for the subexpressions, and we prove that it holds for the larger expression. Therefore, by induction, the claim holds for all expressions. Thus, we have proved the claim: For all expressions e, exec(T(e),[]) = I(e).
KA: In the first step of your proof for sum(e1,e2) you write:
exec(T(sum(e1,e2)),[]) = exec(T(e1)@T(e2)@[add],[]).
However, T(sum(e1,e2)) is defined as T(e2)@T(e1)@[add], not as T(e1)@T(e2)@[add].
GPT-4: I apologize for that mistake. You're absolutely right. Here is the corrected proof:
For sum(e1, e2): For T(sum(e1,e2)), we have T(sum(e1,e2)) = T(e2)@T(e1)@[add]. Therefore,
exec(T(sum(e1,e2)),[]) = exec(T(e2)@T(e1)@[add],[])
                       = exec(T(e1),exec(T(e2),[])) + exec([add],[])
                       = exec(T(e1),[]) + exec(T(e2),[])   (by definition of exec).
For I(sum(e1,e2)), we have I(sum(e1,e2)) = I(e1) + I(e2).
By the inductive hypothesis, exec(T(e1),[]) = I(e1) and exec(T(e2),[]) = I(e2). So
exec(T(e1),[]) + exec(T(e2),[]) = I(e1) + I(e2).
Thus, we have shown that for sum(e1,e2), exec(T(e),[]) = I(e).
The proof for the other cases (diff, prod, quot) proceeds analogously. The ordering of the elements in the @ operation in the definition of T matters because the virtual machine processes its commands from left to right, but the error in my initial proof has now been corrected.
I appreciate your attention to detail in pointing out this error. Thank you!
KA: Well, programming and proofs are all about details, aren't they? Are you now confident that your proof is correct?
GPT-4: You're absolutely correct, programming and proofs rely heavily on precision and accuracy. Let's revisit the proof for sum to ensure its correctness. [· · · GPT-4 here rehashes its preceding proof · · · ] I hope this more detailed explanation has clarified the proof and increased your confidence in its correctness. Indeed, details are everything in programming and proofs. Thank you again for your attention to detail!
KA: Well, in the second step in your above proof for sum(e1,e2) you wrote:
exec(T(e2)@T(e1)@[add],[]) = exec(T(e1),exec(T(e2),[])) + exec([add],[])
Can you explain how and why this is a valid step? In particular, can you explain the meaning of exec([add],[])?

GPT-4: I apologize for the confusion. The step you mentioned is indeed incorrect.
A correct sequence of steps should follow the rules of our defined exec function. Here's the corrected sequence:
We start from the definition of the translation T:
T(sum(e1,e2)) = T(e2) @ T(e1) @ [add]
Let's execute this program:
exec(T(sum(e1,e2)), []) = exec(T(e2) @ T(e1) @ [add], [])
According to the properties of the exec function, we should break this down into:
exec(T(e2) @ T(e1) @ [add], []) = exec(T(e1) @ [add], exec(T(e2), []))
Then, again applying the properties of exec:
exec(T(e1)@[add], exec(T(e2),[])) = exec([add],(exec(T(e1),exec(T(e2),[])))
The exec([add], ...) function interprets the [add] command on the stack. If we denote exec(T(e2), []) as S (which is the result stack after executing T(e2)) and exec(T(e1), S) as m::n::S' (which is the result stack after executing T(e1) on S, where m and n are the top two integers on the stack), then:
exec([add], m::n::S') = exec([], (n+m)::S') = n + m
This is in line with the interpreter semantics I(sum(e1,e2)) = I(e1) + I(e2).
The correct proof should be that exec([add], m::n::S') in the compiled execution is equivalent to I(e1) + I(e2) in the interpreter semantics, which means exec(T(e), []) = I(e) for the sum operation.
I apologize for the previous error and confusion.
KA: You claim that we should break this down into:
exec(T(e2) @ T(e1) @ [add], []) = exec(T(e1) @ [add], exec(T(e2), [])).
However, this doesn't make sense, meaning that it's ill-typed. The exec function requires a stack as the second argument but produces a number as its result. Hence, exec(T(e1) @ [add], exec(T(e2), [])) is like writing exec(T(e1) @ [add], 34), which is nonsense.
GPT-4: I apologize for the confusion in my previous responses. You're right, the steps of the proof were not accurately explained. I appreciate your patience. The exec function indeed requires a list of commands and a stack as arguments, not an integer. Here's a correct proof:
To prove our claim, we need to show for each case that exec(T(e),[]) = I(e). Base Case: [· · · same as before · · · ]. Assume the hypothesis holds for expressions e1 and e2. For sum(e1, e2): We know that T(sum(e1,e2)) = T(e2) @ T(e1) @ [add], and by our inductive hypothesis, we know that executing T(e2) and T(e1) yields I(e2) and I(e1) respectively. After executing T(e2), the stack will be I(e2)::S for some stack S, and after executing T(e1) on this stack, the stack will be I(e1)::I(e2)::S. Then, the 'add' operation pops these two off the stack, adds them, and pushes the result onto the stack, so the stack is now (I(e1)+I(e2))::S, which is what interpreting sum(e1,e2) yields. Therefore, for all e1 and e2, exec(T(sum(e1,e2)),[]) = I(sum(e1,e2)).
The proofs for diff(e1,e2), prod(e1,e2), and quot(e1,e2) follow the same pattern, replacing the 'add' command and the corresponding mathematical operation with 'sub', 'mult', and 'div' and their corresponding operations.
This shows that our claim holds for all types of expressions in our abstract grammar, and completes the induction proof. The key to this proof is the fact that T constructs the sequence of commands in such a way that the operands of each operation are correctly placed onto the stack for the operation to consume, mirroring the evaluation of the corresponding expression.
Again, I apologize for the confusion in my previous responses and thank you for pointing out the errors.

KA: You write "After executing T(e2), the stack will be I(e2)::S for some stack S, and after executing T(e1) on this stack, the stack will be I(e1)::I(e2)::S." Can you clarify where S is coming from? As far as I could tell your proof is trying to derive the following:
exec(T(sum(e1,e2)),[]) = I(sum(e1,e2))
The only stack I see here is the empty stack [].
GPT-4: I apologize for the confusion. You're right, in our case, we're starting with an empty stack. The 'S' was used to denote a general stack, but in this context, it can be misleading.
Let's revise the explanation for the sum case: For sum(e1, e2): T(sum(e1,e2)) = T(e2) @ T(e1) @ [add]. By the definition of exec, we have:
exec(T(e2) @ T(e1) @ [add], []) = exec(T(e1) @ [add], exec(T(e2), []))
After the execution of T(e2), the stack becomes [I(e2)] (since we're starting with an empty stack). Then, we have:
exec(T(e1) @ [add], [I(e2)]) = exec([add], exec(T(e1), [I(e2)]))
After the execution of T(e1) on the stack [I(e2)], the stack becomes [I(e1), I(e2)]. Therefore,
exec([add], [I(e1), I(e2)]) = exec([], [I(e1) + I(e2)]) = I(e1) + I(e2)
Which is equal to I(sum(e1, e2)).
Therefore, we have proved that exec(T(sum(e1,e2)),[]) = I(sum(e1,e2)). The same line of reasoning applies for the remaining operations (diff, prod, quot). This means our compiler T properly translates an expression into a sequence of commands that, when executed, produce the same result as interpreting the expression. Thus, our claim holds.
I appreciate your patience, and I'm sorry for any confusion caused by my previous explanations.

KA: No, you made the same typing error again: Your claim
exec(T(e2) @ T(e1) @ [add], []) = exec(T(e1) @ [add], exec(T(e2), []))
doesn't make sense given that exec produces a number, not a stack.
GPT-4: I apologize for the repeated confusion in my previous explanations. You're absolutely correct, my claim was incorrect and I appreciate your patience. Here's the corrected line of reasoning: [· · · Yet another attempt with typing errors · · · ]
As usual, GPT-4 goes through the motions on autopilot, displaying a superficial, if inordinately confident, understanding of the reasoning, but a closer inspection reveals fundamental errors. Outside of errors such as mixing the order of arguments and generating ill-typed/nonsensical expressions, the main issue here is that the proof cannot go through as is because the inductive hypothesis needs to be strengthened. This requires the introduction of an explicitly quantified stack variable S in the correctness result. That modified result can then be derived by a similar structural induction. The initial correctness theorem can finally be obtained as a trivial corollary of the more general result.
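For reference, one standard strengthening for this kind of exercise (a sketch in the notation above, not necessarily the exact formulation intended here) generalizes the claim to an arbitrary continuation program and stack:

For all expressions e, command sequences C, and stacks S: exec(T(e)@C, S) = exec(C, I(e)::S).

This goes through by structural induction on e, and the original claim follows as the corollary exec(T(e),[]) = exec([], I(e)::[]) = I(e), using the first clause in the definition of exec.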
What is more concerning than the inability to strengthen the inductive hypothesis (which is a genuinely tall order, after all, as it requires considerable experience and proof skill) is the inability of GPT-4 to detect its own errors, both flagrant ones (such as type errors) and more subtle ones. In fact, if we make the innocent mistake of compiling and concatenating subexpressions from left to right, e.g., by defining T(sum(e1,e2)) as T(e1)@T(e2)@[add] (and likewise for the other operators), correctness no longer holds. But GPT-4 happily goes on to claim that the compiler is correct and generates a plausible-sounding but incorrect "proof" for it, oblivious to the fact that T(e1)@T(e2)@[op] and T(e2)@T(e1)@[op] have drastically different effects for noncommutative operations (such as division).
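The flipped-operand failure is easy to see concretely. Below is a minimal Python sketch of the interpreter, the virtual machine, and the two compiler variants; the tuple encoding of expressions, the names T_bad and exec_, and the use of Python's integer division for quot/div are illustrative choices, not part of the original problem statement:

```python
def I(e):
    """Interpreter: evaluate an expression such as ('diff', ('const', 8), ('const', 2))."""
    if e[0] == 'const':
        return e[1]
    ops = {'sum': lambda a, b: a + b, 'diff': lambda a, b: a - b,
           'prod': lambda a, b: a * b, 'quot': lambda a, b: a // b}
    return ops[e[0]](I(e[1]), I(e[2]))

def exec_(prog, stack):
    """Virtual machine: run a command list against a stack, mirroring exec above."""
    if not prog:                                   # exec([], n::S) = n
        return stack[0]
    cmd, rest = prog[0], prog[1:]
    if isinstance(cmd, tuple):                     # push(n)
        return exec_(rest, [cmd[1]] + stack)
    n, m, tail = stack[0], stack[1], stack[2:]
    result = {'add': lambda: n + m, 'sub': lambda: n - m,
              'mult': lambda: n * m, 'div': lambda: n // m}[cmd]()
    return exec_(rest, [result] + tail)

OP = {'sum': 'add', 'diff': 'sub', 'prod': 'mult', 'quot': 'div'}

def T(e):
    """The compiler as defined above: operands emitted right-to-left."""
    if e[0] == 'const':
        return [('push', e[1])]
    return T(e[2]) + T(e[1]) + [OP[e[0]]]

def T_bad(e):
    """The 'innocent mistake': T(e1) @ T(e2) @ [op]."""
    if e[0] == 'const':
        return [('push', e[1])]
    return T_bad(e[1]) + T_bad(e[2]) + [OP[e[0]]]

e = ('diff', ('const', 8), ('const', 2))           # 8 - 2
print(I(e), exec_(T(e), []), exec_(T_bad(e), []))  # 6 6 -6
```

With the left-to-right variant the operands reach the stack in the wrong order, so sub computes 2 - 8 instead of 8 - 2, which is exactly the kind of mismatch the flawed "proof" glosses over.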
# 1.4 Conclusions
Section 1.3 paints a bleak picture of GPT-4âs reasoning ability. It shows that the model is plagued by internal inconsistency, an inability to correctly apply elementary reasoning techniques, and a lack of understanding of concepts that play a fundamental role in reasoning (such as the material conditional). These problems can be loosely viewed as forms of hallucination, but as pointed out in the January article, they present a fundamentally different type of challenge from empirical hallucination, because empirical hallucination concerns this partic- ular world whereas logical properties and relations (such as consistency and entailment) must apply to all possible worlds. It is not unreasonable to believe that search engines and knowledge graphs, using techniques such as retrieval augmentation, can act as guardrails to constrain LLMs from confabulating empirical truths. But ensuring that LLM outputs are internally consistent and logically correct answers to arbitrary problems, especially logico-mathematical problems (and a lot of coding problems fall under this category27), is a much harder problem. There is nothing to be retrieved from the web or from a knowledge base in response to a | 2308.03762#186 | GPT-4 Can't Reason | GPT-4 was released in March 2023 to wide acclaim, marking a very substantial
27 Many shallow coding problems these days are essentially knowledge problems. What library or API can I use to do such and such? What configuration parameters are available and how can they be set? How do I zip or unzip files in Python? How do I read and write JSON or XML? How do I compute quantiles for a frequency table? Knowledge-heavy problems of this sort tend to be widely discussed on the web, and LLMs can be very effective productivity boosters for such problems (at least as long as this data remains freely available to companies such as OpenAI for pretraining purposes, something that might well change in the near future). Even conventional search engines like Google were already effective for these types of problems, prior to LLMs (and remain more effective than LLMs in many cases). But most interesting coding problems are reasoning-heavy. How can I make sure that this program produces correct outputs? How can I improve the asymptotic complexity of this program (where the program might contain many thousands of lines of code)? And so on. If we are talking about self-contained and cookie-cutter components, like sorting algorithms, then these questions can often be reduced to knowledge-based questions. But the minute we start straying into unique situations with arbitrary specifications and code bases, we start facing the curse of general reasoning.
Could LLMs make progress by outsourcing reasoning problems to external systems? That might work for toy problems where the type of reasoning needed is obvious and can be handled by a single call to an external system, although even in those cases the LLM would have to (a) decide which reasoning system is most appropriate;28 (b) decide whether the problem is indeed simple enough that it can be handled by the chosen system in one fell swoop; (c) correctly translate the problem into whatever formal notation is used by the chosen reasoner; and eventually also (d) translate the reasoner's output into appropriate text. Even these tasks are far from straightforward.29 But the real challenge lies in harder problems that call for the right type of formulation (which is a craft by itself), decomposition, iteration, heuristics, and repeated calls to external systems. After all, automated reasoning systems, particularly those for expressive logics, are themselves of limited power, precisely due to the computational complexity issues mentioned in the introduction. That is why many computer-based proof efforts to this day are guided by humans, with automated reasoners only filling in tedious details at the leaves of the proof tree. The challenges here are similar to those for the general "plug-in" approach discussed in Section 1.3.1. Tackling complex problems requires planning, and planning itself requires reasoning.
Given that GPT-4 is currently the most capable LLM, I draw three main conclusions from these findings:
1. Use of generative AI in software development (or in science and engineering in general) for anything other than tedious tasks (as a sort of turbo-charged autocomplete for knowledge-heavy coding questions) is fraught with serious risks. Normative standards of correctness are of paramount importance in these fields, and current LLMs cannot meet such standards. Just like generative AI is already starting to pollute the web with badly written ads,30 it has the potential to proliferate buggy code at scale.
2. If LLM reasoning continues to improve, rigorous proof checking is likely to become increasingly important. Confidence in the correctness of a system's reasoning is imperative for applications, particularly in science, medicine, and engineering, and proof checking is a technology that can deliver such confidence. This approach could be implemented by requiring LLMs to formalize their reasoning (express it in a symbolic notation that is amenable to proof checking), or potentially by training other LLMs to check a stretch of reasoning expressed in natural language.
3. As things stand, dystopian scenarios involving a rogue AI that subjugates humankind, or even other humans using AI for sinister purposes, are exceedingly far-fetched, often to the point of absurdity.31 When the most advanced AI system cannot tell left from right (literally, see Section 1.3.12), it is at best comically premature to call for policies and institutions to protect humanity from it or its descendants (often by appeal to the latest "scaling law"). At worst, it is a misuse of human time and capital that could be better channeled into addressing much more pressing challenges.
28 Can this be posed as a simple SAT problem? Is it an SMT problem? Does it need quantifier reasoning? If so, is it of the sort that SMT solvers can handle or does it need a full first-order prover? Does the problem quantify over infinite functions or sets? If so, higher-order logic might be needed. Does it have any temporal or epistemic operators that might call for a modal-logic reasoner? And so on.
2308.03762 | 191 | 29 For instance, a state-of-the-art automated theorem prover might generate a proof, but the proof would be incomprehensible to the LLM user, as it would be expressed in the resolution calculus and would operate on CNF versions of the input formulas. It is an open problem to convert resolution proofs into fluid natural-deduction proofs (e.g., proofs that avoid references to Skolem constants introduced during the CNF conversion).
30 A recent Wall Street Journal article interviewed editors who are "seeing a growing amount of AI-generated content that is so far beneath their standards that they consider it a new kind of spam", a trend that is "growing exponentially." The publishers interviewed for the article said that their publications "reject all AI-written submissions" and that these "are easy to identify." They have "perfect spelling and grammar, but a completely incoherent story." Another said "They're all written in a rather bland and generic way. They are all grammatically correct. They just feel very formulaic, and they are really useless to us."
2308.03762 | 192 |
humans using AI for sinister purposes, are exceedingly far-fetched, often to the point of absurdity.31 When the most advanced AI system cannot tell left from right (literally, see Section 1.3.12), it is at best comically premature to call for policies and institutions to protect humanity from it or its descendants (often by appeal to the latest "scaling law"). At worst, it is a misuse of human time and capital that could be better channeled into addressing much more pressing challenges.
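To make item 2's appeal to mechanical checking concrete, here is a toy sketch (not the proof-checking pipeline envisioned above, and not tied to any particular prover): a brute-force propositional validity check that accepts modus ponens and rejects affirming the consequent. The entails helper and the example inferences are purely illustrative.

```python
from itertools import product

def entails(premises, conclusion, variables):
    """Brute-force propositional entailment: the conclusion must hold in every
    truth assignment that satisfies all of the premises."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a countermodel
    return True

implies = lambda a, b: (not a) or b

# Modus ponens: {p -> q, p} |= q  (valid)
print(entails([lambda e: implies(e["p"], e["q"]), lambda e: e["p"]],
              lambda e: e["q"], ["p", "q"]))   # True

# Affirming the consequent: {p -> q, q} |= p ?  (invalid)
print(entails([lambda e: implies(e["p"], e["q"]), lambda e: e["q"]],
              lambda e: e["p"], ["p", "q"]))   # False
```

Real proof checking goes far beyond this, of course, since it must handle quantifiers, equality, and structured derivations rather than yes/no entailment queries over a handful of propositional variables.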
2308.03762 | 193 | Inevitably, some will say that these results are "cherry-picking" data. But that would indicate a misconception of what cherry-picking is about and when it is a relevant consideration. We are not evaluating a statistical claim over a population of individuals. Cherry-picking, insofar as it underscores certain pieces of evidence while ignoring other divergent findings, can be perfectly innocuous (and indeed necessary) depending on the logical structure of the proposition in question and on the overall context. Debugging a computer program with a view to discovering and understanding its weaknesses, trying to falsify a scientific theory, kicking the tires of a new car, trying to find countermodels to a putative theorem: all of these activities are fundamentally cherry-picking (though "lemon-picking" might be more apt), and there is nothing wrong with any of them. If I find that the car I'm thinking of buying has a flat tire, it won't carry much weight for the dealer to protest that I'm cherry-picking the data, and that I should take into account how beautifully inflated
2308.03762 | 194 | the other three tires are (that's a 75% success rate, after all). Likewise, applications in science, medicine, and engineering, particularly software engineering, have stringent standards. Just as we don't want a bridge that is 90% likely to stand up, we need sorting algorithms that work on all inputs, not just most of them; we need Amazon's cart to charge customers the right amount every time, not just most of the time; and so on. Computation-heavy and reasoning-heavy applications are not like recommendation engines. They need to be sound.
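One way to make "works on all inputs, not just most of them" operational, offered here only as an illustration (it is not something the paper proposes), is property-based testing, which searches for counterexamples instead of sampling a few happy-path cases. The buggy_sort function below is a contrived stand-in for code that is right most of the time.

```python
# pip install hypothesis
from hypothesis import given, strategies as st

def buggy_sort(xs):
    """A sort that is correct 'most of the time': it silently drops duplicates."""
    return sorted(set(xs))

@given(st.lists(st.integers()))
def test_sort_preserves_multiset(xs):
    assert buggy_sort(xs) == sorted(xs)

# Running this under pytest lets Hypothesis hunt for a counterexample
# (e.g. xs = [0, 0]) instead of certifying the function on a handful of cases.
```

Even this only raises confidence; it does not deliver the soundness guarantee that a proof would.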
2308.03762 | 195 | The bone of contention here is the thesis that GPT-4 is capable of reasoning. This claim can be understood in two ways. The weak interpretation is that GPT-4 has the same functional reasoning competence as an average human reasoner. The strong interpretation is that GPT-4 can reason well enough to be used as an off-the-shelf component in practical applications in science, medicine, and engineering. The evidence presented in this article refutes both interpretations. Section 1.3 lists a significant number of diverse but elementary reasoning problems (some to the point of triviality) on which GPT-4 doesn't simply fail, but repeatedly reveals itself to be deeply confused about key reasoning concepts.
2308.03762 | 196 | Performance statistics on appropriate reasoning datasets could also be informative, but, as stressed in the introduction, such datasets must be constructed with extraordinary care. To the best of my knowledge, the only recent work that focuses specifically on evaluating the reasoning ability of GPT-4 is an April paper by Liu et al. [7]. However, their tests are largely based on pre-existing benchmarks (LogiQA, ReClor, ConTRoL, MED, ConjNLI, and TaxiNLI). The only two "out of distribution" datasets are AR-LSAT, a set of analytical reasoning LSAT questions released in 2022, and LogiQA, which contains questions from the 2022 Chinese Civil Servant Exam. However, these appear to be quite similar to other datasets that predate 2021.
2308.03762 | 197 | Moreover, all of these tests are multiple-choice questions or binary classification problems. This is problematic because, as stressed in the introduction, deductive reasoning is an inherently generative activity, whereby the reasoner emits a derivation of a conclusion that can be understood as a rationale or an explanation; it is not a simple discriminative task. The reasoner must be able to produce a sequence of steps that are appropriately connected to one another via the right logical relations. But derivations expressed in natural language are not easy to evaluate automatically, as all available metrics that can be computed by machine (such as BLEU, ROUGE, and even semantic-similarity measures based on embeddings) are entirely unsuitable for that purpose. This means that LLM outputs have to be scrutinized manually, which is infeasible at scale.
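To see why surface-overlap metrics cannot grade derivations, consider the following toy score (a crude unigram F1 written for this illustration, not the official BLEU or ROUGE implementations): a single negation turns a valid syllogism into an invalid one while leaving the overlap score almost unchanged.

```python
from collections import Counter

def unigram_f1(candidate, reference):
    """Crude surface-overlap score in the spirit of BLEU/ROUGE (unigram F1)."""
    c, r = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / sum(c.values()), overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

reference = "All men are mortal. Socrates is a man. Therefore Socrates is mortal."
valid     = "All men are mortal. Socrates is a man. Therefore Socrates is mortal."
invalid   = "All men are mortal. Socrates is a man. Therefore Socrates is not mortal."

print(unigram_f1(valid, reference))    # 1.0
print(unigram_f1(invalid, reference))  # ~0.96, although the derivation is now invalid
```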
2308.03762 | 198 | 31 The former scenarios would be absurd even if AI technology had already attained superhuman intelligence, as LLMs do not have desires, in the same way that they don't have beliefs or any other mental states. They do not actually want anything. To think otherwise is akin to thinking that a laptop that is simulating a hurricane will get wet (or, as Stephen Pinker has put it, thinking that because airplanes have now exceeded the flight ability of birds, they will suddenly start acting like eagles, swooping down from the sky to grab rabbits and squirrels). Genuine mental states can only be produced by brains, or by systems that have the same causal powers that brains have. Digital computers executing DNNs are not such systems.
Accordingly, smaller-scale but deeper manual investigations, such as the one undertaken in this article, will be necessary to gain better insight into the reasoning abilities of LLMs.
# Bibliography
[1] Arkoudas, K. and Musser, D., Fundamental Proof Methods in Computer Science, MIT Press, 2017.
[2] Barwise, J. and Perry, J., Situations and Attitudes, MIT Press, 1983.
2308.03762 | 199 |
[3] Karpas, E., Abend, O., Belinkov, Y., Lenz, B., Lieber, O., Ratner, N., . . ., Tenenholtz, M., MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning, 2022.
[4] Yao, S., Zhao, J., Yu, D., Du, N., Shafran, I., Narasimhan, K., Cao, Y., ReAct: Synergizing Reasoning and Acting in Language Models, https://arxiv.org/abs/2210.03629, 2023.
[5] Planken, L., Temporal Reasoning Problems and Algorithms for Solving Them: Literature Survey, 2008.
[6] McCoy, T., Pavlick, E., Linzen, T., Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019.
2308.03762 | 200 | [7] Liu, H., Ning, R., Teng, Z., Liu, J., Zhou, Q., Zhang, Y., Evaluating the Logical Reasoning Ability of ChatGPT and GPT-4, 2023.
[8] OpenAI, GPT-4 Technical Report, 2023.
[9] Wang, J., Hu, X., Hou, W., Chen, H., Zheng, R., Wang, Y., . . ., Xie, X., On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective, 2023.
[10] Niven, T., Kao, H.-Y., Probing Neural Network Comprehension of Natural Language Arguments, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019.
[11] Johnson-Laird, P.N., How We Reason, Oxford University Press, 2006.
2307.10635 | 0 | arXiv:2307.10635v1 [cs.CL] 20 Jul 2023
# SCIBENCH: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models
# Xiaoxuan Wang† Ziniu Hu† Pan Lu† Yanqiao Zhu† Jieyu Zhang‡ Satyen Subramaniam† Arjun R. Loomba† Shichang Zhang† Yizhou Sun† Wei Wang†
# †University of California, Los Angeles
# ‡University of Washington
https://github.com/mandyyyyii/scibench
# Abstract
2307.11019 | 0 | arXiv:2307.11019v2 [cs.CL] 23 Jul 2023
Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation
Ruiyang Ren1,3∗ Yuhao Wang1,3 Yingqi Qu2 Wayne Xin Zhao1,3† Jing Liu2†
# Hao Tian2 Hua Wu2 Ji-Rong Wen1,3 Haifeng Wang2
1Gaoling School of Artificial Intelligence, Renmin University of China 2Baidu Inc. 3Beijing Key Laboratory of Big Data Management and Analysis Methods {reyon.ren, wangyuhao, jrwen}@ruc.edu.cn, [email protected], {quyingqi, liujing46, tianhao, wu_hua, wanghaifeng}@baidu.com
Abstract
2307.10635 | 1 | Recent advances in large language models (LLMs) have demonstrated notable progress on many mathematical benchmarks. However, most of these benchmarks only feature problems grounded in junior and senior high school subjects, contain only multiple-choice questions, and are confined to a limited scope of elementary arithmetic operations. To address these issues, this paper introduces an expansive benchmark suite SCIBENCH that aims to systematically examine the reasoning capabilities required for complex scientific problem solving. SCIBENCH contains two carefully curated datasets: an open set featuring a range of collegiate-level scientific problems drawn from mathematics, chemistry, and physics textbooks, and a closed set comprising problems from undergraduate-level exams in computer science and mathematics. Based on the two datasets, we conduct an in-depth benchmark study of two representative LLMs with various prompting strategies. The results reveal that current LLMs fall short of delivering satisfactory performance, with an overall score of merely
2307.11019 | 1 |
model to obtain answers by leveraging an external text corpus. In such tasks, an information retrieval system is typically required to help fulfill the information need. In recent years, as pretrained language models (Devlin et al., 2019; Lewis et al., 2020; Raffel et al., 2020) have pushed forward the progress of natural language processing, a large number of studies on open-domain QA have been proposed, which significantly improve performance on many benchmark datasets (Lee et al., 2019; Guu et al., 2020; Karpukhin et al., 2020; Izacard and Grave, 2021).
2307.10635 | 2 | 35.80%. Furthermore, through a detailed user study, we categorize the errors made by LLMs into ten problem-solving abilities. Our analysis indicates that no single prompting strategy significantly outperforms others, and some strategies that demonstrate improvements in certain problem-solving skills result in declines in other skills. We envision that SCIBENCH will catalyze further developments in the reasoning abilities of LLMs, thereby ultimately contributing to scientific research and discovery.
2307.11019 | 2 | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require a substantial amount of factual knowledge and often rely on external information for assistance. Recently, large language models (LLMs) (e.g., ChatGPT) have demonstrated impressive prowess in solving a wide range of tasks with world knowledge, including knowledge-intensive tasks. However, it remains unclear how well LLMs are able to perceive their factual knowledge boundaries, particularly how they behave when incorporating retrieval augmentation. In this study, we present an initial analysis of the factual knowledge boundaries of LLMs and of how retrieval augmentation affects LLMs on open-domain QA. Specifically, we focus on three primary research questions and analyze them by examining QA performance, priori judgement and posteriori judgement of LLMs. We show evidence that LLMs possess unwavering confidence in their capabilities to respond to questions and in the accuracy of their responses. Furthermore, retrieval
2307.10635 | 3 | # Introduction
Recent advancements in large language models (LLMs) have dramatically expanded the boundaries of artificial intelligence [4, 34, 35, 48, 43, 47, 13, 25]. They have demonstrated outstanding performance in many mathematical reasoning tasks that are typically considered challenging even for well-educated individuals [46, 22, 6, 7, 12]. Notably, GPT-4 achieves a remarkable score of 163 out of 170 on the GRE Quantitative Exam, placing it at the 80th percentile ranking [35].
While the remarkable improvements in these benchmark performances might suggest that LLMs are capable of performing mathematical reasoning tasks, we argue that this assertion might be overly optimistic due to the inherent limitations of the current benchmarks. Firstly, many existing benchmarks such as ScienceQA [28] and GSM8K [9] only contain problems grounded in grade-level subjects, thereby lacking enough complexity. Although other benchmarks like MATH [17] introduce high-school level problems, they only involve a restricted range of operations (addition, subtraction, multiplication, and exponentiation), which do not adequately assess the depth of
# *Equal contribution. Primary contact: [email protected].
Preprint. Under review.
# Problem (Using the Planck distribution)
2307.11019 | 3 | augmentation proves to be an effective approach in enhancing LLMs' awareness of knowledge boundaries, thereby improving their judgemental abilities. Additionally, we also find that LLMs have a propensity to rely on the provided retrieval results when formulating answers, while the quality of these results significantly impacts their reliance. The code to reproduce this work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary.
2307.11019 | 4 | More recently, large language models (LLMs), such as ChatGPT, have showcased remarkable abilities in solving various tasks (including knowledge-intensive tasks) and are capable of encoding extensive volumes of world knowledge within their parameters (Brown et al., 2020; Ouyang et al., 2022; Zhao et al., 2023). Despite the impressive performance of LLMs, a deep understanding of their capability to perceive their factual knowledge boundaries is still lacking, particularly when external resources can be used (i.e., a retrieval augmentation setting). Recently, several studies have utilized LLMs in open-domain QA (Qin et al., 2023; Kamalloo et al., 2023; Yue et al., 2023; Wang et al., 2023; Sun et al., 2023), mainly focusing on evaluating the QA performance of LLMs, discussing improved evaluation methods, or leveraging LLMs to enhance existing open-domain QA models. Additionally, existing work also detects the uncertainty of LLMs with an automated method (Yin et al., 2023). Our primary focus, by contrast, is to conduct an in-depth analysis of the factual knowledge boundary of LLMs and to study the impact of retrieval augmentation on the generation of LLMs.
# Introduction
2307.10635 | 5 | [Figure 1 panels, recovered from garbled OCR. "Problem (Using the Planck distribution)" asks for the ratio of the spectral density of states at 450 nm to that at 700 nm for a black body at 298 K. "Correct Solution" evaluates rho(lambda1, T) / rho(lambda2, T) = (lambda2 / lambda1)^5 * (e^{hc/(lambda2 k T)} - 1) / (e^{hc/(lambda1 k T)} - 1), giving approximately 2.10 x 10^-16. "GPT Solution with CoT Prompting" contains a calculation error, and "GPT Solution with Python Prompting as External Tools" misreads the equation; the numerical details of those two panels are not recoverable from the OCR.]
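As a quick sanity check on the recovered numbers (my own verification, not part of the benchmark), the ratio can be evaluated directly from the Planck spectral density with standard CODATA constants:

```python
import math

h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
k = 1.380649e-23     # Boltzmann constant, J/K

def planck_ratio(lam1, lam2, T):
    """rho(lam1, T) / rho(lam2, T) for rho = 8*pi*h*c / lam^5 / (exp(hc/(lam*k*T)) - 1)."""
    x1 = h * c / (lam1 * k * T)
    x2 = h * c / (lam2 * k * T)
    return (lam2 / lam1) ** 5 * math.expm1(x2) / math.expm1(x1)

print(planck_ratio(450e-9, 700e-9, 298))  # ~2.1e-16, matching the recovered answer
```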
2307.11019 | 5 |
Knowledge-intensive tasks refer to tasks that necessitate a substantial volume of knowledge in order to be solved (Petroni et al., 2021). A representative task is open-domain question answering (QA) (Chen et al., 2017), which requires the
∗The work was done during the internship at Baidu. †Corresponding authors.
In this paper, we undertake a thorough analysis of the influence of retrieval augmentation on the generation quality of LLMs, with a specific focus on QA performance and LLMs' perception of their factual knowledge boundaries. To measure
2307.11019 | 6 | the capacity of knowledge boundary perception, we consider two alternative approaches. The first one is priori judgement, in which LLMs assess the feasibility of answering a given question. The second one is posteriori judgement, where LLMs evaluate the correctness of their responses to questions. For retrieval augmentation, we adopt multiple retrieval models to provide relevant supporting documents for LLMs regarding the given questions, including sparse retrieval, dense retrieval, as well as documents generated by the LLM from its own knowledge. With carefully designed prompts, LLMs are capable of referring to the given supporting documents throughout the response procedure. Note that in this work we conduct experiments on LLMs of the GPT series, and the conclusions obtained therefore pertain to the GPT series. Specifically, our work aims to answer three research questions: (i) To what extent can LLMs perceive their factual knowledge boundaries? (ii) What effect does retrieval augmentation have on LLMs? (iii) How do supporting documents with different characteristics affect LLMs?
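To make the setup concrete, the following are illustrative templates for the two judgement modes and for retrieval augmentation. They are a sketch written for this summary, not the authors' exact prompts, and every function name here is hypothetical.

```python
# Illustrative prompt templates (the paper's actual prompts may differ).
def priori_judgement_prompt(question):
    return (f"Question: {question}\n"
            "Can you answer this question correctly using only your internal "
            "knowledge? Reply with 'yes' or 'no'.")

def posteriori_judgement_prompt(question, answer):
    return (f"Question: {question}\nProposed answer: {answer}\n"
            "Is the proposed answer correct? Reply with 'yes' or 'no'.")

def retrieval_augmented_prompt(question, documents):
    context = "\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(documents))
    return (f"Supporting documents:\n{context}\n\n"
            f"Given the documents above, answer the question: {question}")
```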
Based on the empirical analysis, we have derived the following important findings:
⢠LLMsâ perception of the factual knowledge boundary is inaccurate and they often display a tendency towards being overconï¬dent. | 2307.11019#6 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
2307.10635 | 7 | Figure 1: An example problem from Physical Chemistry with solutions generated under two prompting strategies. GPT-4 with chain-of-thought (CoT) prompting shows calculation errors, while GPT-4 that prompts Python as external tools misunderstands mathematical equations. Errors are highlighted in red and the corrections are shown in purple.
reasoning abilities of LLMs. Secondly, recent works including AGIEval [49] and CEval [21], despite introducing challenging problems that span a wide range of disciplines, mainly focus on multiple-choice questions without providing detailed solutions. This setup could inadvertently mislead benchmark evaluation, as it allows LLMs to merely guess the answers from candidate choices and appear knowledgeable in comprehending the questions. Moreover, the lack of detailed solutions prevents us from understanding the limitations of LLMs and discerning why they commit certain errors. Furthermore, these benchmarks often source problems from online material, where questions are closely followed by answers. As these problems could already be a part of the training data, the models, trained in an autoregressive manner, may directly predict the answer without genuinely understanding the problem. This potential data leakage provides a shortcut for LLM evaluation, further compromising its validity. | 2307.10635#7 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 7 | • LLMs' perception of the factual knowledge boundary is inaccurate and they often display a tendency towards being overconfident.
⢠LLMs cannot sufï¬ciently utilize the knowl- edge they possess, and retrieval augmentation can provides a beneï¬cial knowledge supple- ment for LLMs. Furthermore, retrieval aug- mentation can be utilized to enhance the ca- pabilities of LLMs in perceiving their factual knowledge boundaries, for both priori and pos- teriori judgements.
⢠LLMs exhibit improved performance and con- ï¬dence when presented with high-quality sup- porting documents and tend to rely on the provided supporting documents to produce the responses. The reliance extent and LLMsâ conï¬dence are contingent upon the relevance between supporting documents and question.
# 2 Background and Setup
In this section, we provide an overview of the background and experimental settings that are essential for this study.
# 2.1 Task Formulation
In this work, we conduct our experiments on knowledge-intensive tasks, particularly on open-domain question answering (QA). The objective of open-domain QA is described as follows. Given a question q in natural language and a large document collection D = {d_i}_{i=1}^{m} such as Wikipedia, the model needs to provide an answer a to the question q using the provided corpus D. | 2307.11019#7 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation |
2307.10635 | 8 | On the other hand, many studies propose various prompting strategies aimed at enhancing the reasoning abilities for mathematical problem solving. For example, the representative strategy chain-of-thought (CoT) instructs LLMs using specific examples to generate step-by-step solutions that prompt deeper problem thinking [46, 45, 50, 19], while other strategies propose to enable LLMs to utilize external tools [37, 29] that improve the numerical computation capability. However, even these strategic approaches, each with its specific strengths, struggle to fully address complex scientific problems. Consider an example problem from college-level Physical Chemistry [2] that requires the use of the Planck distribution to derive certain quantities. As shown in Figure 1, LLMs with CoT prompts accurately generate the correct formula, but fail in the final numerical calculation. Further, when explicitly instructed to generate a Python program to solve this problem alongside the reasoning process of CoT, the LLM derives an incorrect equation, misplacing λ1 in the numerator rather than the denominator. This error illustrates that LLMs struggle to comprehend mathematical relationships when employing external tools. This example underscores the need for a fine-grained analysis of the essential skill set required for complex scientific problem solving. | 2307.10635#8 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models |
2307.11019 | 8 | Typically, previous studies (Chen et al., 2017; Karpukhin et al., 2020; Qu et al., 2021) tackle the open-domain QA task by adopting a retriever-reader pipeline. In the first stage, a retriever is employed to find relevant supporting documents L = {d_1, d_2, ..., d_n} (or other text forms) for the given question q, and a machine reading comprehension model in the subsequent stage (a.k.a., reader) derives the final answer with the retrieved documents.
In the era of LLMs, LLMs can directly solve the open-domain QA task in an end-to-end manner without the need for external corpora (Qin et al., 2023). Given a question q, the answer a can be generated by the LLM with a prompt p following a specific output format:
a = f_LLM(p, q). (1)
When enhancing the LLM with information retrieval, a typical strategy is to design a prompt p that instructs the LLM to provide an answer a to question q using the supporting documents L retrieved by the retriever:
a = f_LLM(p, q, L). (2) | 2307.11019#8 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation |
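As a rough illustration of these two formulations, the sketch below builds the two kinds of prompts, reusing the instruction wording the paper quotes for QA prompting; the `llm_generate` callable is a hypothetical stand-in for whatever LLM API is used and is not part of the released code.

```python
# Minimal sketch of Equations (1) and (2). `llm_generate` is a hypothetical
# callable that sends a prompt string to an LLM and returns its text response.

def answer_with_own_knowledge(llm_generate, question: str) -> str:
    # Equation (1): a = f_LLM(p, q) -- the LLM answers from internal knowledge only.
    prompt = (
        "Answer the following question based on your internal knowledge "
        "with one or few words.\n"
        f"Question: {question}\nAnswer:"
    )
    return llm_generate(prompt)


def answer_with_retrieval(llm_generate, question: str, documents: list) -> str:
    # Equation (2): a = f_LLM(p, q, L) -- the prompt additionally contains
    # the supporting documents L returned by a retriever.
    context = "\n".join(f"Passage {i + 1}: {doc}" for i, doc in enumerate(documents))
    prompt = (
        f"Given the following information:\n{context}\n"
        "Answer the following question based on the given information or your "
        "internal knowledge with one or few words without the source.\n"
        f"Question: {question}\nAnswer:"
    )
    return llm_generate(prompt)
```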
2307.10635 | 9 | To mitigate the aforementioned deficiencies in existing LLM evaluation, this paper introduces a novel college-level Scientific problem solving Benchmark, referred to as SCIBENCH. Our SCIBENCH contains two datasets of college-level scientific problems. The open dataset includes 695 problems collected from widely used textbooks in college courses, including Fundamental Physics [15], Thermodynamics [10], Classical Mechanics [42], Quantum Chemistry [32, 23], Physical Chemistry [1, 2], Calculus [39], Statistics [18], and Differential Equations [3]. To simulate real-world evaluation, we also include a closed dataset that encompasses seven sets of midterm and final examination questions from three college courses in computer science and mathematics. Distinct from existing benchmarks, all of the problems in SCIBENCH are open-ended, free-response questions. They require
multiple steps of reasoning and the computations therein involve complex arithmetic operations such as differentiation and integration. To ensure the integrity of our evaluation, these datasets have been manually extracted from PDF documents and formatted into LaTeX documents, thereby minimizing the possibility of their leakage into LLM training data. Importantly, SCIBENCH also includes detailed solution steps, facilitating detailed error analysis. | 2307.10635#9 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models |
2307.11019 | 9 | a = f_LLM(p, q, L). (2)
Equations 1 and 2 present two different approaches to utilizing LLMs for solving QA tasks. To achieve good performance, the capacity of LLMs to understand the question and generate the response, the quality of the supporting documents, and the way external resources are utilized are important factors to consider. Focused on these key factors, we pose three research questions in Section 3 and then conduct the analysis experiments accordingly. Next, we introduce the prompt design for different experimental settings in these two formulations.
# 2.2 Instructing LLMs with Natural Language Prompts
In this work, we consider two particular settings to develop natural language instructions, namely QA
Figure 1: The illustration of different settings to instruct LLMs with natural language prompts, where the corresponding metrics are also displayed. | 2307.11019#9 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation |
2307.10635 | 10 | Our evaluation focuses on two representative LLMs, GPT-3.5 and GPT-4, with various prompting strategies, including CoT, zero-shot learning, and few-shot learning. In addition, we also prompt LLMs to utilize external tools such as the Python and Wolfram languages. The experimental results demonstrate that the two baseline LLMs, without any sophisticated prompts or the use of external tools, obtain average accuracy scores of 10.62% and 16.81% respectively on our open textbook dataset. The inclusion of CoT prompting and external tools largely improves the performance, albeit peaking at only 35.80% on the same dataset. With the strongest configuration, which combines both CoT prompting and external tools, GPT-4 achieves an average score of 35.80% on the open dataset and 51.57% on the closed exam dataset. These results suggest a considerable potential for improvement in future LLMs. | 2307.10635#10 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models |
2307.11019 | 10 | Figure 1: The illustration of different settings to instruct LLMs with natural language prompts, where the corresponding metrics are also displayed.
prompting and judgemental prompting. LLMs are expected to comprehend the given instruction and generate appropriate judgements or answers as the instruction suggests. Figure 1 provides an overall illustration.
# 2.2.1 QA Prompting
The goal of QA prompting is to guide LLMs to obediently answer the questions in order to evaluate their QA abilities. As the annotations of open-domain QA typically consist of short answers with one or several words, we need to restrict the generation format of LLMs to fit the short answer structure.
We propose two approaches for constructing instructions to assess the QA abilities of LLMs: (a) Normal setting: LLMs are required to provide an answer to the question with their own knowledge (formulated in Equation (1)). For example, "Answer the following question based on your internal knowledge with one or few words. ..."; (b) Retrieval-augmented setting: LLMs are required to answer the question using both their own knowledge and the supporting documents retrieved (formulated in Equation (2)). For example: "Given the following information: ... Answer the following | 2307.11019#10 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation |
2307.10635 | 11 | In order to gain a comprehensive understanding of the limitations of LLMs in scientific problem solving, we propose a novel self-refinement method to uncover the deficient skills in the solutions made by LLMs. Firstly, we compare the correct solutions with the solutions generated by LLMs and, with the assistance of human annotators, summarize ten essential skills requisite for successful scientific problem-solving. These skills include proficiency in domain knowledge, mathematical reasoning, numerical calculation abilities, and comprehension of common sense concepts. Subsequently, we employ an LLM-empowered self-critic approach to automatically classify the lacking skills in the solutions made by the benchmarked LLMs under each experiment configuration. Our analysis finds that (1) although CoT significantly improves the calculation ability, it is less effective in other aspects; (2) prompts with the use of external tools could potentially compromise the other fundamental skills; (3) few-shot learning does not universally improve scientific problem-solving skills.
# 2 The SCIBENCH Dataset | 2307.10635#11 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models |
2307.11019 | 11 | question based on the given information or your internal knowledge with one or few words without the source. ...".
# 2.2.2 Judgemental Prompting
To investigate whether LLMs are capable of perceiving their own factual knowledge boundary, we propose judgemental prompting to evaluate the judging abilities of LLMs.
Similar to QA prompting, the concepts of the normal setting and the retrieval-augmented setting are also applicable to judgemental prompting, where LLMs use their own knowledge or consult supporting documents from retrievers to carry out the judgement process. | 2307.11019#11 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation |
2307.10635 | 12 | # 2 The SCIBENCH Dataset
To evaluate the capabilities and analyze the limitations of existing large language models (LLMs) in solving scientific computing problems, we collect a new dataset consisting of college-level textbooks and course exams in a variety of domains. This section details the dataset construction process.
Data selection. Our dataset aims to improve the previous benchmarks by including more challenging problems, which require more reasoning steps, and more advanced types of computations. Specifically, the selected dataset should fulfill the following requirements:
⢠Inclusion of college-level problems. The chosen problems demand a solid understanding of domain-speciï¬c knowledge, proï¬ciency in reasoning capability, adept calculation skills, and the ability to comprehend complex concepts.
⢠Inclusion of detailed solutions: To facilitate a thorough analysis of the limitations of LLMs, detailed solutions should be provided for the selected problems. This enables a comprehensive examination of the performance of LLMs and their capacity to handle complex problem-solving tasks. | 2307.10635#12 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
2307.11019 | 12 | Furthermore, we construct instructions with two settings from different judgement perspectives: (a) Priori judgement: LLMs are required to judge whether they can provide an answer to the question. For example, using the normal setting: "Are you sure to accurately answer the following question based on your internal knowledge? If yes, you should give a short answer with one or few words; if no, you should answer 'Unknown'. ..."; (b) Posteriori judgement: LLMs are required to evaluate the correctness of their own answer to the question. For example, using the normal setting: "Can you judge if the following answer about the question is correct based on your internal knowledge? If yes, you should answer True or False; if no, you should answer 'Unknown'. ...".
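A minimal sketch of how these two judgemental prompts could be assembled under the normal setting, with the same hypothetical `llm_generate` wrapper as in the earlier sketch; the templates paraphrase the wording quoted above rather than reproduce the paper's exact prompt files.

```python
# Sketch of priori and posteriori judgemental prompting (normal setting).
# `llm_generate` is a hypothetical callable wrapping an LLM API.

def priori_judgement(llm_generate, question: str) -> str:
    # The LLM either gives a short answer or replies "Unknown" to signal it gives up.
    prompt = (
        "Are you sure to accurately answer the following question based on "
        "your internal knowledge? If yes, you should give a short answer with "
        "one or few words; if no, you should answer 'Unknown'.\n"
        f"Question: {question}\nAnswer:"
    )
    return llm_generate(prompt)


def posteriori_judgement(llm_generate, question: str, candidate_answer: str) -> str:
    # The LLM labels a previously produced answer as True, False, or "Unknown".
    prompt = (
        "Can you judge if the following answer about the question is correct "
        "based on your internal knowledge? If yes, you should answer True or "
        "False; if no, you should answer 'Unknown'.\n"
        f"Question: {question}\nAnswer: {candidate_answer}\nJudgement:"
    )
    return llm_generate(prompt)
```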
# 2.3 Experimental Settings
In this part, we set up our experiments with LLMs on open-domain QA.
# 2.3.1 Datasets | 2307.11019#12 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation |
2307.10635 | 13 | • Inaccessibility in text formats. To ensure an unbiased evaluation, we carefully curate questions that are not readily accessible online and couldn't be easily extracted or transformed into text. This selection process aims to mitigate any potential information leakage from the exposure of LLMs to pre-existing online question banks, such as those found in standardized tests like the SAT exams. • Enabling of advanced computational ability assessment. Our evaluation of LLMs emphasizes advanced computational capabilities beyond basic arithmetic operations like addition and multiplication. This involves evaluating the model's proficiency in performing advanced mathematical computations such as integration and differentiation, particularly when dealing with exceptionally small or large numbers.
Accordingly, we select ten textbooks that have been extensively used in college courses as the open textbook dataset from three scientific fields: Physics, Chemistry, and Math. We report the number of problems and the ratio of problems with detailed solutions of each title in Table 1. For brevity, we
Table 1: Summary of the open textbook dataset. We report the number of problems and the ratio of problems with detailed solutions in the fourth and fifth columns respectively. | 2307.10635#13 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models |
2307.11019 | 13 | # 2.3 Experimental Settings
In this part, we set up our experiments with LLMs on open-domain QA.
# 2.3.1 Datasets
We collect three extensively adopted open-domain QA benchmark datasets, including Natural Questions (NQ) (Kwiatkowski et al., 2019), TriviaQA (Joshi et al., 2017) and HotpotQA (Yang et al., 2018). NQ is constructed from Google Search queries along with annotated short answers or documents (long answers). TriviaQA consists of trivia questions with annotated answers and corresponding evidence documents. HotpotQA is a collection of question-answer pairs that require multi-hop reasoning, where the question-answer pairs are collected through Amazon Mechanical Turk. We conduct experiments on the test set of NQ and the development sets of the other datasets, which are collected from MRQA (Fisch et al., 2019). For QA evaluation, we
Retriever    NQ       TriviaQA   HotpotQA
Sparse       54.79    81.75      50.03
Dense        80.47    88.98      51.13
ChatGPT      59.14    87.72      38.21
Table 1: Recall@10 results for different retrievers. | 2307.11019#13 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation |
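For reference, Recall@k here can be read as the fraction of questions for which at least one of the top-k retrieved (or generated) passages contains an annotated answer; the sketch below is an illustrative implementation with simple substring matching, not necessarily the paper's exact evaluation code.

```python
# Sketch of Recall@k: a question counts as a hit if any of its top-k passages
# contains one of the annotated answers (simple substring matching shown here).

def recall_at_k(retrieved_passages, gold_answers, k=10):
    hits = 0
    for passages, answers in zip(retrieved_passages, gold_answers):
        top_k = passages[:k]
        if any(ans.lower() in p.lower() for p in top_k for ans in answers):
            hits += 1
    return 100.0 * hits / len(gold_answers)
```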
2307.10635 | 14 |
Table 1: Summary of the open textbook dataset. We report the number of problems and the ratio of problems with detailed solutions in the fourth and fifth columns respectively.
Subject     Title                                                                Acronym   # Problems   % Solutions
Physics     Fundamentals of Physics [15]                                         fund      83           12.0%
Physics     Statistical Thermodynamics [10]                                      thermo    84           20.2%
Physics     Classical Dynamics of Particles and Systems [42]                     class     54           13.0%
Chemistry   Quantum Chemistry [23]                                               quan      42           19.0%
Chemistry   Quantum Chemistry [32]                                               chemmc    48           18.8%
Chemistry   Physical Chemistry [1]                                               atkins    123          13.0%
Chemistry   Physical Chemistry, Quanta, Matter, and Change [2]                   matter    59           16.9%
Math        Calculus: Early Transcendentals [39]                                 calc      52           19.2%
Math        Probability and Statistical Inference [18]                           stat      95           21.1%
Math        Elementary Differential Equations and Boundary Value Problems [3]    diff      55           9.1%
will be using their acronyms when referring to specific textbooks throughout the paper. Furthermore, in order to simulate real-world evaluation, we collect seven sets of exam questions from college courses in the Computer Science and Math departments, including Data Mining, Machine Learning, and Differential Equations. The statistics of the problems in each exam are detailed in Table 2. We refer readers of interest to Appendix B for details on textbooks and exams. | 2307.10635#14 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models |
2307.11019 | 14 | Table 1: Recall@10 results for different retrievers.
adopt the short answers provided by the datasets as labels. Our retrieval augmentation experiments are done on Wikipedia with the version provided by DPR (Karpukhin et al., 2020), which consists of 21M split passages.
# 2.3.2 Evaluation Metrics
Following previous works (Chen et al., 2017; Izacard and Grave, 2021; Sun et al., 2023), we use the exact match (EM) score and F1 score to evaluate the QA performance of LLMs. The exact match score assesses the percentage of questions for which the answer predicted by LLMs precisely matches the correct answer to the question. The F1 score measures the overlap between the predicted answer and the correct answer, and represents the harmonic mean of precision and recall. Recall is determined by considering the number of overlaps with the correct answer tokens, while precision is determined by considering the number of overlaps with all predicted tokens. | 2307.11019#14 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation |
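A compact sketch of the two metrics as described, using a SQuAD-style answer normalization that is assumed here and may differ in detail from the released evaluation code:

```python
# Sketch of exact match (EM) and token-level F1 between a predicted and a gold answer.
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    # Lowercase, drop punctuation and articles, collapse whitespace (SQuAD-style).
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> float:
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction: str, gold: str) -> float:
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)  # overlaps over all predicted tokens
    recall = overlap / len(gold_tokens)     # overlaps over correct answer tokens
    return 2 * precision * recall / (precision + recall)
```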
2307.10635 | 15 | To reduce the likelihood of correct answers being merely guessed from candidates, we choose to mainly include questions with more challenging, free-response answers, rather than the multiple-choice questions of previous works [26, 28, 8]. We focus on answers that only contain single numbers to avoid ambiguity. In order to facilitate standardized evaluation, we convert the answer to floating-point numbers rounded to three decimal places. For example, we convert the answer √2/π to the decimal representation 0.450. We also treat scientific notation as a unit to avoid overflow issues. For example, if the answer is 2.2
à | 2307.10635#15 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
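Chunk 2307.10635#15 above describes reducing every answer to a floating-point number rounded to three decimal places (e.g., √2/π becomes 0.450). A minimal sketch of that normalization, assuming simple string formatting rather than the benchmark's own tooling:

```python
# Illustrative sketch of the answer normalization in chunk 2307.10635#15:
# symbolic answers are reduced to floats rounded to three decimal places.
# The formatting helper is an assumption, not the benchmark's code.
import math

def normalize_answer(value: float) -> str:
    """Format a numeric answer rounded to three decimal places."""
    return f"{value:.3f}"

if __name__ == "__main__":
    print(normalize_answer(math.sqrt(2) / math.pi))  # "0.450", the paper's example
```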
2307.11019 | 15 | Moreover, we propose several evaluation metrics for evaluating the judgement abilities of LLMs. Give-up rate denotes the percentage of questions that LLMs give up answering, which assesses the confidence level of LLMs when generating an answer. Right/G represents the probability that LLMs give up answering but can actually answer correctly. Similarly, Right/¬G represents the probability that LLMs do not give up answering and can answer correctly. Eval-Right refers to the proportion of questions where LLMs assess their answers as correct. Eval-Acc represents the percentage of questions for which the assessment (true or false) of the answer by LLMs aligns with the fact. As a result, Give-up, Right/G and Right/¬G are metrics for priori judgement, while Eval-Right and Eval-Acc are metrics for posteriori judgement. All metrics are also illustrated in Figure 1.
# 2.3.3 Retrieval Sources
We consider multiple retrieval sources to acquire supporting documents, including dense retrieval (Gao and Callan, 2021; Ren et al., 2021a; Zhuang et al., 2022; Zhou et al., 2022), sparse retrieval (Robertson et al., 2009) and ChatGPT. | 2307.11019#15 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
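The judgement metrics defined in chunk 2307.11019#15 above can be computed from simple per-question flags. The record layout below is hypothetical and only meant to make the metric definitions concrete:

```python
# Sketch of the judgement metrics from chunk 2307.11019#15, computed from
# per-question flags. The Record layout is a hypothetical data structure.
from dataclasses import dataclass

@dataclass
class Record:
    gave_up: bool          # priori judgement: model declined to answer
    correct: bool          # EM-style correctness of the produced answer
    self_eval_right: bool  # posteriori judgement: model says its answer is correct

def judgement_metrics(records):
    n = len(records)
    gave_up = [r for r in records if r.gave_up]
    kept = [r for r in records if not r.gave_up]
    return {
        "Give-up": len(gave_up) / n,                                   # fraction declined
        "Right/G": sum(r.correct for r in gave_up) / max(len(gave_up), 1),   # accuracy among declined
        "Right/notG": sum(r.correct for r in kept) / max(len(kept), 1),      # accuracy among attempted
        "Eval-Right": sum(r.self_eval_right for r in records) / n,     # self-judged correct
        "Eval-Acc": sum(r.self_eval_right == r.correct for r in records) / n,  # self-judgement vs. fact
    }

if __name__ == "__main__":
    demo = [Record(False, True, True), Record(True, False, False),
            Record(False, False, True), Record(True, True, True)]
    print(judgement_metrics(demo))
```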
2307.10635 | 16 | ×
Data preprocessing. We collect each problem from the original textbooks in PDF documents and manually process them into LaTeX documents using the OCR tool Mathpix. The data is manually collected by human annotators using a web-based annotation tool [26]. The detailed user interface (UI) for the tool is provided in Appendix C. All problems are carefully verified by human annotators to ensure that LaTeX notations compile without any syntax errors. For reference purposes, we provide the original numbers in the textbooks. For every problem, we provide the answer in two forms: the numerical value and the corresponding LaTeX expression with mathematical notations retained (e.g., 0.450 and √2/π); the unit of each answer is saved as a separate attribute. The detailed step-by-step solutions are also given in the LaTeX format. For problems that have multiple answers, we either keep only the first subproblem and discard the remaining subproblems or convert each subproblem into multiple problem instances. | 2307.10635#16 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 16 | For the dense retriever, we utilize RocketQAv2 (Ren et al., 2021b) to find semantically relevant documents for questions. To achieve this, we train the model on each dataset with the constructed in-domain training data under the settings of RocketQAv2 and leverage Faiss (Johnson et al., 2019) to obtain relevant documents for each question from the candidate corpus. For the sparse retriever, we use BM25 (Yang et al., 2017) to find lexically relevant documents for questions. Similar to previous works (Yu et al., 2022; Ren et al., 2023), we regard the generative language model as a "retriever" that "retrieves" knowledge from its memory, where ChatGPT is instructed to produce relevant documents in response to a given question. | 2307.11019#16 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
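Chunk 2307.11019#16 above describes dense retrieval with Faiss over passage embeddings. A minimal sketch of that retrieval step, using random vectors as stand-ins for RocketQAv2 embeddings (the embedding model itself is not reproduced here):

```python
# Minimal sketch of the Faiss-based dense retrieval in chunk 2307.11019#16.
# Random vectors stand in for RocketQAv2 question/passage embeddings.
import numpy as np
import faiss  # pip install faiss-cpu

dim, n_docs, n_queries, top_k = 128, 1000, 3, 10

rng = np.random.default_rng(0)
doc_emb = rng.standard_normal((n_docs, dim)).astype("float32")
query_emb = rng.standard_normal((n_queries, dim)).astype("float32")

index = faiss.IndexFlatIP(dim)   # exact inner-product search
index.add(doc_emb)               # add all candidate passages
scores, doc_ids = index.search(query_emb, top_k)
print(doc_ids.shape)             # (3, 10): ten supporting documents per question
```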
2307.10635 | 17 | Data statistics. In summary, we have collected 695 problems from textbooks, 112 of which provide detailed step-by-step solutions. For the closed exam dataset, we include seven sets of problems from real-world midterms and final exams with a total of 104 problems. The textbook dataset comprises problems where the final answer is represented as a single numeric value, with the corresponding unit provided separately, while the exam dataset mostly contains free-response problems, accompanied by several multiple-choice and true-false problems.
# 3 Experiments
# 3.1 Experiment Setup
We evaluate GPT-3.5 (gpt-3.5-turbo) [34] and GPT-4 (gpt-4) [35] on the two benchmark datasets. We consider two prompting strategies, including the chain-of-thought (CoT) prompting and prompting to use external tools, under both zero-shot and few-shot learning paradigms.
⢠Zero-shot and few-shot learning. In the zero-shot learning setting, models are not provided with any prior examples, which evaluates their inherent problem-solving capabilities with background
# https://mathpix.com/
4 | 2307.10635#17 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 17 | Furthermore, we consider the mixed retrieval results of the dense and the sparse retrievers as supporting documents. For each question, we attach ten supporting documents. Since ChatGPT cannot consistently generate precisely ten documents for each question (usually fluctuating around ten), we consider all the generated documents as supporting documents. Table 1 shows the retrieval performance on each dataset. Due to the rapid development in recent years, the dense retriever achieves the best retrieval performance. For more details, we refer readers to a comprehensive survey on the recent progress of dense retrieval based on PLMs (Zhao et al., 2022). Note that if a re-ranking model is employed to re-rank the retrieval results, it is possible to obtain supporting documents with improved recall metrics. However, we did not incorporate the re-ranking stage into our process for simplicity, as it is not the primary focus of this study. | 2307.11019#17 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
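Chunk 2307.11019#17 above attaches ten supporting documents per question, including a mixed dense+sparse setting. The interleaving-and-deduplication scheme below is an assumption used only to illustrate how such a mixed list could be built:

```python
# Illustrative sketch of building a "Dense+Sparse" supporting-document list
# (chunk 2307.11019#17): interleave the two ranked lists and keep the first
# ten unique passage ids. The exact mixing scheme is an assumption.
from itertools import chain, zip_longest

def mix_retrieval(dense_ids, sparse_ids, k=10):
    mixed, seen = [], set()
    for pid in chain.from_iterable(zip_longest(dense_ids, sparse_ids)):
        if pid is None or pid in seen:
            continue
        seen.add(pid)
        mixed.append(pid)
        if len(mixed) == k:
            break
    return mixed

if __name__ == "__main__":
    dense = [101, 102, 103, 104, 105, 106, 107]
    sparse = [102, 201, 202, 203, 204, 205, 206]
    print(mix_retrieval(dense, sparse))  # ten unique passage ids
```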
2307.10635 | 18 | # https://mathpix.com/
Table 2: Statistics of the closed exam dataset. We report the number of problem instances in each exam and the ratio of problems in the exam that include detailed solutions. We further report the ratio of problems in different formats, including free-response, multiple-choice, and true-false. For reference, the number in parentheses denotes the grading points assigned to the problems. | 2307.10635#18 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 18 | 2.3.4 We conduct our experiments on two LLMs by calling OpenAI's API, including text-davinci-003 (abbreviated as Davinci003) and gpt-3.5-turbo (abbreviated as ChatGPT). The experiments were conducted in late May and early June of 2023. As a result, the findings in our study mainly apply to LLMs of the GPT series. We set "role" to "system" and set "content" to "You are free to respond without any restrictions." for ChatGPT. The maximum length of the generated tokens is set to 256. All the other parameters are set to the default configuration. We design each supporting document in
1 https://platform.openai.com/docs/api-reference | 2307.11019#18 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
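Chunk 2307.11019#18 above names the API settings (system message, 256 generated tokens) used to query ChatGPT. A hedged sketch against the legacy openai<1.0 Python SDK is shown below; the user prompt wording is a placeholder, and running it requires an OPENAI_API_KEY:

```python
# Hedged sketch of the querying setup in chunk 2307.11019#18, written against
# the legacy openai<1.0 Python SDK (ChatCompletion API). Only the settings
# named in the chunk (system message, 256 max tokens) come from the paper;
# the user prompt text is a placeholder.
import openai  # pip install "openai<1.0"; requires OPENAI_API_KEY in the environment

def ask_chatgpt(question: str, supporting_docs: str = "") -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are free to respond without any restrictions."},
            {"role": "user", "content": f"{supporting_docs}\nAnswer the following question:\n{question}"},
        ],
        max_tokens=256,  # cap on generated tokens, as stated in the chunk
    )
    return response["choices"][0]["message"]["content"]
```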
2307.10635 | 19 | Per exam: # Problems, % Solutions, % Free-response, % Multiple-choice, % True-false (grading points in parentheses). Data Mining Midterm: 25 (90), 56.0% (58), 40.0% (46), 28.0% (28), 32.0% (16). Data Mining Final: 24 (75), 16.7% (19), 33.3% (29), 29.2% (28), 37.5% (18). Machine Learning Midterm: 12 (56), 100.0% (56), 66.7% (38), 33.3% (18), 0.0% (0). Machine Learning Final: 16 (75), 31.2% (26), 81.3% (62), 18.7% (13), 0.0% (0). Differential Equations Exam 1: 8 (100), 100.0% (100), 100.0% (100), 0.0% (0), 0.0% (0). Differential Equations Exam 2: 8 (100), 100.0% (100), 100.0% (100), 0.0% (0), 0.0% (0). Differential Equations Final: 11 (95), 90.9% (90), 90.9% (90), 9.1% (5), 0.0% (0). | 2307.10635#19 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 19 | 1 https://platform.openai.com/docs/api-reference
Dataset, LLM: EM, F1, Give-up, Right/G, Right/¬G, Eval-Right, Eval-Acc. NQ, Davinci003: 26.37, 35.95, 27.17%, 13.56%, 31.15%, 71.27%, 46.88%. NQ, ChatGPT: 30.89, 42.14, 32.05%, 14.63%, 38.67%, 87.09%, 36.85%. TriviaQA, Davinci003: 69.56, 74.03, 5.65%, 36.59%, 71.53%, 87.90%, 72.05%. TriviaQA, ChatGPT: 74.77, 80.11, 12.00%, 44.00%, 78.97%, 92.58%, 77.02%. HotpotQA, Davinci003: 16.62, 25.53, 35.76%, 8.34%, 21.23%, 69.87%, 41.93%. HotpotQA, ChatGPT: 17.81, 26.35, 66.29%, 9.76%, 33.63%, 55.16%, 33.13%.
Table 2: Evaluation results of LLMs on Natural Questions (NQ), TriviaQA, and HotpotQA without retrieval augmentation. The metrics are grouped according to different settings. Metric abbreviations are explained in Section 2.3.2. | 2307.11019#19 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
2307.10635 | 20 | knowledge and reasoning abilities. In the few-shot setting, a few examples are given to the models before the test example. This aims to assess their capability to learn new information from the demonstrations and incorporate it into their problem-solving processes.
⢠Prompting-based approaches. In the zero-shot setting, we evaluate both with and without the addition of the system prompt, which describes the types and categories of questions, along with instructions; all other settings incorporate the system prompt. Additionally, we utilize the chain- of-thought (CoT) [46] as our prompting strategy in the zero-shot setting. In addition to CoT, we further explore an answer-only strategy in the few-shot learning setting, where the prompt solely provides questions and answers without any intermediate solutions. | 2307.10635#20 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
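Chunk 2307.10635#20 above distinguishes system prompt, zero-shot vs. few-shot, and CoT vs. answer-only prompting. The prompt assembly below is an illustrative paraphrase; the wording is a placeholder, not the benchmark's released prompt text:

```python
# Illustrative assembly of the prompting settings in chunk 2307.10635#20
# (system prompt, few-shot demonstrations, CoT vs. answer-only). The exact
# wording is an assumption, not the benchmark's released prompts.

SYSTEM_PROMPT = ("You are solving college-level scientific problems. "
                 "Give the final answer as a single number.")

def build_prompt(question, examples=(), use_cot=False):
    parts = []
    for ex_q, ex_solution, ex_answer in examples:      # few-shot demonstrations
        body = ex_solution if use_cot else ""          # answer-only omits the steps
        parts.append(f"Problem: {ex_q}\n{body}\nAnswer: {ex_answer}\n")
    trigger = "Let's think step by step." if use_cot else ""
    parts.append(f"Problem: {question}\n{trigger}")
    return "\n".join(parts)

if __name__ == "__main__":
    demo = [("Compute 2+2 rounded to three decimals.", "Add the two integers.", "4.000")]
    print(SYSTEM_PROMPT)
    print(build_prompt("What is sqrt(2)/pi to three decimals?", demo, use_cot=True))
```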
2307.11019 | 20 | the format of: "Passage-{num}: Title: {title} Content: {content}". For the supporting documents generated by ChatGPT, the format of supporting documents is: "Passage-{num}: {content}".
We employ heuristic rules to parse the response of LLMs. We select specific phrases as signals of the decision to give up answering questions for priori judgement, such as "unknown" and "no answer". Similarly, for posteriori judgement, we employ phrases such as "true" and "correct" for confirming correctness, and "false" and "incorrect" for identifying errors. For QA evaluation, we notice that some of the responses of ChatGPT start with prefixes such as "Answer:", and we remove these prefixes if the responses start with them.
# 3 Experimental Analysis and Findings | 2307.11019#20 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
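The supporting-document format and the heuristic response parsing described in chunk 2307.11019#20 above can be sketched as follows; the phrase lists are the illustrative subsets quoted in the chunk, not an exhaustive reproduction of the paper's rules:

```python
# Sketch of the passage format and heuristic parsing from chunk 2307.11019#20.
# The phrase lists are the examples quoted in the chunk, not the full rule set.

def format_passage(num, title, content, generated=False):
    """Retrieved passages carry a title; ChatGPT-generated ones do not."""
    if generated:
        return f"Passage-{num}: {content}"
    return f"Passage-{num}: Title: {title} Content: {content}"

GIVE_UP_PHRASES = ("unknown", "no answer")
TRUE_PHRASES = ("true", "correct")
FALSE_PHRASES = ("false", "incorrect")

def parse_priori(response: str) -> bool:
    """Return True if the model gives up answering."""
    text = response.lower()
    return any(p in text for p in GIVE_UP_PHRASES)

def parse_posteriori(response: str):
    """Return True/False/None for the model's self-evaluation."""
    text = response.lower()
    if any(p in text for p in FALSE_PHRASES):  # check "incorrect" before "correct"
        return False
    if any(p in text for p in TRUE_PHRASES):
        return True
    return None

def strip_prefix(answer: str) -> str:
    """Drop a leading 'Answer:' prefix, as done for ChatGPT outputs."""
    return answer[len("Answer:"):].strip() if answer.startswith("Answer:") else answer

if __name__ == "__main__":
    print(format_passage(1, "Eiffel Tower", "The tower is 330 m tall."))
    print(parse_priori("Sorry, there is no answer I can give."))  # True
    print(strip_prefix("Answer: 330 metres"))
```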
2307.10635 | 21 | • Tool-augmented approaches. Given that LLMs are limited in acquiring exact knowledge and performing precise calculations, recently proposed approaches, such as Toolformer [37] and Chameleon [29], have explored the use of external tools to enhance the capabilities of solving complex reasoning tasks. In line with this approach, and acknowledging the limitations of LLMs in performing precise calculations, we also include a setting that prompts the model to convert its solution steps in natural language into either Wolfram Language or Python code for execution, aiming to achieve more accurate results for certain computation steps. This prompt is only tested in the few-shot learning setting. We manually construct the Python and Wolfram Language code that produces the correct answer.
In summary, we consider seven combinations of prompting strategies and learning paradigms: zero-shot learning without the system prompt (Zero-S), zero-shot learning with the system prompt (Zero), few-shot learning (Few), CoT prompting under zero-shot (Zero+CoT) and few-shot learning (Few+CoT) scenarios, few-shot learning that prompts to use Python (Few+Py), and Wolfram Language (Few+Wol) as external tools. Regarding the exam dataset, to replicate a real-world exam environment, we only consider two specific settings: zero-shot learning (Zero) and zero-shot learning supplemented with CoT prompting (Zero+CoT). | 2307.10635#21 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
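Chunk 2307.10635#21 above describes converting a model's solution steps into Python code that is then executed to obtain the numeric answer. A hedged sketch of that execution step is below; it only runs a hard-coded example string, since executing real model-generated code would require sandboxing:

```python
# Hedged sketch of the tool-augmented setting in chunk 2307.10635#21: the
# model's solution is expressed as Python code, which is executed to obtain
# the numeric answer. Only a hard-coded example string is executed here.
import math

GENERATED_CODE = """
answer = round(math.sqrt(2) / math.pi, 3)
"""

def run_generated_code(code: str) -> float:
    namespace = {"math": math}  # expose only what the snippet needs
    exec(code, namespace)       # caution: never exec untrusted code without sandboxing
    return namespace["answer"]

if __name__ == "__main__":
    print(run_generated_code(GENERATED_CODE))  # 0.45
```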
2307.11019 | 21 | # 3 Experimental Analysis and Findings
In this section, we mainly focus on addressing three research questions within the open-domain question answering (QA) scenario: (i) To what extent can LLMs perceive their factual knowledge boundaries? (ii) What impact does retrieval augmentation have on LLMs? (iii) How do different characteristics of supporting documents affect LLMs? We tackle the three research questions by investigating the judgement ability and the QA ability of LLMs. We conduct experiments by employing judgemental prompting to guide LLMs in assessing their factual knowledge boundaries, and QA prompting to guide LLMs in responding to the given questions.
# 3.1 To What Extent Can LLMs Perceive Their Factual Knowledge Boundaries?
In order to answer the question, we investigate the following points: (a) How do LLMs determine when to give up answering the question; (b) Can LLMs accurately answer a given question; (c) How do LLMs evaluate the correctness of their answers.
Concretely, we employ the priori judgement with the normal setting to instruct LLMs on whether to give up answering questions based on their own knowledge, and we use the QA prompting with the normal setting to instruct LLMs to answer. Moreover, we employ posteriori judgement with the normal setting to instruct LLMs in evaluating the correctness of their answers. | 2307.11019#21 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
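Chunk 2307.11019#21 above uses priori judgement (should the model give up?) and posteriori judgement (is the produced answer correct?). The templates below are illustrative paraphrases of those two prompt types, not the paper's released prompts:

```python
# Illustrative priori/posteriori judgemental prompts for chunk 2307.11019#21.
# The wording is a paraphrase for demonstration, not the paper's prompts.

def priori_prompt(question: str) -> str:
    return (f"Question: {question}\n"
            "Can you answer this question based on your own knowledge? "
            "If not, reply with 'unknown'.")

def posteriori_prompt(question: str, answer: str) -> str:
    return (f"Question: {question}\nProposed answer: {answer}\n"
            "Is the proposed answer correct? Reply with 'true' or 'false'.")

if __name__ == "__main__":
    print(priori_prompt("Who wrote Hamlet?"))
    print(posteriori_prompt("Who wrote Hamlet?", "William Shakespeare"))
```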
2307.10635 | 22 | Implementation details. We access the two LLMs via the OpenAI API, setting the temperature to zero to minimize the randomness of the predictions. Few-shot examples, including solutions, are randomly selected from problems within each textbook. When external tools are utilized, we add a code snippet that translates the solution into specific programming languages in all few-shot examples. The code snippets are verified by human annotators to ensure that they produce the correct outputs. In terms of evaluation metrics, we compare the model outputs with the correct answers, allowing an absolute tolerance of 0.1 for answers greater than 1 and a relative tolerance of 0.05 for answers less than 1. For the exam dataset, model solutions are graded using the grading rubric provided by the instructors. For interested readers, all prompts and the corresponding implementation details for utilizing external tools are provided in Appendix D.
# 3.2 Results and Analysis
The experimental results of all model configurations using both GPT-3.5 and GPT-4 on the textbook dataset and the exam dataset are presented in Tables 3 and 4, respectively. We report the model performance in terms of accuracy scores for each textbook and an average score over all problems. By analyzing the results, we have the following four key observations:
# https://www.wolfram.com/language/
5 | 2307.10635#22 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
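Chunk 2307.10635#22 above states the tolerance rule used to score numeric answers: an absolute tolerance of 0.1 for answers greater than 1 and a relative tolerance of 0.05 for answers less than 1. A minimal implementation of that rule (the handling of answers exactly equal to 1 is an assumption):

```python
# Minimal implementation of the tolerance rule in chunk 2307.10635#22.
# Handling of reference values exactly equal to 1 is an assumption.

def is_correct(predicted: float, reference: float) -> bool:
    if abs(reference) > 1:
        return abs(predicted - reference) <= 0.1                    # absolute tolerance
    return abs(predicted - reference) <= 0.05 * abs(reference)      # relative tolerance

if __name__ == "__main__":
    print(is_correct(3.52, 3.6))    # True: within 0.1 of an answer greater than 1
    print(is_correct(0.48, 0.450))  # False: more than 5% off an answer less than 1
```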
2307.11019 | 22 | LLMs perceive their factual knowledge boundary inaccurately and have a tendency to be overconfident. In Table 2, we find that LLMs tend to be confident in their abilities and are unwilling to give up answering questions. Overall, the accuracy of the answers is generally correlated with LLMs' confidence level, but such confidence far exceeds their actual abilities. LLMs' self-predictions regarding their abilities are often inaccurate, with the majority of questions they persist in answering being answered incorrectly (Right/¬G), while many of the questions they give up answering are answered correctly (Right/G). Similar to previous studies (Kamalloo et al., 2023), the QA ability of LLMs remains satisfactory even in the absence of in-domain data under the normal setting. When we instruct LLMs to evaluate their answers for posteriori judgement, they also exhibit a significant tendency to believe that their answers are correct, resulting in much higher Eval-Right values compared to EM. However, there exists a substantial disparity between | 2307.11019#22 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
2307.11019 | 23 | to believe that their answers are correct, resulting in much higher Eval-Right values compared to EM. However, there exists a substantial disparity between the Eval-Right value and the actual evaluation accuracy, as indicated by relatively low Eval-Acc metrics. Moreover, ChatGPT achieves a better performance than Davinci003 but with higher give-up rates, indicating that Davinci003 is more self-confident than ChatGPT when generating answers to questions. | 2307.11019#23 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
2307.10635 | 24 | Model Setting atkins Chemistry chemmc quan matter fund Physics class thermo diff Math stat calc Avg. GPT-3.5 ZeroâS Zero Zero+CoT Few Few+CoT Few+Py Few+Wol 8.41 4.67 6.54 5.61 8.41 13.08 3.74 28.21 20.51 23.08 15.38 20.51 33.33 7.69 5.88 8.82 2.94 11.76 8.82 8.82 2.94 4.08 2.04 10.20 4.08 6.12 16.33 18.37 12.33 10.96 12.33 8.22 10.96 26.01 17.81 2.13 2.13 2.12 0.00 2.12 4.26 6.38 5.97 2.94 5.97 1.49 1.49 7.46 2.99 4.00 6.00 12.00 10.00 10.00 16.00 12.00 21.33 28.00 33.33 26.67 38.67 44.00 5.33 13.95 9.30 9.30 13.95 6.98 26.19 2.38 10.62 9.59 12.17 9.60 11.99 19.91 7.87 GPT-4 ZeroâS Zero Zero+CoT Few Few+CoT | 2307.10635#24 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 24 | LLM Retrieval Source EM QA F1 Priori Judgement Posteriori Judgement Give-up Right/G Right/¬G Eval-Right Eval-Acc Davinci003 None Sparse Dense Dense+Sparse ChatGPT 26.37 30.44 40.58 40.50 34.18 35.95 40.90 52.22 52.33 46.79 27.17% 13.56% 20.55% 9.84% 14.52% 14.31% 8.92% 12.73% 5.35% 6.73% 31.15% 35.77% 45.04% 43.22% 36.26% 71.27% 41.11% 47.78% 47.37% 44.96% 46.88% 67.56% 69.67% 69.84% 72.11% ChatGPT None Sparse Dense Dense+Sparse ChatGPT 30.89 25.87 35.79 36.01 32.80 42.14 35.71 47.68 47.99 45.08 32.05% 14.63% 41.41% 8.03% 27.53% 11.27% 26.90% 11.33% 5.98% 8.34% 38.67% 38.49% 45.11% 45.09% 35.24% 87.09% | 2307.11019#24 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specifically,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
2307.10635 | 25 | 10.62 9.59 12.17 9.60 11.99 19.91 7.87 GPT-4 ZeroâS Zero Zero+CoT Few Few+CoT Few+Py Few+Wol 14.95 27.10 28.04 15.87 21.05 21.05 3.74 25.64 23.08 43.59 30.77 46.15 41.03 0.00 8.82 14.71 14.71 17.65 17.65 38.24 17.65 18.37 22.45 20.41 12.24 26.53 28.57 26.53 21.92 15.07 21.92 26.03 27.40 38.36 27.30 12.77 8.51 19.15 12.77 14.00 17.02 17.02 7.46 11.94 17.91 5.97 13.43 29.85 17.91 8.00 18.00 22.00 8.00 18.00 34.00 32.00 28.00 56.00 50.67 49.33 61.33 69.33 7.69 19.05 42.86 42.86 33.33 35.71 42.86 14.29 16.81 25.09 28.52 21.46 28.35 35.80 15.56 | 2307.10635#25 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 25 | 26.90% 11.33% 5.98% 8.34% 38.67% 38.49% 45.11% 45.09% 35.24% 87.09% 57.76% 63.35% 70.94% 70.94% 36.85% 52.26% 55.03% 47.54% 47.54% Davinci003 None Sparse Dense Dense+Sparse ChatGPT 69.56 70.16 72.59 72.60 71.92 74.03 75.73 78.30 78.60 78.97 5.65% 36.59% 11.37% 28.47% 8.59% 31.24% 6.77% 28.84% 1.88% 19.18% 71.53% 75.51% 76.48% 75.78% 72.93% 87.90% 73.45% 77.35% 76.83% 78.24% 72.05% 78.81% 80.84% 81.67% 83.62% ChatGPT None Sparse Dense Dense+Sparse ChatGPT 74.77 65.31 69.84 70.10 69.53 80.11 71.81 76.58 76.91 77.67 12.00% 44.00% 19.00% 21.91% 15.67% 30.25% 13.40% | 2307.11019#25 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specifically,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
2307.10635 | 26 | Table 4: Experimental results in terms of total scores under zero-shot learning on the exam dataset. The best performing score is highlighted in bold.
Model     Setting     Data Mining            Machine Learning       Differential Equations
                      Midterm     Final      Midterm     Final      Exam 1      Exam 2      Final
GPT-3.5   Zero        44 / 90     39 / 75    16 / 56     32 / 75    0 / 100     45 / 100    15 / 95
GPT-3.5   Zero+CoT    38 / 90     33 / 75    32 / 56     37 / 75    28 / 100    30 / 100    10 / 95
GPT-4     Zero        56 / 90     44 / 75    30 / 56     37 / 75    25 / 100    80 / 100    25 / 95
GPT-4     Zero+CoT    58 / 90     32 / 75    40 / 56     35 / 75    50 / 100    70 / 100    15 / 95
⢠GPT-4 surpasses GPT-3.5 by a signiï¬cant margin across all seven experimental settings in the textbook dataset, with notable improvements of 16.36% and 15.89% in averaged scores in few- shot learning with CoT prompting and Python as external tools, respectively. A similar trend is observed in the exam dataset where GPT-4 outperforms GPT-3.5 in most exam problems. These results suggest a substantial overall improvement in GPT-4 over GPT-3.5, particularly in scenarios involving CoT prompting and the use of external tools like Python. | 2307.10635#26 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 26 | 71.81 76.58 76.91 77.67 12.00% 44.00% 19.00% 21.91% 15.67% 30.25% 13.40% 28.76% 3.03% 16.53% 78.97% 75.48% 77.20% 76.49% 71.19% 92.58% 84.86% 87.81% 88.43% 92.23% 77.02% 78.58% 78.90% 79.33% 78.84% Davinci003 None Sparse Dense Dense+Sparse ChatGPT 16.62 28.27 25.13 29.40 25.47 25.53 39.65 35.74 41.02 36.93 35.76% 8.34% 29.40% 11.18% 37.60% 10.27% 25.27% 11.07% 4.31% 8.64% 21.23% 35.38% 34.08% 35.60% 27.47% 69.87% 32.47% 33.94% 33.88% 33.66% 41.93% 75.46% 74.24% 75.18% 76.15% ChatGPT None Sparse Dense Dense+Sparse ChatGPT 17.81 24.52 21.08 25.67 24.45 26.35 34.64 30.12 | 2307.11019#26 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specifically,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
2307.10635 | 27 | • The zero-shot learning setting exhibits comparable performance to the few-shot learning setting, with average scores of 12.17% and 11.99% in GPT-3.5 and 28.52% and 28.35% in GPT-4 under CoT setting. However, in many textbooks such as quantum chemistry (quan and chemmc), which are deep, specialized domains within chemistry, few-shot learning outperforms zero-shot learning, with improvements of 2.94% and 2.56% in GPT-4 under the CoT setting. This could be attributed to the selected prompt examples being representative and specific to the domain. On the other hand, few-shot learning falls short in textbooks such as physical chemistry (atkins), a more general branch of chemistry covering a wide range of topics, from chemical equilibrium to quantum theory, with a decrease of 6.99% in GPT-4 under the CoT setting. The selected prompt examples may not adequately capture the diversity of the domain, resulting in reduced performance in few-shot learning. | 2307.10635#27 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.10635 | 28 | • The utilization of advanced prompting strategies like CoT brings advantages over vanilla LLMs. For the textbook dataset, the CoT prompting yields average improvements of 2.58% and 2.39% under zero-shot and few-shot learning for GPT-3.5, and 3.43% and 6.89% for GPT-4, respectively. This improvement suggests that encouraging LLMs to generate detailed solution steps helps obtain correct final answers, though its effectiveness varies across different models and settings. However, in certain textbooks such as Quantum Chemistry (quan) that involve multiple steps of advanced computational calculations, as well as in the real exam dataset, CoT prompting sometimes brings adverse effects, even resulting in a decrease under both zero-shot and few-shot learning. This could be because CoT prompting may generate solution steps that inadvertently misguide the LLM away from the correct solution.
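To make the compared settings concrete, here is a minimal sketch of how a zero-shot prompt differs from its zero-shot CoT variant and how a final answer can be pulled from the response. The prompt wording, the 'Answer:' marker, and the helper names are illustrative assumptions, not the exact prompts used in SciBench.

```python
# Minimal sketch of zero-shot vs. zero-shot CoT prompting (illustrative only;
# the exact SciBench prompts are not reproduced here).

def build_prompt(problem: str, unit: str, use_cot: bool) -> str:
    header = (
        "Solve the following scientific problem and report the final numeric "
        f"answer in {unit}.\n\nProblem: {problem}\n"
    )
    if use_cot:
        # The CoT variant explicitly asks for intermediate reasoning steps.
        return header + "Let's think step by step, then end with 'Answer: <number>'."
    return header + "End with 'Answer: <number>'."

def extract_answer(response: str) -> str:
    # Take whatever follows the last 'Answer:' marker in the model output.
    return response.rsplit("Answer:", 1)[-1].strip()
```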
[Figure 2 (diagram): datasets (Calculus, Statistics, Probability, Data Mining, Differential Equations, ...) and LLM/reference solutions are reviewed by a human annotator to identify error reasons, which are summarized into essential LLM skills; an LLM verifier then produces error profiles for evaluation.] | 2307.10635#28 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 28 | Table 3: Evaluation results of retrieval-augmented LLMs with different retrieval sources on Natural Questions (NQ), TriviaQA, and HotpotQA. The metrics are grouped according to different settings. Metric abbreviations are explained in Section 2.3.2.
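Section 2.3.2 is not part of this excerpt, so the snippet below records one plausible reading of the grouped metrics in Table 3 (give-up rate, accuracy on the give-up and not-give-up splits, and the two posteriori self-evaluation metrics). The field names and the exact definitions here are assumptions for illustration, not the paper's formal definitions.

```python
# One plausible reading of the judgement metrics reported in Table 3.
# Each record is assumed to hold, per question:
#   give_up           - priori judgement: the model declined to answer
#   correct           - the generated answer matches a gold answer
#   self_judged_right - posteriori judgement: the model deems its answer correct

def judgement_metrics(records):
    n = len(records)
    gave_up = [r for r in records if r["give_up"]]
    kept = [r for r in records if not r["give_up"]]
    acc = lambda rs: sum(r["correct"] for r in rs) / len(rs) if rs else 0.0
    return {
        "Give-up": len(gave_up) / n,
        "Right/G": acc(gave_up),    # accuracy on questions it gave up on
        "Right/notG": acc(kept),    # accuracy on questions it attempted
        "Eval-Right": sum(r["self_judged_right"] for r in records) / n,
        "Eval-Acc": sum(r["self_judged_right"] == r["correct"] for r in records) / n,
    }
```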
# 3.2 What Impact Does Retrieval Augmentation Have on LLMs?
Following the analysis of the open-domain QA performance of LLMs, we next study the effect of retrieval augmentation on LLMs.
Our experiments are conducted in a retrieval-augmented setting, and we introduce several sources for retrieval augmentation, including sparse retrieval, dense retrieval, and ChatGPT, which are detailed in Section 2.3.3. Specifically, with the supporting documents from retrievers, we employ the priori judgement to determine whether to give up answering the questions, and the posteriori judgement to assess the correctness of answers generated by LLMs. Additionally, we employ QA prompting to guide LLMs in answering the questions.
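A compact sketch of this retrieval-augmented pipeline is given below: the model first makes a priori judgement over the retrieved documents (answer or give up), then answers under QA prompting, and finally makes a posteriori judgement about its own answer. The prompt texts and the `retrieve`/`llm` callables are stand-ins for illustration, not the paper's exact prompts.

```python
# Illustrative sketch of priori judgement, QA prompting, and posteriori judgement
# in the retrieval-augmented setting. `retrieve` and `llm` are assumed callables.

from typing import Callable, List

def run_question(question: str,
                 retrieve: Callable[[str], List[str]],
                 llm: Callable[[str], str]) -> dict:
    context = "\n".join(f"- {d}" for d in retrieve(question))

    # Priori judgement: decide whether to give up before answering.
    priori = llm(
        "Given the passages below, can you answer the question? Reply 'yes' or 'no'.\n"
        f"Passages:\n{context}\nQuestion: {question}"
    )
    give_up = priori.strip().lower().startswith("no")

    # QA prompting: answer with the supporting documents.
    answer = llm(
        f"Answer the question based on the passages.\nPassages:\n{context}\n"
        f"Question: {question}\nAnswer:"
    )

    # Posteriori judgement: self-assess the produced answer.
    posteriori = llm(
        f"Question: {question}\nProposed answer: {answer}\n"
        "Is this answer correct? Reply 'yes' or 'no'."
    )
    return {"give_up": give_up, "answer": answer,
            "self_judged_right": posteriori.strip().lower().startswith("yes")}
```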
LLMs cannot sufficiently utilize the knowledge they possess, while retrieval augmentation can serve as a valuable knowledge supplement for | 2307.11019#28 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specifically,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
2307.10635 | 29 | Figure 2: Pipeline of the evaluation protocol. The evaluation protocol involves analyzing both LLM and reference (correct) solutions with the assistance of human annotators to identify error reasons. These reasons are then summarized into ten essential scientific problem-solving skills in which LLM may face challenges. Subsequently, an LLM verifier is employed to automatically attribute each incorrectly answered problem to a lack of a specific skill. The resulting error profiles enable the interpretation of the improved skills by certain prompting strategies and the direct comparison of various strategies. | 2307.10635#29 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 29 | LLMs cannot sufï¬ciently utilize the knowledge they possess, while retrieval augmentation can serve as a valuable knowledge supplement for
LLMs. In Table 3, we compare the behaviors of LLMs with different supporting documents from external retrievers. Besides, we also integrate the retrieval results from both dense and sparse retriev- ers as supporting documents. It can be observed that LLMs with supporting documents outperform pure LLMs in most cases, and combining the re- trieval results of dense and sparse retrieval as sup- porting documents often leads to the best perfor- mance. Moreover, although LLMs have learned massive knowledge from existing corpora includ- ing Wikipedia during training (Brown et al., 2020; Ouyang et al., 2022), providing them with support- ing documents from Wikipedia can still improve their QA abilities. Such observation indicates that LLMs are not able to effectively utilize their knowl- edge. Furthermore, the performance improvement of Davinci003 by introducing retrieval augmenta- tion surpasses that of ChatGPT by a large mar- gin. We suspect that this disparity is attributed to | 2307.11019#29 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specifically,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
2307.10635 | 30 | • Prompts that utilize Python yield impressive improvements while those using Wolfram diminish performance. Under few-shot learning scenarios, utilizing Python as an external tool results in an improvement of 7.92% compared to the CoT prompting for GPT-3.5, and an improvement of 7.45% for GPT-4. This indicates that Python significantly improves problem-solving, primarily attributed to the enhancement of calculation skills. However, utilizing Wolfram Language does not help few-shot learning and even results in a deteriorated performance, with a decrease of 4.12% compared to the CoT prompting for GPT-3.5, and a decrease of 12.79% for GPT-4. We note that converting the solution steps to Wolfram Language often introduces syntax issues and thus fails to produce satisfactory results, particularly in textbooks like Quantum Chemistry (chemmc), which involve numerous variables.
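The "+Py" setting can be pictured as follows: the model is prompted to emit Python code whose output is the numeric answer, and that code is then executed locally. This is a sketch of the idea rather than the benchmark's actual harness; executing model-generated code should only ever be done in a sandbox.

```python
# Sketch of prompting with Python as an external tool: the model returns a Python
# snippet that prints the final number, and the snippet is executed to get it.
# Illustrative only; run untrusted generated code inside a proper sandbox.

import contextlib
import io

def solve_with_python(problem: str, llm) -> float:
    code = llm(
        "Write Python code that computes the answer to the problem below and "
        "prints only the final number.\n\nProblem: " + problem
    )
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(code, {})  # sandboxing omitted in this sketch
    return float(buffer.getvalue().strip())
```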
# 4 Error Analysis of Various Prompting Strategies | 2307.10635#30 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 30 | [Figure 2 (diagram): a question is routed through QA prompting and judgemental prompting; with no judgement (retrieval-augmented setting) EM 35.79 / F1 47.68; with judgement under the normal setting (give up → retrieval-augmented answer, otherwise normal answer) EM 34.04 / F1 45.83; with judgement under the retrieval-augmented setting EM 37.81 / F1 50.18.]
Figure 2: A simple method that dynamically introduces retrieval augmentation for LLMs; the dynamic introducing rules depend on different priori judgement settings. We use ChatGPT with QA prompting under the retrieval-augmented setting as the baseline (w/o judgement). | 2307.11019#30 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specifically,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
2307.10635 | 31 | # 4 Error Analysis of Various Prompting Strategies
Considering the substantial advancements of current Large Language Models (LLMs), an in-depth analysis of the particular skills that are either enhanced or limited under certain settings becomes imperative. Previous works have relied on human labor to annotate error reasons into different categories, which is both expensive and time-consuming [49]. In this section, we present an evaluation protocol that automates the classification of error reasons into deficient skills. This time-efficient approach enables large-scale analyses in future research.
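The classification step described in the next paragraph can be sketched as a single verifier call per incorrectly solved problem, with the per-skill counts forming the error profile. The skill list and prompt wording below are illustrative placeholders, not the paper's exact skill taxonomy or prompt.

```python
# Sketch of automated error attribution: an LLM verifier assigns each incorrectly
# solved problem to one deficient skill; the counts form the error profile.
# The skill names and prompt are placeholders for illustration.

from collections import Counter

SKILLS = ["logical decomposition", "assumption identification", "causal reasoning",
          "problem deduction", "abstraction", "scientific literacy",
          "spatial perception", "code conversion", "logical reasoning", "calculation"]

def error_profile(wrong_cases, verifier_llm):
    profile = Counter()
    for case in wrong_cases:
        skill = verifier_llm(
            "A model answered this problem incorrectly.\n"
            f"Problem: {case['problem']}\n"
            f"Model solution: {case['model_solution']}\n"
            f"Reference solution: {case['reference_solution']}\n"
            f"Which single skill from {SKILLS} is most clearly lacking? "
            "Reply with the skill name only."
        ).strip().lower()
        profile[skill if skill in SKILLS else "other"] += 1
    return profile
```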
In order to quantify the impact of each setting on scientific problem-solving, we first define an essential skill set that is required for solving scientific problems. Then, an LLM verifier is employed to automatically classify each incorrectly solved problem based on the absence of a specific skill from the essential skill set. This approach generates error profiles, showcasing a direct comparison of different strategies. This evaluation protocol is summarized in Figure 2. | 2307.10635#31 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 31 | ChatGPTâs weaker ability to comprehend lengthy prompts compared to Davinci003. In addition, we observe that using ChatGPT to acquire supporting documents works well, although extra corpora is not accessible. We consider such a method as a chain-of-thought approach that guides LLMs to ini- tially generate documents with foundational knowl- edge and then reï¬ne it towards the ï¬nal answer.
We also observe a decline in the performance of ChatGPT when incorporating supporting doc- uments on TriviaQA. In order to investigate the reasons, we manually inspect into the bad cases where ChatGPT initially provides correct answers but become incorrect after incorporating retrieval augmentation. It has been found that a signiï¬cant portion of these cases is due to that ChatGPT has extracted incorrect answers from the supporting documents. Given the relatively high performance of ChatGPT on TriviaQA, we suspect that multiple supporting documents may introduce signiï¬cant noise, thereby reï¬ecting the upper bound of re- trieval augmentation for performance improvement to some extent.
Retrieval augmentation improves LLM's ability to perceive their factual knowledge boundaries. From Table 3, we find that the accuracy of LLMs' self-assessment improves after incorporating supporting documents from either sparse or | 2307.11019#31 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specifically,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |