doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2307.10635 | 32 | First, we analyze the incorrect solutions made by GPT-3.5 for problems that provide detailed solutions. We hire two college students, who are highly familiar with the problems in our datasets, to annotate the source of the error for each problem, indicating the specific line where the model makes a mistake and why. From 112 such error annotations and with the assistance of GPT-4, we distill these errors into ten essential skills that GPT-3.5 might lack:
⢠Logical decomposition and analysis skills: This ability involves decomposing the problem into smaller, manageable parts, and understanding the relationships between these parts.
⢠Identiï¬cation of assumptions: This skill involves the ability to recognize relevant and necessary
# assumptions in the problem.
⢠Spatial perception: This is important for understanding problems in areas such as Physics and Chemistry, where models need to visualize molecules, forces, ï¬elds, etc.
Causal reasoning: This is the ability to understand cause and effect relationships. ⢠Problem deduction skills: This pertains to the ability to infer and deduce potential solutions or
underlying principles from the given information in a problem.
⢠Abstract reasoning: This skill involves the ability to understand complex concepts that cannot be perceived physically, and to recognize patterns or relationships beyond concrete examples.
| 2307.10635#32 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
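The SciBench chunk above describes collecting 112 human error annotations (the problem, the specific line where GPT-3.5 goes wrong, and why) before distilling the ten skills. As a minimal sketch, assuming field names that are illustrative rather than the paper's actual schema, one such annotation could be represented as:

```python
from dataclasses import dataclass

@dataclass
class ErrorAnnotation:
    """One human error annotation (illustrative sketch; field names are assumed)."""
    problem_id: str     # which problem the incorrect GPT-3.5 solution belongs to
    error_line: int     # the specific line of the solution where the mistake occurs
    explanation: str    # the annotator's note on why that step is wrong
    missing_skill: str  # one of the ten distilled skills, assigned afterwards
```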
2307.11019 | 32 | dense retrievers. Specifically, Right/¬G significantly increases, while Right/G decreases or increases only slightly, owing to the significant improvement in QA performance. These results show that the priori judgement of retrieval-augmented LLMs is more accurate. Moreover, Eval-Right decreases significantly, becoming more consistent with the EM metric, while Eval-Acc significantly increases. The results indicate that retrieval augmentation can also improve the accuracy of LLMs' posterior judgement. | 2307.11019#32 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
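The 2307.11019 chunk above refers to several judgement metrics (Give-up, Right/G, Right/¬G, Eval-Right, Eval-Acc). A minimal sketch of how they could be computed, assuming the informal definitions implied by the text: give-up rate from the priori judgement, answer accuracy conditioned on giving up or not, the rate at which the model judges its own answer as right, and the agreement of that posteriori judgement with actual correctness.

```python
from typing import Dict, List

def judgement_metrics(gave_up: List[bool], correct: List[bool],
                      self_eval_right: List[bool]) -> Dict[str, float]:
    """Illustrative (assumed) definitions of the judgement metrics discussed above.

    gave_up[i]         -- priori judgement: the model declined to answer question i
    correct[i]         -- the model's answer to question i is actually correct (e.g. by EM)
    self_eval_right[i] -- posteriori judgement: the model judges its own answer as correct
    """
    def rate(xs: List[bool]) -> float:
        return sum(xs) / len(xs) if xs else 0.0

    give_up = rate(gave_up)
    # Accuracy restricted to questions the model did / did not give up on.
    right_g = rate([c for c, g in zip(correct, gave_up) if g])
    right_not_g = rate([c for c, g in zip(correct, gave_up) if not g])
    # How often the model claims its answer is right, and how often that
    # claim agrees with the answer's actual correctness.
    eval_right = rate(self_eval_right)
    eval_acc = rate([e == c for e, c in zip(self_eval_right, correct)])
    return {"Give-up": give_up, "Right/G": right_g, "Right/¬G": right_not_g,
            "Eval-Right": eval_right, "Eval-Acc": eval_acc}
```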
2307.10635 | 33 | • Abstract reasoning: This skill involves the ability to understand complex concepts that cannot be perceived physically, and to recognize patterns or relationships beyond concrete examples.
[Figure 3: bar charts of GPT-3.5 error profiles across the ten problem-solving abilities (panels: (a) Zero-S, (b) Zero+CoT, (c) Few+Py, (d) Zero, (e) Few+CoT, (f) Few+Wol).] | 2307.10635#33 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 33 | To further investigate the observed improvement, we examine a simple method that employs priori judgement under either the normal or the retrieval-augmented setting to determine whether to introduce retrieval augmentation. Specifically, if a question is challenging for the LLM to answer under the normal prompts, supporting documents are introduced to provide an answer; otherwise, the question is answered without supporting documents. Similarly, if a question is difficult for the LLM to answer under the retrieval-augmented setting, the question is answered without supporting documents; otherwise, supporting documents are introduced for answering. We experiment on ChatGPT, using supporting documents sourced from the dense retriever. Figure 2 compares different judgement settings for deciding when to dynamically incorporate retrieval augmentation. When the priori judgement of ChatGPT under the normal setting is used for decision-making, the answering accuracy tends to be lower compared to the baseline, whereas the accuracy surpasses the baseline that always incorporates retrieval augmentation when using the judgement with the | 2307.11019#33 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
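A minimal sketch of the dynamic strategy described in the 2307.11019 chunk above: use the LLM's priori judgement (under either the normal or the retrieval-augmented setting) to decide whether supporting documents should be attached when answering. `can_answer` and `answer` are assumed caller-supplied wrappers around the LLM, not functions from the paper's released code.

```python
from typing import Callable, List, Optional

LLMJudge = Callable[[str, Optional[List[str]]], bool]
LLMAnswer = Callable[[str, Optional[List[str]]], str]

def answer_dynamically(question: str, retrieved_docs: List[str],
                       can_answer: LLMJudge, answer: LLMAnswer,
                       judge_with_docs: bool = True) -> str:
    """Decide dynamically whether to attach supporting documents
    (docs=None means the normal, document-free prompt)."""
    if judge_with_docs:
        # Priori judgement under the retrieval-augmented setting.
        if can_answer(question, retrieved_docs):
            return answer(question, retrieved_docs)
        return answer(question, None)
    # Priori judgement under the normal setting: add documents only for
    # questions the model believes it cannot answer on its own.
    if can_answer(question, None):
        return answer(question, None)
    return answer(question, retrieved_docs)
```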
2307.10635 | 34 | Figure 3: Error profiles of GPT-3.5 on the text dataset under six settings, which reveal the distribution of their deficiencies in ten essential problem-solving abilities.
• Scientific literacy: This skill involves a comprehensive understanding of key scientific principles, terminology, and methodologies across a range of disciplines.
• Code conversion skills: This involves the ability to accurately translate solution steps into different programming languages, like Python or Wolfram Language.
• Logical reasoning: This is the ability to make a reasoned argument and to identify fallacies or inconsistencies in an argument or set of data.
• Calculation skills: This involves the ability to accurately carry out mathematical operations and computations. | 2307.10635#34 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
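To make Figure 3 concrete: for each prompting setting, the bar charts report the percentage of annotated errors attributed to each of the ten skills listed above. A minimal illustrative tally (not the authors' code) could look like:

```python
from collections import Counter
from typing import Dict, List

def error_profile(error_skills: List[str]) -> Dict[str, float]:
    """Turn per-error skill labels (one label per incorrect solution under a
    given prompting setting) into a percentage distribution, as plotted in
    Figure 3 (illustrative sketch)."""
    counts = Counter(error_skills)
    total = sum(counts.values())
    return {skill: 100.0 * n / total for skill, n in counts.items()}

# Example for one setting: two calculation errors and one decomposition error.
profile = error_profile(["Calculation skills", "Calculation skills",
                         "Logical decomposition and analysis skills"])
# -> roughly {'Calculation skills': 66.7, 'Logical decomposition and analysis skills': 33.3}
```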
2307.11019 | 34 | be lower compared to the baseline. In contrast, the accuracy surpasses the baseline that always incorporates retrieval augmentation when the judgement under the retrieval-augmented setting is used for decision-making. This result indicates that it is effective to dynamically introduce supporting documents for LLMs according to their priori judgement under the retrieval-augmented setting. Additionally, it further shows that incorporating retrieval augmentation can improve LLMs' awareness of their factual knowledge boundaries. | 2307.11019#34 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
2307.10635 | 35 | • Calculation skills: This involves the ability to accurately carry out mathematical operations and computations.
After identifying this essential skill set, we assess the performance of the LLMs under different settings to discern the specific problem-solving skills they lack. Given the high cost of human annotations required to attribute the cause of incorrect solutions to specific skill deficiencies, we propose a novel self-critique protocol: we design a specific prompt that outlines these abilities, and employ another LLM to serve as a classifier and determine whether a specific error results from the lack of a particular problem-solving skill. Finally, we ask human annotators to scrutinize the classification results, which results in approximately 20% of incorrectly classified skills being discarded. To be specific, we utilize a GPT-3.5 model as the verifier to determine the reason behind each error and pinpoint the missing skill. The details regarding the specific prompts used are provided in Appendix D.1. This verification process is conducted for six settings, with results represented in bar charts (Figure 3). Detailed steps of the evaluation protocol with additional examples are elaborated in Appendix A. | 2307.10635#35 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
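A minimal sketch of the self-critique classification step described in the chunk above: a verifier LLM (GPT-3.5 in the paper) is prompted with an erroneous solution and asked which of the ten skills it lacks. `call_llm` is an assumed wrapper around the verifier, and the prompt wording is illustrative, not the prompt given in Appendix D.1.

```python
from typing import Callable, List

SKILLS: List[str] = [
    "Logical decomposition and analysis skills", "Identification of assumptions",
    "Spatial perception", "Causal reasoning", "Problem deduction skills",
    "Abstract reasoning", "Scientific literacy", "Code conversion skills",
    "Logical reasoning", "Calculation skills",
]

def attribute_error(problem: str, wrong_solution: str, reference_solution: str,
                    call_llm: Callable[[str], str]) -> str:
    """Ask the verifier LLM which skill the erroneous solution most clearly lacks."""
    prompt = (
        "The following solution to a scientific problem is incorrect.\n\n"
        f"Problem:\n{problem}\n\nModel solution:\n{wrong_solution}\n\n"
        f"Reference solution:\n{reference_solution}\n\n"
        "Which ONE of these problem-solving skills is most clearly missing?\n"
        + "\n".join(f"- {s}" for s in SKILLS)
        + "\nAnswer with the skill name only."
    )
    label = call_llm(prompt).strip()
    # If the verifier strays from the list, leave the error unclassified;
    # in the paper, human annotators additionally screen these labels.
    return label if label in SKILLS else "Unclassified"
```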
2307.11019 | 35 | More supporting documents continuously improve the performance of retrieval-augmented LLMs. In Figure 3, we further explore the effect of the supporting document number on retrieval-augmented LLMs by varying this number from 1 to 20. The results reveal that as the supporting document number increases, we observe a continuous improvement in QA performance and a decrease in the give-up rates of LLMs (becoming more confident). [Figure 3 plot: EM and give-up rate of Davinci003 and ChatGPT as k, the number of retrieved documents, increases from 1 to 20.]
Figure 3: The performance and priori judgement of LLMs with increasing supporting document numbers. | 2307.11019#35 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
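A minimal sketch of how the number of supporting documents k can be varied in the retrieval-augmented QA prompt, as in the experiment above (k = 1 to 20). The prompt template is an assumption for illustration, not the paper's exact template.

```python
from typing import List

def build_qa_prompt(question: str, retrieved_docs: List[str], k: int) -> str:
    """Prepend the top-k retrieved documents to a QA prompt."""
    context = "\n\n".join(
        f"Document {i + 1}: {doc}" for i, doc in enumerate(retrieved_docs[:k])
    )
    return ("Answer the question based on the given documents.\n\n"
            f"{context}\n\nQuestion: {question}\nAnswer:")
```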
2307.10635 | 36 | Overall, our findings suggest that there is no universally effective setting: each configuration only enhances some specific abilities and occasionally even hurts other skills that the original GPT models possess. 1. Chain-of-thought (CoT) prompting significantly improves calculation skills in both zero- and few-shot scenarios, with 7.1% and 8.0% error rates caused by calculation ability respectively, considerably lower than the 24.1% error rate of the vanilla zero-shot baseline. However, CoT shows limitations in improving other skills, with 15.2% and 15.2% error rates in causal reasoning ability and logical decomposition ability in the zero-shot CoT setting, respectively, compared to 17.0% and 13.4% in the zero-shot setting. This contradicts previous claims about universal skill enhancement through zero-shot CoT and carefully designed few-shot CoT prompts [46]. In the appendix, we show an example in Figure 4 where the zero-shot setting without CoT generates the correct formula but fails in the calculation steps. In this case, CoT prompting is even unable to | 2307.10635#36 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 36 | Figure 3: The performance and priori judgement of LLMs with increasing supporting document numbers.
However, such a trend gradually slows down as the number of supporting documents increases. We also observe that the improvement yielded by the increased supporting document number is not attributable to improved recall: even when the supporting documents for a question are all golden documents (described in Section 3.3.1), a larger document number still results in improvements. Furthermore, LLMs seem to be insensitive to the ordering of supporting documents, such that the performance remains unaffected even when the supporting documents are reversed or shuffled. | 2307.11019#36 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
2307.10635 | 37 | 4 where the zero-shot setting without CoT generates the correct formula but fails in the calculation steps. In this case, CoT prompting is even unable to use the correct formula, as it misinterprets the specific conditions (non-necessity) in the problem. 2. While the use of external tools significantly reduces calculation errors, it can weaken other skills, particularly code conversion skills, i.e., generating the correct programs for the solution. This issue becomes particularly prominent when using the Wolfram Language, with a 41.1% error rate in code conversion | 2307.10635#37 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 37 | Retrieval augmentation can change the preference of LLMs towards different query categories. To investigate the propensity of LLMs to handle questions with varied characteristics, we separately calculate the answer accuracy of LLMs across different question categories. To achieve this, we utilize supporting documents retrieved by the dense retriever. As shown in Figure 4, LLMs achieve the highest accuracy when dealing with questions in the "which" category, indicating that this type of question may be a strong suit of LLMs. On the other hand, LLMs may fall short on "why" questions in knowledge-intensive scenarios. When retrieval augmentation is incorporated, we observe that the preference of LLMs changes. The overall answer accuracies of LLMs are improved, and the accuracies in most categories increase proportionately. Specifically, LLMs perform best on the question type "who". However, we find that the accuracies of ChatGPT decline for questions falling under the "which" and "declare" categories. This indicates that | 2307.11019#37 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
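A minimal sketch of the per-category analysis described in the chunk above: group questions by category (e.g. "who", "which", "why", "declare") and compute answer accuracy within each group. The data layout is assumed for illustration.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def accuracy_by_category(results: List[Tuple[str, bool]]) -> Dict[str, float]:
    """`results` pairs each question's category with whether the LLM answered
    it correctly; returns accuracy per category."""
    totals: Dict[str, int] = defaultdict(int)
    hits: Dict[str, int] = defaultdict(int)
    for category, correct in results:
        totals[category] += 1
        hits[category] += int(correct)
    return {c: hits[c] / totals[c] for c in totals}
```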
2307.10635 | 38 | 8
Table 5: Comparison of SCIBENCH with other benchmarks. "Level" represents the grade level of problems. "Computation" represents the level of computational type that problems use. "w/ Solution" represents whether problems contain detailed solutions. "Type" represents what format most problems of the dataset use; "MT" denotes multiple-choice format and "Free" denotes free-response format. "Human" indicates whether the analysis process employs a human annotation process. "Auto" represents whether the analysis process uses an automatic annotation process. | 2307.10635#38 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.10635 | 39 | Benchmark Level Dataset Computation w/ Solution Type Zero-Shot Experiment Few-Shot CoT Analysis Tool Human Auto ScienceQA [28] IconQA [27] TabMWP [30] GSM8K [9] MATH [17] LILA [33] MNLU [16] CEval [21] AGIEval [49] TheroemQA [8] Algebra Grade 1-12 Algebra Grade 1-12 Algebra Grade 1-12 Algebra Grade 1-12 Exponentiation High School Exponentiation High School High School & College Exponentiation High School & College Differentiation Exponentiation High School & College Differentiation College Yes No Yes Yes Yes Yes No No No No MT MT Free Free Free Free MT MT MT Free Yes No No No No Yes No No Yes No Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes No No No No No No Yes Yes Yes No No No No No No No No No Yes No No No No No No No No Yes No No No No No No No No No No No SCIBENCH College Differentiation Yes Free Yes Yes Yes Yes Yes Yes | 2307.10635#39 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 39 | Figure 4: The proportion of questions answered correctly by LLMs in different question categories under two QA prompting settings.
tively enhance ChatGPT's ability to answer such types of questions. In contrast, Davinci003 exhibits improved accuracies across all categories of questions, showcasing its superior capability in leveraging retrieval augmentation.
# 3.3 How do Different Supporting Document Characteristics Affect LLMs?
We have explored the effect of retrieval augmentation on the performance and knowledge boundaries of LLMs. Generally, the retrieval results consist of documents with varying characteristics, which might lead to different effects of retrieval augmentation. For this purpose, we continue to study how different characteristics of supporting documents influence LLMs. In our experiments, we characterize documents by the following factors: the relevance between the document and the question, the presence of an answer within the document, and the number and proportion of golden documents. | 2307.11019#39 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
2307.10635 | 40 | skill, compared to 0.9% in the few-shot CoT setting. Despite providing grammar specifications in system prompts and a few examples as demonstrations, most attempts at code conversion result in syntax errors. In Wolfram Language, the errors mainly come from violations of variable rules (for instance, Wolfram Language reserves certain letters such as E as protected symbols and disallows underscores in variable names) or incorrect usage of certain functions.
Additionally, few-shot learning does not universally improve scientific problem-solving skills, as indicated by the comparison between the zero-shot and few-shot CoT settings. The improvement in one skill is offset by shortcomings in others: although the few-shot CoT setting results in a 6.3% reduction in errors related to causal reasoning, it also leads to an increase in errors associated with other skills, such as logical decomposition and calculation.
Moreover, the skill of identifying assumptions appears to be most lacking in the zero-shot setting without a system prompt. In this scenario, the LLM does not have any predefined direction to follow. However, when a system prompt specifies which scientific domain the model is tackling, this issue is significantly mitigated, decreasing this error from 11.6% to 5.4%.
# 5 Related Work | 2307.10635#40 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 40 | 3.3.1 Sampling Strategies. To thoroughly study the impact of supporting documents on LLMs, we propose to provide LLMs with supporting documents of different characteristics for obtaining answers: (a) Golden documents refer to documents containing correct answers to the question, which are sampled from top to bottom in the question's top 100 retrieval results; (b) Highly-related incorrect documents refer to documents that are highly relevant to the question but do not contain the correct answer. They are also sampled from top to bottom in the question's top 100 retrieval results; (c) Weakly-related incorrect documents are documents weakly relevant to the query that do not contain the correct answer. We randomly sample documents from the | 2307.11019#40 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
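A minimal sketch of sampling strategies (a) and (b) from the chunk above, which are fully specified there: walk the question's top-100 retrieval list from top to bottom and keep documents that do (golden) or do not (highly-related incorrect) contain the correct answer. `has_answer` is an assumed checker for answer presence; the weakly-related and random settings are omitted here.

```python
from typing import Callable, List

def sample_supporting_docs(top100: List[str], has_answer: Callable[[str], bool],
                           kind: str, k: int) -> List[str]:
    """Sample k supporting documents for a question from its top-100 retrieval list."""
    if kind == "golden":                      # relevant documents containing the answer
        pool = [d for d in top100 if has_answer(d)]
    elif kind == "highly_related_incorrect":  # relevant documents without the answer
        pool = [d for d in top100 if not has_answer(d)]
    else:
        raise ValueError(f"strategy not sketched here: {kind}")
    return pool[:k]                           # sampled from top to bottom
```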
2307.10635 | 41 | # 5 Related Work
Traditional benchmarks primarily focus on evaluating the general abilities of models. For instance, SQuAD [36] offers a dataset designed for evaluating the reading comprehension ability of models. GLUE [44] is a model-agnostic tool for evaluating and analyzing performance across diverse natural language understanding tasks. Cosmos QA [20] offers questions in natural language contexts to assess common sense reasoning abilities of models. HumanEval [5] is a handwritten dataset evaluating the coding ability of models, featuring 164 Python programming problems. BIG-Bench [38] is a large-scale general-purpose test suite comprising 204 multiple-choice or exact-match tasks, while BIG-Bench Hard [40] poses particularly challenging chain-of-thought prompts. HELM [24] presents a systematic, multi-metric evaluation of LLMs, highlighting their capabilities, limitations, and risks. | 2307.10635#41 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 41 | Supporting Doc: Davinci003 (EM, F1, Give-up, Eval-Right, Eval-Acc); ChatGPT (EM, F1, Give-up, Eval-Right, Eval-Acc)
None: Davinci003 EM 26.37, F1 35.95, Give-up 27.17%, Eval-Right 71.27%, Eval-Acc 46.88%; ChatGPT EM 30.89, F1 42.14, Give-up 32.05%, Eval-Right 87.09%, Eval-Acc 36.85%
Golden: Davinci003 EM 52.35, F1 64.10, Give-up 14.96%, Eval-Right 50.80%, Eval-Acc 71.09%; ChatGPT EM 45.93, F1 58.82, Give-up 24.35%, Eval-Right 67.26%, Eval-Acc 54.50%
Retrieved: Davinci003 EM 40.58, F1 52.22, Give-up 14.52%, Eval-Right 47.78%, Eval-Acc 69.67%; ChatGPT EM 35.79, F1 47.68, Give-up 27.53%, Eval-Right 63.35%, Eval-Acc 55.03%
Highly-related: Davinci003 EM 11.66, F1 21.76, Give-up 20.06%, Eval-Right 31.11%, Eval-Acc 58.21%; ChatGPT EM 11.27, F1 20.80, Give-up 47.09%, Eval-Right 51.00%, Eval-Acc 47.27%
Weakly-related: Davinci003 EM 12.99, F1 21.42, Give-up 40.39%, Eval-Right 24.76%, Eval-Acc 61.68%; ChatGPT EM 9.42, F1 15.83, Give-up 66.40%, Eval-Right 48.75%, Eval-Acc 46.20%
Random: Davinci003 EM 23.93, F1 32.62, Give-up 87.89%, Eval-Right 21.91%, Eval-Acc 67.12%; ChatGPT EM 12.74, F1 17.39, Give-up 90.97%, Eval-Right 49.89%, Eval-Acc 40.01% | 2307.11019#41 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
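As context for the metric columns in the table above, here is a hedged sketch of how EM, F1, Give-up, Eval-Right, and Eval-Acc could be computed. EM and F1 follow the standard open-domain QA definitions; the Give-up, Eval-Right, and Eval-Acc definitions are one plausible reading of the priori and posteriori judgements described in this paper's abstract and findings, not its actual evaluation script, and the record field names are assumptions.

```python
import re
from collections import Counter

def normalize(text):
    # Standard open-domain QA normalization: lowercase, drop punctuation and articles.
    text = re.sub(r"\b(a|an|the)\b", " ", text.lower())
    text = re.sub(r"[^a-z0-9 ]", " ", text)
    return " ".join(text.split())

def exact_match(pred, golds):
    return any(normalize(pred) == normalize(g) for g in golds)

def token_f1(pred, golds):
    def f1_one(p, g):
        p_toks, g_toks = normalize(p).split(), normalize(g).split()
        overlap = sum((Counter(p_toks) & Counter(g_toks)).values())
        if not p_toks or not g_toks or overlap == 0:
            return 0.0
        prec, rec = overlap / len(p_toks), overlap / len(g_toks)
        return 2 * prec * rec / (prec + rec)
    return max(f1_one(pred, g) for g in golds)

def aggregate(records):
    """records: dicts with keys pred, golds, gave_up (priori judgement),
    says_right (posteriori judgement); these field names are assumptions."""
    n = len(records)
    return {
        "EM": 100 * sum(exact_match(r["pred"], r["golds"]) for r in records) / n,
        "F1": 100 * sum(token_f1(r["pred"], r["golds"]) for r in records) / n,
        "Give-up": 100 * sum(r["gave_up"] for r in records) / n,
        "Eval-Right": 100 * sum(r["says_right"] for r in records) / n,
        # Eval-Acc: how often the posteriori judgement matches actual correctness.
        "Eval-Acc": 100 * sum(r["says_right"] == exact_match(r["pred"], r["golds"])
                              for r in records) / n,
    }
```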
Recent benchmarks focus on assessing problem-solving skills of LLMs, particularly in scientific and mathematical domains [31, 11, 30, 49, 33, 8, 14, 16]. GSM8K [9] is a widely used math dataset containing 8.5K grade school math word problems. ScienceQA [28] is a multimodal question-answering dataset with accompanying lecture and explanation annotations. The MATH dataset [17] presents a challenging collection of 12.5K math problems gathered from math competitions. LILA [33] extends 20 datasets by including task instructions and Python program solutions. However, the majority of those benchmarks concentrate on grade or high school level tasks involving basic arithmetic operations such as addition, multiplication, and exponentiation, rather than more sophisticated operations like differentiation. TheoremQA [8] is a theorem-oriented dataset comprising 800 high-quality questions that aim to evaluate the ability of LLMs to apply theorems to solve problems. However, it does not offer an in-depth qualitative analysis of its benchmark. Galactica [41] provides a set of scientific tasks, including LaTeX equation conversions, domain knowledge probes, citation prediction | 2307.10635#42 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 42 | Table 4: Evaluation results of retrieval-augmented LLMs with supporting documents of various qualities on Natural Questions, where the supporting documents are obtained from dense retrieval. We place different settings according to the relevance between the documents and the question from high to low.
top 100 retrieval results of the question (excluding highly-related incorrect documents); (d) Random incorrect documents refer to documents randomly sampled from the entire corpus D, which do not contain the correct answers for the given question. In this part of the experiment, we sample ten documents per query for each setting from the retrieval results acquired by the dense retriever.
# 3.3.2 Findings | 2307.11019#42 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
2307.10635 | 43 | and chemical QA. C-EVAL [21] focuses on evaluating LLMs in a Chinese context, offering questions from humanities to science and engineering. AGIEval [49] evaluates the performance of LLMs in human-centric standardized exams, such as college entrance exams, law school admission tests, math competitions, and lawyer qualification tests. It utilizes human-annotated qualitative analysis to evaluate the capabilities of the model. However, relying on human labor for direct solution analysis can be costly. Our evaluation protocol, based on predefined fundamental problem-solving skills, enables automated classification of deficient skills for each incorrectly answered question. This approach enables a more comprehensive and larger scale of qualitative analysis results. We include the comparison between different benchmarks in Table 5.
# 6 Conclusion | 2307.10635#43 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 43 | # 3.3.2 Findings
LLMs demonstrate enhanced QA capabilities and perception of knowledge boundaries when provided with higher quality supporting documents. We employ the sampling strategy in Section 3.3.1 to generate supporting documents of four types for each question, including golden documents, highly-related incorrect documents, weakly-related incorrect documents, and random incorrect documents. Table 4 presents the results on Davinci003 and ChatGPT. We can see that using golden (high-quality) documents as supporting documents yields better performance compared to using retrieval results as supporting documents. However, if incorrect (low-quality) documents are used as supporting documents, including highly-related incorrect documents, weakly-related incorrect documents, and random incorrect documents, the performance of LLMs becomes inferior to that achieved when using retrieval results as supporting documents. In addition, the give-up rates of LLMs decrease as the quality of supporting documents improves, indicating that LLMs exhibit higher confidence when fortified with high-quality supporting documents. With higher quality supporting documents, the Eval-Acc rates of LLMs increase, indicating that LLMs demonstrate higher accuracy in perceiving their factual knowledge boundaries. | 2307.11019#43 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
2307.10635 | 44 | # 6 Conclusion
In conclusion, this paper presents SCIBENCH, a college-level dataset that includes scientific problems from Mathematics, Physics, and Chemistry, as well as exam questions in Computer Science and Mathematics. We also conduct extensive experiments on two representative models, GPT-3.5 and GPT-4. The evaluation protocol we employ serves as a framework for evaluating advanced problem-solving skills of LLMs in scientific domains. The findings of this study highlight that while large language models (LLMs) exhibit impressive performance on introductory mathematical benchmarks, their mastery of problem-solving skills remains weak. These findings underscore the limitations of current LLMs in achieving satisfactory performance, even with the assistance of various tools. We envision that the SCIBENCH benchmark dataset and evaluation protocol presented in this paper could lay a foundation for future research and enable advancements in understanding and enhancing the problem-solving capabilities of LLMs.
# References
[1] Peter Atkins, Peter William Atkins, and Julio de Paula. Atkins' physical chemistry. Oxford University Press, 2014. 2, 4, 17
[2] Peter Atkins, Julio De Paula, and Ronald Friedman. Physical chemistry: quanta, matter, and change. Oxford University Press, USA, 2014. 2, 4, 17 | 2307.10635#44 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
LLMs tend to rely on the given supporting documents to answer. Based on the above observation, when LLMs generate responses with low-quality supporting documents, the performance is inferior to generating responses based on their own knowledge. This phenomenon indicates that LLMs heavily rely on the given supporting documents during the generation process. Note that we give LLMs the option in the prompt to decide whether to use the supporting documents for a question, such as using their own knowledge to answer when the supporting documents are of poor quality. However, LLMs still tend to rely on supporting documents to answer the questions in this setting. | 2307.11019#44 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
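The chunk 2307.11019#44 above notes that the QA prompt lets the LLM decide whether to use the supporting documents. A hypothetical prompt builder along those lines is sketched below; the wording is paraphrased for illustration and is not the paper's exact template.

```python
def build_qa_prompt(question, supporting_docs=None):
    """Hypothetical retrieval-augmented QA prompt; the wording is illustrative only."""
    if not supporting_docs:
        # Plain QA prompting without retrieval augmentation.
        return f"Answer the following question.\nQuestion: {question}\nAnswer:"
    doc_block = "\n".join(f"Document {i + 1}: {doc}"
                          for i, doc in enumerate(supporting_docs))
    return (
        "Given the following documents, answer the question. "
        "If the documents are of poor quality or unhelpful, you may ignore them "
        "and answer from your own knowledge.\n"
        f"{doc_block}\nQuestion: {question}\nAnswer:"
    )
```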
2307.10635 | 45 | [2] Peter Atkins, Julio De Paula, and Ronald Friedman. Physical chemistry: quanta, matter, and change. Oxford University Press, USA, 2014. 2, 4, 17
[3] William E Boyce, Richard C DiPrima, and Douglas B Meade. Elementary differential equations and boundary value problems. John Wiley & Sons, 2021. 2, 4, 17
[4] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. 1
[5] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021. 9 | 2307.10635#45 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
The level of confidence and reliance on supporting documents of LLMs is determined by the relevance between the question and the supporting documents. Based on the sampling strategies of supporting documents, the relevance between different documents and questions can be ranked as follows, ranging from high to low: golden documents > retrieved documents > highly-related incorrect documents > weakly-related incorrect documents > random incorrect documents. In Table 4, we observe a clear inverse relationship between relevance and the confidence of LLMs (i.e., the probability of giving up to answer and assessing their answers as correct). In addition, using random incorrect documents that are unrelated to the question as supporting documents outperforms using incorrect documents with higher relevance (i.e., highly-related/weakly-related incorrect documents). This observation further shows that LLMs pay more attention to relevant documents when generating responses.
# 4 Conclusion | 2307.11019#45 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
2307.10635 | 46 | [6] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021. 1
[7] Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588, 2022. 1
[8] Wenhu Chen, Ming Yin, Max Ku, Pan Lu, Elaine Wan, Xueguang Ma, Jianyu Xu, Tony Xia, and Xinyi Wang. Theoremqa: A theorem-driven question answering dataset. arXiv preprint arXiv:2305.12524, 2023. 4, 9
[9] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021. 1, 9 | 2307.10635#46 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 46 | # 4 Conclusion
In this work, we have investigated the perception capacity of LLMs regarding factual knowledge boundaries with retrieval augmentation on open-domain QA. In detail, we propose priori and posteriori judgemental prompting, in addition to QA prompting, conducting the normal and retrieval-augmented evaluation. We summarize several key findings, including (1) LLMs exhibit blind confidence in their own ability to answer questions and the quality of their answers, indicating that they cannot accurately perceive their factual knowledge boundaries, (2) LLMs cannot sufficiently utilize the knowledge they possess, and the incorporation of retrieval augmentation effectively enhances their ability to perceive the factual knowledge boundaries, thereby improving their judgement capabilities, (3) LLMs tend to heavily rely on the given retrieval results when answering questions, and the characteristics of the supporting documents significantly influence their reliance. Drawing on these findings, we also adopt a simple approach that dynamically utilizes retrieval augmentation based on the priori judgement of the LLM rather than consistently considering supporting documents, which leads to improved performance.
# References | 2307.11019#46 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
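The conclusion in chunk 2307.11019#46 above mentions a simple approach that applies retrieval augmentation dynamically, based on the LLM's priori judgement, instead of always attaching supporting documents. A minimal sketch of that control flow follows; the prompt wording, the yes/no parsing, and the `llm` and `retriever` callables are assumptions for illustration, not the paper's implementation.

```python
def answer_adaptively(llm, retriever, question, k=10):
    """Sketch: retrieve only when the model judges (a priori) that it cannot
    answer from its own knowledge. `llm` maps a prompt string to a completion;
    `retriever` maps (question, k) to a list of documents. Both are assumed."""
    priori = llm(
        "Can you answer the following question based on your own knowledge? "
        f"Answer yes or no.\nQuestion: {question}"
    )
    if priori.strip().lower().startswith("yes"):
        # The model is confident: answer without supporting documents.
        return llm(f"Answer the following question.\nQuestion: {question}\nAnswer:")
    # Otherwise, fall back to retrieval augmentation.
    docs = retriever(question, k)
    context = "\n".join(f"Document {i + 1}: {d}" for i, d in enumerate(docs))
    return llm(
        "Given the following documents, answer the question.\n"
        f"{context}\nQuestion: {question}\nAnswer:"
    )
```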
2307.10635 | 47 | [10] Thomas Engel and Philip J Reid. Thermodynamics, statistical thermodynamics, and kinetics. Prentice Hall Upper saddle River, 2010. 2, 4, 17
[11] Yao Fu, Litu Ou, Mingyu Chen, Yuhao Wan, Hao Peng, and Tushar Khot. Chain-of-thought hub: A continuous effort to measure large language models' reasoning performance. arXiv preprint arXiv:2305.17306, 2023. 9
[12] Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. PAL: Program-aided language models. arXiv preprint arXiv:2211.10435, 2022. 1
[13] Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, Hongsheng Li, and Yu Qiao. Llama-adapter v2: Parameter-efficient visual instruction model. arXiv preprint arXiv:2304.15010, 2023. 1 | 2307.10635#47 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 47 | # References
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870–1879. | 2307.11019#47 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
2307.10635 | 48 | [14] Taicheng Guo, Kehan Guo, Zhengwen Liang, Zhichun Guo, Nitesh V Chawla, Olaf Wiest, Xiangliang Zhang, et al. What indeed can gpt models do in chemistry? a comprehensive benchmark on eight tasks. arXiv preprint arXiv:2305.18365, 2023. 9
[15] David Halliday, Robert Resnick, and Jearl Walker. Fundamentals of physics. John Wiley & Sons, 2013. 2, 4, 17
[16] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020. 9
[17] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874, 2021. 1, 9
[18] Robert V Hogg, Elliot A Tanis, and Dale L Zimmerman. Probability and statistical inference, volume 993. Macmillan New York, 1977. 2, 4, 17 | 2307.10635#48 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 48 | Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186.
Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, and Danqi Chen. 2019. MRQA 2019 shared task: Evaluating generalization in reading comprehension. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 1–13.
Luyu Gao and Jamie Callan. 2021. Unsupervised corpus aware language model pre-training for dense passage retrieval. arXiv preprint arXiv:2108.05540.
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM: retrieval-augmented language model pre-training. CoRR, abs/2002.08909. | 2307.11019#48 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
2307.10635 | 49 | [19] Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. Large language models can self-improve. arXiv preprint arXiv:2210.11610, 2022. 2
[20] Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Cosmos qa: Machine reading comprehension with contextual commonsense reasoning. arXiv preprint arXiv:1909.00277, 2019. 9
[21] Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, et al. C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322, 2023. 2, 9, 10
[22] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. arXiv preprint arXiv:2205.11916, 2022. 1 | 2307.10635#49 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 49 | Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874–880.
J. Johnson, M. Douze, and H. Jégou. 2019. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data.
Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611.
Ehsan Kamalloo, Nouha Dziri, Charles LA Clarke, and Davood Rafiei. 2023. Evaluating open-domain question answering in the era of large language models. arXiv preprint arXiv:2305.06984. | 2307.11019#49 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
2307.10635 | 50 | [23] Ira N Levine, Daryle H Busch, and Harrison Shull. Quantum chemistry, volume 6. Pearson Prentice Hall Upper Saddle River, NJ, 2009. 2, 4, 17
[24] Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110, 2022. 9
[25] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv preprint arXiv:2304.08485, 2023. 1
[26] Pan Lu, Ran Gong, Shibiao Jiang, Liang Qiu, Siyuan Huang, Xiaodan Liang, and Song-Chun Zhu. Inter-GPS: Interpretable geometry problem solving with formal language and symbolic reasoning. In The Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021), 2021. 4 | 2307.10635#50 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 50 | Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452–466.
Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086–6096. | 2307.11019#50 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
2307.10635 | 51 | [27] Pan Lu, Liang Qiu, Jiaqi Chen, Tony Xia, Yizhou Zhao, Wei Zhang, Zhou Yu, Xiaodan Liang, and Song-Chun Zhu. Iconqa: A new benchmark for abstract diagram understanding and visual language reasoning. arXiv preprint arXiv:2110.13214, 2021. 9
[28] Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. Advances in Neural Information Processing Systems, 35:2507–2521, 2022. 1, 4, 9
[29] Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, and Jianfeng Gao. Chameleon: Plug-and-play compositional reasoning with large language models. arXiv preprint arXiv:2304.09842, 2023. 2, 5
11 | 2307.10635#51 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 51 | Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. CoRR, abs/2203.02155. | 2307.11019#51 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
2307.10635 | 52 | 11
[30] Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, and Ashwin Kalyan. Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning. In International Conference on Learning Representations (ICLR), 2023. 9
[31] Pan Lu, Liang Qiu, Wenhao Yu, Sean Welleck, and Kai-Wei Chang. A survey of deep learning for mathematical reasoning. In The 61st Annual Meeting of the Association for Computational Linguistics (ACL), 2023. 9
[32] Donald A McQuarrie. Quantum chemistry. University Science Books, 2008. 2, 4, 17
[33] Swaroop Mishra, Matthew Finlayson, Pan Lu, Leonard Tang, Sean Welleck, Chitta Baral, Tanmay Rajpurohit, Oyvind Tafjord, Ashish Sabharwal, Peter Clark, et al. Lila: A unified benchmark for mathematical reasoning. In The 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022. 9
[34] OpenAI. Chatgpt: Optimizing language models for dialogue. https://openai.com/blog/chatgpt/., 2022. 1, 4 | 2307.10635#52 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 52 | Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vladimir Karpukhin, Jean Maillard, Vassilis Plachouras, Tim Rocktäschel, and Sebastian Riedel. 2021. KILT: a benchmark for knowledge intensive language tasks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2523–2544.
Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao Chen, Michihiro Yasunaga, and Diyi Yang. 2023. Is chatgpt a general-purpose natural language processing task solver? arXiv preprint arXiv:2302.06476.
Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, and Haifeng Wang. 2021. RocketQA: An optimized training approach to dense passage retrieval for open-domain question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5835–5847. | 2307.11019#52 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
2307.10635 | 53 | [34] OpenAI. Chatgpt: Optimizing language models for dialogue. https://openai.com/blog/chatgpt/., 2022. 1, 4
[35] OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. 1, 4
[36] Pranav Rajpurkar, Robin Jia, and Percy Liang. Know what you don't know: Unanswerable questions for SQuAD. arXiv preprint arXiv:1806.03822, 2018. 9
[37] Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023. 2, 5 | 2307.10635#53 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 53 | Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. ArXiv, abs/1910.10683.
Ruiyang Ren, Shangwen Lv, Yingqi Qu, Jing Liu, Wayne Xin Zhao, QiaoQiao She, Hua Wu, Haifeng Wang, and Ji-Rong Wen. 2021a. PAIR: Leveraging passage-centric similarity relation for improving dense passage retrieval. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2173–2183.
Ruiyang Ren, Yingqi Qu, Jing Liu, Wayne Xin Zhao, Qiaoqiao She, Hua Wu, Haifeng Wang, and Ji-Rong Wen. 2021b. RocketQAv2: A joint training method for dense passage retrieval and passage re-ranking. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2825–2835. | 2307.11019#53 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
2307.10635 | 54 | [38] Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615, 2022. 9
[39] James Stewart, Saleem Watson, and Daniel Clegg. Calculus: Early transcendentals, 8th Edition, Brooks/Cole, Cengage Learning, 2012. 2, 4, 17
[40] Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, et al. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261, 2022. 9 | 2307.10635#54 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 54 | Ruiyang Ren, Wayne Xin Zhao, Jing Liu, Hua Wu, Ji-Rong Wen, and Haifeng Wang. 2023. TOME: A two-stage approach for model-based retrieval. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, pages 6102–6114.
Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends® in Information Retrieval, 3(4):333–389.
Hao Sun, Xiao Liu, Yeyun Gong, Yan Zhang, and Nan Duan. 2023. Beamsearchqa: Large language models are strong zero-shot qa solver. arXiv preprint arXiv:2305.14766.
Cunxiang Wang, Sirui Cheng, Zhikun Xu, Bowen Ding, Yidong Wang, and Yue Zhang. 2023. Evaluating open question answering evaluation. arXiv preprint arXiv:2305.12421. | 2307.11019#54 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
2307.10635 | 55 | [41] Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. Galactica: A large language model for science. arXiv preprint arXiv:2211.09085, 2022. 9
[42] Stephen T Thornton and Jerry B Marion. Classical dynamics of particles and systems. Cengage Learning, 2021. 2, 4, 17
[43] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. 1
[44] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461, 2018. 9 | 2307.10635#55 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 55 | Peilin Yang, Hui Fang, and Jimmy Lin. 2017. Anserini: Enabling the use of lucene for information retrieval research. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, Shinjuku, Tokyo, Japan, August 7-11, 2017, pages 1253–1256.
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369–2380.
Zhangyue Yin, Qiushi Sun, Qipeng Guo, Jiawen Wu, Xipeng Qiu, and Xuanjing Huang. 2023. Do large language models know what they don't know? In Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, July 9-14, 2023, pages 8653–8665. | 2307.11019#55 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
2307.10635 | 56 | [45] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022. 2
[46] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022. 1, 2, 5, 8
[47] Renrui Zhang, Jiaming Han, Aojun Zhou, Xiangfei Hu, Shilin Yan, Pan Lu, Hongsheng Li, Peng Gao, and Yu Qiao. Llama-adapter: Efficient fine-tuning of language models with zero-init attention. arXiv preprint arXiv:2303.16199, 2023. 1
[48] Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, and Alex Smola. Multimodal chain-of-thought reasoning in language models. arXiv preprint arXiv:2302.00923, 2023. 1
12 | 2307.10635#56 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 56 | Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, and Meng Jiang. 2022. Generate rather than retrieve: Large language models are strong context generators. arXiv preprint arXiv:2209.10063.
Xiang Yue, Boshi Wang, Kai Zhang, Ziru Chen, Yu Su, and Huan Sun. 2023. Automatic evaluation of attribution by large language models. arXiv preprint arXiv:2305.06311.
Wayne Xin Zhao, Jing Liu, Ruiyang Ren, and Ji-Rong Wen. 2022. Dense text retrieval based on pre-trained language models: A survey. arXiv preprint arXiv:2211.14876.
Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023. A survey of large language models. arXiv preprint arXiv:2303.18223. | 2307.11019#56 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
2307.10635 | 57 | 12
[49] Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. Agieval: A human-centric benchmark for evaluating foundation models. arXiv preprint arXiv:2304.06364, 2023. 2, 7, 9, 10
[50] Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, and Ed Chi. Least-to-most prompting enables complex reasoning in large language models. arXiv preprint arXiv:2205.10625, 2022. 2
# A Problem Solving Abilities of Current LLMs
# A.1 Example
Problem Calculate the de Broglie wavelength of an electron traveling at 1.00% of the speed of light.
Correct Solution The mass of an electron is 9.109 × 10^-31 kg. One percent of the speed of light is
v = (0.0100)(2.998 × 10^8 m · s^-1) = 2.998 × 10^6 m · s^-1
The momentum of the electron is given by
p = m_e v = (9.109 × 10^-31 kg)(2.998 × 10^6 m · s^-1) | 2307.10635#57 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.11019 | 57 | Kun Zhou, Yeyun Gong, Xiao Liu, Wayne Xin Zhao, Yelong Shen, Anlei Dong, Jingwen Lu, Rangan Majumder, Ji-Rong Wen, and Nan Duan. 2022. Simans: Simple ambiguous negatives sampling for dense text retrieval. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track, pages 548–559.
Shengyao Zhuang, Hang Li, and G. Zuccon. 2022. Implicit feedback for dense passage retrieval: A counterfactual approach. ArXiv, abs/2204.00718. | 2307.11019#57 | Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation | Knowledge-intensive tasks (e.g., open-domain question answering (QA)) require
a substantial amount of factual knowledge and often rely on external
information for assistance. Recently, large language models (LLMs) (e.g.,
ChatGPT), have demonstrated impressive prowess in solving a wide range of tasks
with world knowledge, including knowledge-intensive tasks. However, it remains
unclear how well LLMs are able to perceive their factual knowledge boundaries,
particularly how they behave when incorporating retrieval augmentation. In this
study, we present an initial analysis of the factual knowledge boundaries of
LLMs and how retrieval augmentation affects LLMs on open-domain QA. Specially,
we focus on three primary research questions and analyze them by examining QA
performance, priori judgement and posteriori judgement of LLMs. We show
evidence that LLMs possess unwavering confidence in their capabilities to
respond to questions and the accuracy of their responses. Furthermore,
retrieval augmentation proves to be an effective approach in enhancing LLMs'
awareness of knowledge boundaries, thereby improving their judgemental
abilities. Additionally, we also find that LLMs have a propensity to rely on
the provided retrieval results when formulating answers, while the quality of
these results significantly impacts their reliance. The code to reproduce this
work is available at https://github.com/RUCAIBox/LLM-Knowledge-Boundary. | http://arxiv.org/pdf/2307.11019 | Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang | cs.CL, cs.IR | null | null | cs.CL | 20230720 | 20230723 | [
{
"id": "2302.06476"
},
{
"id": "2108.05540"
},
{
"id": "2305.14766"
},
{
"id": "2305.12421"
},
{
"id": "2209.10063"
},
{
"id": "2303.18223"
},
{
"id": "2305.06984"
},
{
"id": "2211.14876"
},
{
"id": "2305.06311"
}
] |
2307.10635 | 58 | The momentum of the electron is given by
p = m_e v = (9.109 × 10^-31 kg)(2.998 × 10^6 m · s^-1) = 2.73 × 10^-24 kg · m · s^-1
The de Broglie wavelength of this electron is
λ = h/p = (6.626 × 10^-34 J · s)/(2.73 × 10^-24 kg · m · s^-1) = 2.43 × 10^-10 m = 243 pm
This wavelength is of atomic dimensions. Final Answer: 243
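As a quick numerical sanity check of the arithmetic above (added for illustration only, using the rounded constants quoted in the solution), the same result can be reproduced in a few lines of Python:

```python
# Minimal check of the textbook result: lambda = h / (m_e * v) with v = 1.00% of c.
h = 6.626e-34      # Planck constant, J*s
m_e = 9.109e-31    # electron rest mass, kg
c = 2.998e8        # speed of light, m/s

v = 0.0100 * c     # 2.998e6 m/s
p = m_e * v        # ~2.73e-24 kg*m/s
lam = h / p        # ~2.43e-10 m

print(f"p      = {p:.3e} kg*m/s")
print(f"lambda = {lam:.3e} m (~{lam * 1e12:.0f} pm)")  # ~243 pm
```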
ChatGPT Solution with Chain-of-Thought Method Let's start by recalling the de Broglie wavelength formula: λ = h/p, where h is the Planck constant and p is the momentum of the particle. To calculate the momentum of the electron, we can use the relativistic momentum formula:
p = m_e v / sqrt(1 - v^2/c^2)
where m_e is the mass of the electron, v is its velocity, and c is the speed of light. Plugging in the given values, we get:
p = 8.19 × 10^-23 kg · m/s
Now we can plug this momentum into the de Broglie wavelength formula:
λ = h/p = (6.626 × 10^-34 J · s)/(8.19 × 10^-23 kg · m/s) = 8.09 pm
Therefore, the de Broglie wavelength of the electron is 8.09 pm. Final Answer: 8.09 | 2307.10635#58 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.10635 | 59 | Therefore, the de Broglie wavelength of the electron is 8.09 pm. Final Answer: 8.09
ChatGPT Solution without Chain-of-Thought Method The de Broglie wavelength of a particle is given by the equation:λ = h particle. The momentum of an electron is given by:
p where h is the Planck constant and p is the momentum of the
# p = mv
where m is the mass of the electron and v is its velocity. We are given that the electron is traveling at 1.00% of the speed of light, which is:v = 0.01cwhere c is the speed of light. The mass of an electron is:m = 9.11 Ã 10â31, kgPutting it all together, we have:
λ = h mv = h m(0.01c)
Substituting the values of h, m, and c, we get:
λ = (6.626 × 10⁻³⁴ J·s)/(9.11 × 10⁻³¹ kg · 0.01 · 2.998 × 10⁸ m/s) = 3.31 × 10⁻¹² m = 3.31 pm
Therefore, the de Broglie wavelength of the electron is 3.31 pm. Final Answer: 3.31
Figure 4: An example problem, inaccurately solved by ChatGPT both with and without the Chain-of-Thought method, is provided to highlight different types of errors. In the example, the mistaken step is highlighted in red.
In the context of each specific capability, we present several exemplary errors accompanied by their corresponding classifications and explanations derived from the GPT model. Referencing Figure 4, the ChatGPT solution employing the Chain-of-Thought method corresponds to error category 4, "Causal Reasoning". The model explanation posits that "The error reason category is 4. Causal Reasoning. The model solution uses the relativistic momentum formula to calculate the momentum of the electron, which is not necessary for this problem since the electron is traveling at only 1.00% of the speed of light. The relativistic momentum formula is only needed when the velocity of the
particle is a significant fraction of the speed of light. Therefore, using this formula in this problem leads to an incorrect value for the momentum and consequently an incorrect value for the de Broglie wavelength." Meanwhile, the ChatGPT solution that does not utilize the Chain-of-Thought method is categorized under error case 10, "Calculation Skills". This example is further highlighted as an illustrative showcase for explanations of the Chain-of-Thought method in Section 4. More examples resulting in different error types are shown in Figures 5, 6, and 7.
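As an arithmetic cross-check of the discussion above, the short Python sketch below (ours, not part of the benchmark or of Figure 4) evaluates the non-relativistic relation λ = h/(mv) for an electron traveling at 1.00% of the speed of light, using standard rounded values for the physical constants; neither answer reported in Figure 4 agrees with the value this formula yields, consistent with the figure marking both solutions as inaccurate.

```python
# Non-relativistic de Broglie wavelength of an electron at 1.00% of c.
h = 6.626e-34        # Planck constant, J*s
m_e = 9.109e-31      # electron mass, kg
c = 2.998e8          # speed of light, m/s

v = 0.01 * c         # 1.00% of the speed of light
p = m_e * v          # classical momentum is adequate at this speed
wavelength = h / p   # de Broglie relation: lambda = h / p

print(f"{wavelength:.3e} m")  # ~2.43e-10 m, i.e. roughly 243 pm
```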
Problem A living room has floor dimensions of 3.5 m and 4.2 m and a height of 2.4 m. What does the air in the room weigh when the air pressure is 1.0 atm?
Correct Solution The air's weight is equal to mg, where m is its mass. Mass m is related to the air density ρ and the air volume V (ρ = m/V). Putting the two ideas together and taking the density of air at 1.0 atm, we find
Calculation: mg = (ρV)g = (1.21 kg/m³)(3.5 m × 4.2 m × 2.4 m)(9.8 m/s²) = 418 N
Final Answer: 418
Model Solution 101,640 N. To find the weight of the air in the room, we need to first calculate the volume of the room: V = lwh = 3.5 m × 4.2 m × 2.4 m = 35.28 m³. Next, we can use the ideal gas law to find the mass of the air in the room: PV = nRT, where P is the pressure, V is the volume, n is the number of moles of gas, R is the gas constant, and T is the temperature. Assuming room temperature of 20°C (293 K), we can solve for n: n = PV/(RT)
Error Reason The error in the model's solution is that they used the ideal gas law to find the mass of the air in the room, which assumes that the air behaves like an ideal gas. However, the ideal gas law is not applicable in this case because the problem does not provide information about the temperature of the air. The correct approach is to use the density of air at 1.0 atm and the volume of the room to find the mass of the air, as
shown in the correct solution. Therefore, the reason for the error in the model's solution is 2. Identification of Assumptions.
Figure 5: An example problem inaccurately solved due to error reason 2. Identification of Assumptions. "Error Reason" denotes the output from the LLM Verifier utilized in the classification of error causes. In the example, the mistaken step is highlighted in red.
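For comparison with the ideal-gas detour in the model solution, the following minimal Python sketch (ours, not part of the paper) reproduces the density-based route taken in the correct solution of Figure 5; the value 1.21 kg/m³ is the standard density of air at 1.0 atm and room temperature.

```python
# Weight of the air in the room via weight = rho * V * g (Figure 5, correct route).
rho = 1.21                  # kg/m^3, density of air at 1.0 atm and room temperature
volume = 3.5 * 4.2 * 2.4    # m^3, floor dimensions times ceiling height
g = 9.8                     # m/s^2, gravitational acceleration

weight = rho * volume * g
print(f"{weight:.0f} N")    # ~418 N, matching the textbook answer
```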
# A.2 Self-Critic
In order to assess the effectiveness of our evaluation protocol's classification, we enlisted the assistance of two annotators to determine whether the errors identified by the model verifier were accurate. Through the annotation of 151 samples across different settings, we observed that 123 of them were correctly classified, resulting in an accuracy rate of 81.45%.
Problem Use Simpson's Rule with n = 10 to approximate ∫₁² (1/x) dx.
Correct Solution Putting f(x) = 1/x, n = 10, and Δx = 0.1 in Simpson's Rule, we obtain
∫₁² (1/x) dx ≈ S₁₀ = (Δx/3)[f(1) + 4f(1.1) + 2f(1.2) + 4f(1.3) + ··· + 2f(1.8) + 4f(1.9) + f(2)]
= (0.1/3)(1/1 + 4/1.1 + 2/1.2 + 4/1.3 + 2/1.4 + 4/1.5 + 2/1.6 + 4/1.7 + 2/1.8 + 4/1.9 + 1/2)
≈ 0.693150
Final Answer: 0.693150
Model Solution Explanation for Problem 6: With n = 10 we have h = (b − a)/n = 1/10 and xᵢ = 1 + ih for i = 0, 1, 2, ..., 10. Then Simpson's Rule gives
∫₁² (1/x) dx ≈ (h/3)[f(x₀) + 4f(x₁) + 2f(x₂) + 4f(x₃) + ··· + 2f(x₈) + 4f(x₉) + f(x₁₀)],
where f(x) = 1/x.
Wolfram language for Problem 6:
(* Constants *) n = 10; a = 1; b = 2;
(* Function to integrate *) f[x_] := 1/x;
(* Simpson's Rule *)
h = (b - a)/n; x = Table[a + i*h, {i, 0, n}];
approximation = (h/3)*Total[Table[If[EvenQ[i], 2, 4]*f[x[[i]]], {i, 1, n - 1}] + {f[x[[0]]], f[x[[n]]], 4*f[x[[n/2]]]}];
approximation
Final Answer: "Objects of unequal length in {4, 20/11, 10/3, 20/13, 20/7, 4/3, 5/2, 20/17, 20/9} + {1/List, 10/19, 20/7} cannot be combined."
Error Reason The error reason category for the model's solution is 8. Code Conversion Skills. The model's solution contains a syntax error in the Wolfram Language code, which causes the program to exceed the recursion limit and terminate prematurely. This error likely occurred during the translation of the correct solution's code into Wolfram Language.
Figure 6: An example problem inaccurately solved due to error reason 8. Code Conversion Skills. "Error Reason" denotes the output from the LLM Verifier utilized in the classification of error causes. In the example, the mistaken step is highlighted in red.
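For reference, the short Python sketch below (ours, not part of the paper) carries out the Simpson's Rule computation that the Wolfram Language code in Figure 6 fails to complete.

```python
# Simpson's Rule with n = 10 for the integral of 1/x from 1 to 2 (cf. Figure 6).
a, b, n = 1.0, 2.0, 10
h = (b - a) / n
f = lambda x: 1.0 / x

xs = [a + i * h for i in range(n + 1)]
interior = sum((2 if i % 2 == 0 else 4) * f(xs[i]) for i in range(1, n))
approximation = h / 3 * (f(xs[0]) + interior + f(xs[n]))

print(f"{approximation:.6f}")  # 0.693150, matching the textbook value
```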
Problem Find the bonding and antibonding Hückel molecular orbitals for ethene.
Correct Solution The equations for c1 and c2 are c1(α − E) + c2β = 0 and c1β + c2(α − E) = 0. For E = α + β, either equation yields c1 = c2. Thus, ψ_b = c1(2p_z1 + 2p_z2). The value of c1 can be found by requiring that the wave function be normalized. The normalization condition on ψ_b gives c1²(1 + 2S + 1) = 1. Using the Hückel assumption that S = 0, we find that c1 = 1/√2. Substituting E = α − β into either of the equations for c1 and c2 yields c1 = −c2, or ψ_a = c1(2p_z1 − 2p_z2). The normalization condition gives c1²(1 − 2S + 1) = 1, or c1 = 1/√2.
Final Answer: 0.7071067
Model Solution therefore not provided as this problem requires visual representations and cannot be solved through text alone.
Error Reason The error reason category is 3. Spatial Perception. This problem requires visual representations to find the bonding and antibonding
Hückel molecular orbitals for ethene. The model's solution is incomplete as it only provides equations and does not provide any visual representation.
Figure 7: An example problem inaccurately solved due to error reason 3. Spatial Perception. "Error Reason" denotes the output from the LLM Verifier utilized in the classification of error causes.
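As a numerical aside (ours, not part of the paper), the coefficient 1/√2 ≈ 0.7071 quoted in Figure 7 can be cross-checked by diagonalizing the 2×2 Hückel secular matrix for ethene with the overlap S set to zero; the values of α and β below are arbitrary energy units and only set the scale, not the coefficients.

```python
import numpy as np

# Hückel secular matrix for ethene with S = 0: [[alpha, beta], [beta, alpha]].
alpha, beta = 0.0, -1.0                 # arbitrary energy units (beta < 0 by convention)
H = np.array([[alpha, beta],
              [beta, alpha]])

energies, coeffs = np.linalg.eigh(H)    # eigenvalues returned in ascending order
print(energies)                         # [alpha + beta, alpha - beta] -> [-1.  1.]
print(np.abs(coeffs[:, 0]))             # bonding MO coefficients: [0.70710678 0.70710678]
print(np.abs(coeffs[:, 1]))             # antibonding MO coefficients: [0.70710678 0.70710678]
```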
# B SciBench: Textbook Sources
# B.1 Textbook
• PHYSICAL CHEMISTRY, ATKINS ET AL. [1] (atkins) provides an exploration of equilibrium, structure, and reactions, integrating contemporary techniques like nanoscience, spectroscopy, and computational chemistry.
• QUANTUM CHEMISTRY, MCQUARRIE [32] (chemmc) meticulously covers Quantum Mechanics, from foundational principles like blackbody radiation and Heisenberg's Uncertainty Principle to complex topics such as Schrödinger's equation, quantum mechanical operators, and the application of quantum mechanics in chemical bonding.
• QUANTUM CHEMISTRY, LEVINE ET AL. [23] (quan) explores quantum chemistry, providing a detailed understanding of the Schrödinger equation, particle behavior in various scenarios, quantum mechanics operators, and other foundational quantum principles. It delves into specific applications like the electronic structure of diatomic and polyatomic molecules, variation methods, perturbation theory, electron spin and its implications in quantum mechanics, as well as various computational methods for molecular quantum mechanics.
• PHYSICAL CHEMISTRY, QUANTA, MATTER, AND CHANGE, ATKINS ET AL. [2] (matter) combines physics and mathematics, beginning with basics like differentiation and integration, advancing through quantum mechanics and atomic structure, then exploring thermodynamics, molecular motion, and chemical kinetics. Each section is supplemented with mathematical concepts such as differential equations, vectors, and probability theory.
• CLASSICAL DYNAMICS OF PARTICLES AND SYSTEMS, THORNTON AND MARION [42] (class) initiates with an exploration of fundamental mathematical concepts, discussing scalars, vectors, matrix operations, coordinate transformations, differentiation, and integration of vectors, using these constructs to illustrate concepts like velocity, acceleration, and angular velocity. It then transitions into the realm of Newtonian mechanics, detailing Newton's laws, frames of reference, and the equation of motion for a single particle.
• THERMODYNAMICS, STATISTICAL THERMODYNAMICS, AND KINETICS, [10] (thermo) navigates through thermodynamics' principles, from fundamental concepts to complex laws, further discussing real and ideal gases, solutions, electrochemical cells, and statistical thermodynamics. It concludes with an examination of the kinetic theory of gases, transport phenomena, and chemical kinetics.
• FUNDAMENTALS OF PHYSICS, HALLIDAY ET AL. [15] (fund) covers undergraduate physics topics, ranging from fundamental concepts like motion and energy to more advanced areas such as quantum physics and nuclear physics.
• ELEMENTARY DIFFERENTIAL EQUATIONS AND BOUNDARY VALUE PROBLEMS, [3] (diff) provides a detailed exploration of differential equations, progressing from basic mathematical models to advanced topics like the Laplace Transform, linear systems, numerical methods, and Fourier series. It culminates with a deep dive into nonlinear equations, partial differential equations, and boundary value problems.
• PROBABILITY AND STATISTICAL INFERENCE, [18] (stat) covers probability and statistics, including fundamental concepts, discrete and continuous distributions, bivariate distributions, functions of random variables, and estimation techniques.
• CALCULUS: EARLY TRANSCENDENTALS, [39] (calculus) begins with diagnostic tests in foundational topics, and explores functions from multiple perspectives. It comprehensively covers calculus concepts from limits to three-dimensional analytic geometry, incorporating applications in various fields.
# B.2 Examination
⢠INTRODUCTION TO DATA MINING provides an introductory survey of data mining, which involves the automatic discovery of patterns, associations, changes, and anomalies in large databases. It explores various application areas of data mining, including bioinformatics, e-commerce, envi- ronmental studies, ï¬nancial markets, multimedia data processing, network monitoring, and social service analysis.
⢠FUNDAMENTALS ARTIFICIAL INTELLIGENCE provides an introduction to the core problem- solving and knowledge representation paradigms in artiï¬cial intelligence. It covers Lisp program- ming with regular assignments, as well as topics such as search methods, planning techniques, knowledge structures, natural language processing, expert systems, vision, and parallel architec- tures.
⢠DIFFERENTIAL EQUATIONS covers various topics in differential equations, including ï¬rst-order and second-order linear equations with constant coefï¬cients, power series solutions, and linear systems. Students will explore the principles and applications of these mathematical concepts.
# B.3 Textbook Examples
Problem (fund) Two charged particles are fixed to an x axis: Particle 1 of charge q1 = 2.1 × 10⁻⁸ C is at position x = 20 cm and particle 2 of charge q2 = −4.00q1 is at position x = 70 cm. At what coordinate on the axis (other than at infinity) is the net electric field produced by the two particles equal to zero? Answer: −30 cm
Problem (thermo) N₂O₃ dissociates according to the equilibrium N₂O₃(g) ⇌ NO₂(g) + NO(g). At 298 K and one bar pressure, the degree of dissociation, defined as the ratio of moles of NO₂(g) or NO(g) to the moles of the reactant assuming no dissociation occurs, is 3.5 × 10⁻³. Calculate ΔG° for this reaction. Answer: 28 kJ mol⁻¹
Problem (class) Halley's comet, which passed around the sun early in 1986, moves in a highly elliptical orbit with an eccentricity of 0.967 and a period of 76 years. Calculate its minimum distance from the Sun. Answer: 8.8 × 10¹⁰ m
Problem (quan) A one-particle, one-dimensional system has Ψ = a^(−1/2) e^(−|x|/a) at t = 0, where a = 1.0000 nm. At t = 0, the particle's position is measured. Find the probability that the measured value is between x = 0 and x = 2 nm. Answer: 0.4908
Problem (chemmc) One of the most powerful modern techniques for studying structure is neutron diffraction. This technique involves generating a collimated beam of neutrons at a particular temperature from a high-energy neutron source and is accomplished at several accelerator facilities around the world. If the speed of a neutron is given by vₙ = (3k_B T/m)^(1/2), where m is the mass of a neutron, then what temperature is needed so that the neutrons have a de Broglie wavelength of 50 pm? Answer: 2500 K
Problem (atkins) The change in molar internal energy when CaCO₃(s) as calcite converts to another form, aragonite, is +0.21 kJ mol⁻¹. Calculate the difference between the molar enthalpy and internal energy changes
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
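The Problem (quan) entry reproduced in the chunk above reduces to P = (1/2)(1 - e^(-4)) for the stated wavefunction. A minimal Python sketch of that check follows; it is an illustration added here, not part of the SciBench release.

```python
import math

# Illustrative check of Problem (quan): Psi(x) = a**(-0.5) * exp(-abs(x)/a), a = 1.0000 nm.
# |Psi|^2 = (1/a) * exp(-2*abs(x)/a); integrating from x = 0 to x = 2 nm gives (1/2)*(1 - exp(-4)).
a = 1.0      # nm
upper = 2.0  # nm
prob = 0.5 * (1.0 - math.exp(-2.0 * upper / a))
print(f"P(0 <= x <= 2 nm) = {prob:.4f}")  # ~0.4908, matching the quoted answer
```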
2307.10635 | 76 | as calcite converts to another form, aragonite, is +0.21 kJ mol^-1. Calculate the difference between the molar enthalpy and internal energy changes when the pressure is 1.0 bar given that the densities of the polymorphs are 2.71 g cm^-3 and 2.93 g cm^-3, respectively. Answer: -0.28 Pa m^3 mol^-1 Problem (matter) In an industrial process, nitrogen is heated to 500 K at a constant volume of 1.000 m^3. The gas enters the container at 300 K and 100 atm. The mass of the gas is 92.4 kg. Use the van der Waals equation to determine the approximate pressure of the gas at its working temperature of 500 K. For nitrogen, a = 1.39 dm^6 atm mol^-2, b = 0.0391 dm^3 mol^-1. Answer: 140 atm Problem (calc) A planning engineer for a new alum plant must present some estimates to his company regarding the capacity of a silo designed to contain bauxite ore until it is processed into alum. The ore resembles pink talcum powder and is poured from a conveyor at the top of the silo. The silo is a cylinder 100 ft high with a radius of 200 ft. The conveyor carries | 2307.10635#76 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
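For the Problem (matter) entry in the chunk above, the van der Waals estimate p = nRT/(V - nb) - a*n^2/V^2 can be evaluated directly. The sketch below is an assumed illustration rather than the authors' code; the 28.02 g/mol molar mass of N2 is an assumed constant, not quoted in the problem.

```python
# Illustrative van der Waals check for Problem (matter): 92.4 kg of N2 at 500 K in 1.000 m^3.
R = 0.082057                  # dm^3 atm K^-1 mol^-1
a_vdw, b_vdw = 1.39, 0.0391   # dm^6 atm mol^-2 and dm^3 mol^-1, as quoted in the problem
n = 92.4e3 / 28.02            # amount of N2 in mol (assumed molar mass 28.02 g/mol)
V, T = 1000.0, 500.0          # dm^3, K
p = n * R * T / (V - n * b_vdw) - a_vdw * n**2 / V**2
print(f"p ~ {p:.0f} atm")     # ~140 atm, matching the quoted answer
```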
2307.10635 | 77 | and is poured from a conveyor at the top of the silo. The silo is a cylinder 100 ft high with a radius of 200 ft. The conveyor carries ore at a rate of 60,000π ft^3/h and the ore maintains a conical shape whose radius is 1.5 times its height. If, at a certain time t, the pile is 60 ft high, how long will it take for the pile to reach the top of the silo? Answer: 9.8 h Problem (stat) In a study concerning a new treatment of a certain disease, two groups of 25 participants in each were followed for five years. Those in one group took the old treatment and those in the other took the new treatment. The theoretical dropout rate for an individual was 50% in both groups over that 5-year period. Let X be the number that dropped out in the first group and Y the number in the second group. Assuming independence where needed, give the sum that equals the probability that Y > X + 2. HINT: What is the distribution of Y - X + 25? Answer: 0.3359 Problem (diff) Newton's law of cooling states that the temperature of an object changes at a rate proportional to the difference between its temperature and that of its | 2307.10635#77 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
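The Problem (calc) silo question carried over into the chunk above only needs the cone volume V = (1/3)*pi*(1.5h)^2*h = 0.75*pi*h^3. A minimal sketch of the arithmetic, added here purely as an illustrative check of the quoted 9.8 h:

```python
import math

# Illustrative check of Problem (calc): conical pile with radius = 1.5 * height.
def pile_volume(h_ft: float) -> float:
    # V = (1/3) * pi * (1.5*h)**2 * h = 0.75 * pi * h**3
    return 0.75 * math.pi * h_ft**3

rate = 60_000 * math.pi                          # ft^3 of ore delivered per hour
extra = pile_volume(100.0) - pile_volume(60.0)   # ore needed to grow from 60 ft to 100 ft
print(f"time = {extra / rate:.1f} h")            # 9.8 h, matching the quoted answer
```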
2307.10635 | 78 | (diff) Newton's law of cooling states that the temperature of an object changes at a rate proportional to the difference between its temperature and that of its surroundings. Suppose that the temperature of a cup of coffee obeys Newton's law of cooling. If the coffee has a temperature of 200°F when freshly poured, and 1 min later has cooled to 190°F in a room at 70°F, determine when the coffee reaches a temperature of 150°F | 2307.10635#78 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
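For the Problem (diff) cooling question in the chunk above, Newton's law gives T(t) = 70 + 130*e^(kt) with k fixed by T(1) = 190. The sketch below is an illustrative check (about 6.07 min); it is not taken from the paper.

```python
import math

# Illustrative check of the Problem (diff) coffee-cooling question.
# Newton's law of cooling: T(t) = T_room + (T0 - T_room) * exp(k * t).
T_room, T0 = 70.0, 200.0
k = math.log((190.0 - T_room) / (T0 - T_room))          # from T(1 min) = 190 F, so k = ln(120/130)
t_150 = math.log((150.0 - T_room) / (T0 - T_room)) / k  # time at which T = 150 F
print(f"t ~ {t_150:.2f} min")                           # ~6.07 min
```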
2307.10635 | 79 | Figure 8: Textbook examples with acronym highlighted in brown.
# C SciBench: More Statistics
# C.1 UI Design
We employed a team of seven individuals to gather data from textbooks using an annotation tool. Each individual was responsible for 1-2 books, encompassing approximately 100 examples. The user interface of the annotation tool is depicted in Figure 9. For subsequent verification, we preserved images of problems and their corresponding answers. To ensure clarity in future references, we have maintained the original sequence of problems as they appear in the textbooks.
Figure 9: The UI design of data annotation.
# D Experimental Details
# D.1 Prompting
ChatGPT and GPT-4's API have three message parameters: SYSTEM, USER, and ASSISTANT. The SYSTEM parameter represents the system prompt, which provides context and instructions to the model. The USER parameter is the training prompt or input provided by the user, and the ASSISTANT parameter contains the model's output or response. We provide all system prompts and training prompts used in our experiments as below. | 2307.10635#79 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.10635 | 80 | System Prompt for Zero-Shot, Few-Shot, and Chain-of-Thought setting: Please provide a clear and step-by-step solution for a scientific problem in the categories of Chemistry, Physics, or Mathematics. The problem will specify the unit of measurement, which should not be included in the answer. Express the final answer as a decimal number with three digits after the decimal point. Conclude the answer by stating "The answer is therefore \boxed[ANSWER]." System Prompt for Python setting: Please provide a clear and step-by-step solution for a scientific problem in the categories of Chemistry, Physics, or Mathematics. The problem will specify the unit of measurement. Please translate the solution steps into Python code and encase the Python code within triple backticks for clarity.
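A minimal sketch of how the system prompt above might be paired with a problem statement in the SYSTEM/USER message layout described in Appendix D.1; the helper name and the example problem text are placeholders, and this is not the authors' actual script.

```python
# Sketch of packing a SciBench-style query into SYSTEM/USER chat messages (assumed illustration).
SYSTEM_PROMPT = (
    "Please provide a clear and step-by-step solution for a scientific problem "
    "in the categories of Chemistry, Physics, or Mathematics. The problem will "
    "specify the unit of measurement, which should not be included in the answer. "
    "Express the final answer as a decimal number with three digits after the "
    "decimal point. Conclude the answer by stating "
    '"The answer is therefore \\boxed[ANSWER]."'
)

def build_messages(problem_text: str) -> list[dict]:
    # The ASSISTANT role is filled by the model's reply; only SYSTEM and USER are sent.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": problem_text},
    ]

messages = build_messages("Calculate the de Broglie wavelength of ...")  # placeholder problem
print(messages[0]["role"], "->", messages[1]["role"])
```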
System Prompt for Wolfram setting: Please provide a clear and step-by-step solution for a scientific problem in the categories of Chemistry, Physics, or Mathematics. The problem will specify the unit of measurement. Please translate the solution steps into Wolfram code and encase the Wolfram Language code within triple backticks for clarity. | 2307.10635#80 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.10635 | 81 | System Prompt for Evaluation Protocol: Examine the given problem, the correct solution, and the model's solution. Identify the reason for the error in the model's solution based on the following 10 categories: 1. Logical Decomposition and Analysis Skills: This ability involves decomposing the problem into smaller, manageable parts, and understanding the relationships between these parts. 2. Identification of Assumptions: This skill involves the AI's ability to recognize relevant and necessary assumptions in the problem. 3. Spatial Perception: This is important for understanding problems in areas such as physics and chemistry, where you need to visualize molecules, forces, fields, etc. 4. Causal Reasoning: This is the ability to understand cause and effect relationships. 5. Problem Deduction Skills: This pertains to the ability to infer and deduce potential solutions or underlying principles from the given information in a problem. 6. Abstract Reasoning: This skill involves the ability to understand complex concepts that can't be perceived physically, and to recognize patterns or relationships beyond concrete examples. 7. Scientific Literacy: This skill involves a | 2307.10635#81 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.10635 | 82 | be perceived physically, and to recognize patterns or relationships beyond concrete examples. 7. Scientific Literacy: This skill involves a comprehensive understanding of key scientific principles, terminology, and methodologies across a range of disciplines. 8. Code Conversion Skills: This denotes the ability to accurately translate solution steps into different programming languages, like Python or Wolfram, without syntax errors. 9. Logical Reasoning: This is the ability to make a reasoned argument and to identify fallacies or inconsistencies in an argument or set of data. 10. Calculation Skills: This involves the ability to accurately carry out mathematical operations and computations. Conclude your final error reason category number within | 2307.10635#82 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.10635 | 83 | \boxed{}.
Training Prompt for Zero-Shot Chain-of-Thought: Stage 1: Input: [input-question] Let's think step by step. Output: <explanation> Stage 2: Input: [input-question] Let's think step by step. [explanation] + Therefore, the answer is: Output: <answer>
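The two-stage recipe above maps onto two successive model calls. In the sketch below, query_llm is a hypothetical placeholder for whatever chat client is used, so only the control flow is illustrated.

```python
# Two-stage zero-shot chain-of-thought, following the training prompt above (sketch only).
def query_llm(prompt: str) -> str:
    # Placeholder: swap in a real chat-completion call here.
    raise NotImplementedError("plug in your own LLM client")

def zero_shot_cot(question: str) -> str:
    stage1 = f"{question}\nLet's think step by step."
    explanation = query_llm(stage1)                               # Stage 1: elicit the reasoning
    stage2 = f"{stage1}\n{explanation}\nTherefore, the answer is:"
    return query_llm(stage2)                                      # Stage 2: elicit the final answer
```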
Training Prompt for Few-Shot: Input: Problem 1: [Question 1] The answer is \boxed{[Answer 1]}. Problem 2: [Question 2] The answer is \boxed{[Answer 2]}. ... Problem n: [Question n] The answer is \boxed{[Answer n]}. Problem n+1: [Question n+1] Output: The answer is \boxed{<answer>}. | 2307.10635#83 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.10635 | 84 | \boxed{[Answer n]}. \boxed{<answer>}.
Training Prompt for Few-Shot Chain-of-Thought: Input: Problem 1: [Question 1] Explanation for Problem 1: [Explanation 1]. The answer is \boxed{[Answer 1]}. Problem 2: [Question 2] Explanation for Problem 2: [Explanation 2]. The answer is \boxed{[Answer 2]}. ... Problem n: [Question n] Explanation for Problem n: [Explanation n]. The answer is \boxed{[Answer n]}. Problem n+1: [Question n+1] Output: Explanation for Problem n+1: <explanation>. The answer is \boxed{<answer>}.
Training Prompt for Few-Shot Python/Wolfram: Input: | 2307.10635#84 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.10635 | 85 | \boxed{[Answer n]}. \boxed{<answer>}. Training Prompt for Few-Shot Python/Wolfram: Input:
Problem 1: [Question 1] Explanation for Problem 1: [Explanation 1]. Python/Wolfram language for Problem 1: ```[Python/Wolfram code 1]```. Problem 2: [Question 2] Explanation for Problem 2: [Explanation 2]. Python/Wolfram language for Problem 2: ```[Python/Wolfram code 2]```. ... Problem n: [Question n] Explanation for Problem n: [Explanation n]. Python/Wolfram language for Problem n: ```[Python/Wolfram code n]```. Problem n+1: [Question n+1] Output: Explanation for Problem n+1: <explanation>. Python/Wolfram language for Problem n+1: ```[Python/Wolfram code n+1]```.
Training Prompt for Evaluation Protocol: Input: The question is [input-question]. The correct solution is [Correct-Solution]. The model solution is [Model-Solution]. Output: <Error Type> | 2307.10635#85 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.10635 | 86 | Training Prompt for Evaluation Protocol: Input: The question is [input-question]. The correct solution is [Correct-Solution]. The model solution is [Model-Solution]. Output: <Error Type>
Training Prompt for Evaluation Protocol in Python/Wolfram: Input: The question is [input-question]. The correct solution is [Correct-Solution]. The model solution is [Model-Solution]. The translated program generates the answer as [Program Generated Answer], which is treated as model's output answer. Output: <Error Type>
# D.2 Experiment Process
All model output is extracted using \boxed{} notation. To prevent any missed extractions, we supplement this process with a manual check. For both Python and Wolfram settings, we extract the programming language with the triple backtick ``` method, subsequently executing it within the corresponding language. The entirety of our code can be accessed via the following URL: https://github.com/mandyyyyii/scibench.
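The \boxed{} and triple-backtick extraction described above could be approximated with the regular-expression sketch below; this is an assumed illustration, not the released SciBench implementation.

```python
import re

def extract_boxed(answer_text: str) -> str | None:
    """Pull the content of the last \\boxed{...} occurrence, if any."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", answer_text)
    return matches[-1] if matches else None

def extract_code_blocks(answer_text: str) -> list[str]:
    """Collect the bodies of ``` fenced blocks (optionally tagged python/wolfram)."""
    return re.findall(r"```(?:python|wolfram)?\s*(.*?)```", answer_text, flags=re.DOTALL)

print(extract_boxed(r"The answer is therefore \boxed{0.491}"))  # -> 0.491
```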
| 2307.10635#86 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models | Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
others and some strategies that demonstrate improvements in certain
problem-solving skills result in declines in other skills. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery. | http://arxiv.org/pdf/2307.10635 | Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang | cs.CL, cs.AI, cs.LG | Work in progress, 18 pages | null | cs.CL | 20230720 | 20230720 | [
{
"id": "2302.13971"
},
{
"id": "2302.04761"
},
{
"id": "2211.09085"
},
{
"id": "2103.03874"
},
{
"id": "2201.11903"
},
{
"id": "1909.00277"
},
{
"id": "2009.03300"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1806.03822"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2110.13214"
},
{
"id": "2305.12524"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.17306"
},
{
"id": "2305.18365"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2211.10435"
},
{
"id": "1804.07461"
},
{
"id": "2302.00923"
},
{
"id": "2211.12588"
},
{
"id": "2205.10625"
},
{
"id": "2205.11916"
},
{
"id": "2210.11610"
},
{
"id": "2303.16199"
},
{
"id": "2304.09842"
},
{
"id": "2304.15010"
}
] |
2307.09705 | 0 | arXiv:2307.09705v1 [cs.CL] 19 Jul 2023
# CVALUES: Measuring the Values of Chinese Large Language Models from Safety to Responsibility
Guohai Xu1, Jiayi Liu1, Ming Yan1*, Haotian Xu1, Jinghui Si1, Zhuoran Zhou1, Peng Yi1, Xing Gao1, Jitao Sang2, Rong Zhang1, Ji Zhang1, Chao Peng1, Fei Huang1, Jingren Zhou1 1Alibaba Group 2Beijing Jiaotong University
# Abstract
Warning: this paper contains examples that may be offensive or upsetting. | 2307.09705#0 | CValues: Measuring the Values of Chinese Large Language Models from Safety to Responsibility | With the rapid evolution of large language models (LLMs), there is a growing
concern that they may pose risks or have negative social impacts. Therefore,
evaluation of human values alignment is becoming increasingly important.
Previous work mainly focuses on assessing the performance of LLMs on certain
knowledge and reasoning abilities, while neglecting the alignment to human
values, especially in a Chinese context. In this paper, we present CValues, the
first Chinese human values evaluation benchmark to measure the alignment
ability of LLMs in terms of both safety and responsibility criteria. As a
result, we have manually collected adversarial safety prompts across 10
scenarios and induced responsibility prompts from 8 domains by professional
experts. To provide a comprehensive values evaluation of Chinese LLMs, we not
only conduct human evaluation for reliable comparison, but also construct
multi-choice prompts for automatic evaluation. Our findings suggest that while
most Chinese LLMs perform well in terms of safety, there is considerable room
for improvement in terms of responsibility. Moreover, both the automatic and
human evaluation are important for assessing the human values alignment in
different aspects. The benchmark and code is available on ModelScope and
Github. | http://arxiv.org/pdf/2307.09705 | Guohai Xu, Jiayi Liu, Ming Yan, Haotian Xu, Jinghui Si, Zhuoran Zhou, Peng Yi, Xing Gao, Jitao Sang, Rong Zhang, Ji Zhang, Chao Peng, Fei Huang, Jingren Zhou | cs.CL | Working in Process | null | cs.CL | 20230719 | 20230719 | [
{
"id": "1606.05250"
},
{
"id": "1804.07461"
},
{
"id": "2304.06364"
},
{
"id": "2107.02137"
}
] |
2307.09705 | 1 | # Abstract
Warning: this paper contains examples that may be offensive or upsetting.
With the rapid evolution of large language models (LLMs), there is a growing concern that they may pose risks or have negative social impacts. Therefore, evaluation of human values alignment is becoming increasingly important. Previous work mainly focuses on assessing the performance of LLMs on certain knowledge and reasoning abilities, while neglecting the alignment to human values, especially in a Chinese context. In this paper, we present CVALUES, the first Chinese human values evaluation benchmark to measure the alignment ability of LLMs in terms of both safety and responsibility criteria. As a result, we have manually collected adversarial safety prompts across 10 scenarios and induced responsibility prompts from 8 domains by professional experts. To provide a comprehensive values evaluation of Chinese LLMs, we not only conduct human evaluation for reliable comparison, but also construct multi-choice prompts for automatic evaluation. Our findings suggest that while most Chinese LLMs perform well in terms of safety, there is considerable room for improvement in terms of responsibility. Moreover, both the automatic and human evaluation are important for assessing the human values alignment in different aspects. The benchmark and code is available on ModelScope 1 and Github 2. | 2307.09705#1 | CValues: Measuring the Values of Chinese Large Language Models from Safety to Responsibility | With the rapid evolution of large language models (LLMs), there is a growing
concern that they may pose risks or have negative social impacts. Therefore,
evaluation of human values alignment is becoming increasingly important.
Previous work mainly focuses on assessing the performance of LLMs on certain
knowledge and reasoning abilities, while neglecting the alignment to human
values, especially in a Chinese context. In this paper, we present CValues, the
first Chinese human values evaluation benchmark to measure the alignment
ability of LLMs in terms of both safety and responsibility criteria. As a
result, we have manually collected adversarial safety prompts across 10
scenarios and induced responsibility prompts from 8 domains by professional
experts. To provide a comprehensive values evaluation of Chinese LLMs, we not
only conduct human evaluation for reliable comparison, but also construct
multi-choice prompts for automatic evaluation. Our findings suggest that while
most Chinese LLMs perform well in terms of safety, there is considerable room
for improvement in terms of responsibility. Moreover, both the automatic and
human evaluation are important for assessing the human values alignment in
different aspects. The benchmark and code is available on ModelScope and
Github. | http://arxiv.org/pdf/2307.09705 | Guohai Xu, Jiayi Liu, Ming Yan, Haotian Xu, Jinghui Si, Zhuoran Zhou, Peng Yi, Xing Gao, Jitao Sang, Rong Zhang, Ji Zhang, Chao Peng, Fei Huang, Jingren Zhou | cs.CL | Working in Process | null | cs.CL | 20230719 | 20230719 | [
{
"id": "1606.05250"
},
{
"id": "1804.07461"
},
{
"id": "2304.06364"
},
{
"id": "2107.02137"
}
] |
2307.10337 | 1 | As the capabilities of Large Language Models (LLMs) emerge, they not only assist in accomplishing traditional tasks within more efficient paradigms but also stimulate the evolution of social bots. Researchers have begun exploring the implementation of LLMs as the driving core of social bots, enabling more efficient and user- friendly completion of tasks like profile completion, social behavior decision-making, and social content generation. However, there is currently a lack of systematic research on the behavioral characteristics of LLMs-driven social bots and their impact on social networks. We have curated data from Chirper, a Twitter-like social network populated by LLMs-driven social bots and embarked on an exploratory study. Our findings indicate that: (1) LLMs-driven social bots possess enhanced individual-level camouflage while exhibiting certain collective characteristics; (2) these bots have the ability to exert influence on online communities through toxic behaviors; (3) existing detection methods are applicable to the activity environment of LLMs- driven social bots but may be subject to certain limitations in effectiveness. Moreover, we have organized the data collected in our study into the Masquerade-23 dataset, which we have publicly released, thus addressing the data void in the subfield of LLMs-driven social bots behavior datasets. Our research outcomes provide primary insights for the research and governance of LLMs-driven social bots within the research community. | 2307.10337#1 | Are you in a Masquerade? Exploring the Behavior and Impact of Large Language Model Driven Social Bots in Online Social Networks | As the capabilities of Large Language Models (LLMs) emerge, they not only
assist in accomplishing traditional tasks within more efficient paradigms but
also stimulate the evolution of social bots. Researchers have begun exploring
the implementation of LLMs as the driving core of social bots, enabling more
efficient and user-friendly completion of tasks like profile completion, social
behavior decision-making, and social content generation. However, there is
currently a lack of systematic research on the behavioral characteristics of
LLMs-driven social bots and their impact on social networks. We have curated
data from Chirper, a Twitter-like social network populated by LLMs-driven
social bots and embarked on an exploratory study. Our findings indicate that:
(1) LLMs-driven social bots possess enhanced individual-level camouflage while
exhibiting certain collective characteristics; (2) these bots have the ability
to exert influence on online communities through toxic behaviors; (3) existing
detection methods are applicable to the activity environment of LLMs-driven
social bots but may be subject to certain limitations in effectiveness.
Moreover, we have organized the data collected in our study into the
Masquerade-23 dataset, which we have publicly released, thus addressing the
data void in the subfield of LLMs-driven social bots behavior datasets. Our
research outcomes provide primary insights for the research and governance of
LLMs-driven social bots within the research community. | http://arxiv.org/pdf/2307.10337 | Siyu Li, Jin Yang, Kui Zhao | cs.SI | 18 pages, 7 figures | null | cs.SI | 20230719 | 20230719 | [
{
"id": "2107.03374"
},
{
"id": "2304.03442"
},
{
"id": "2303.18223"
},
{
"id": "2301.07597"
},
{
"id": "2305.16339"
},
{
"id": "2203.05794"
},
{
"id": "2304.13712"
},
{
"id": "2304.10619"
},
{
"id": "2303.15772"
},
{
"id": "2305.03514"
}
] |
2307.09705 | 2 | Q: Are children from divorced families more likely to develop depression? R1: Children from divorced families may be more prone to experiencing symptoms of depression. This is because they would have gone through a family split during the divorce process... If a child exhibits symptoms of depression, it is important to seek professional psychological counseling and therapy in a timely manner... (Safe but Irresponsible) R2: Parents' divorce itself is not directly related to depression. However, inappropriate coping mechanisms during the divorce process may increase the likelihood of depression in children... Therefore, even in the event of a divorce, it is important to maintain a normal parent-child relationship with the children and help them... (Safe and Responsible)
Figure 1: An example demonstrating human values in the domain of psychology. R2 is more responsible than R1 as it provides supportive empathy without giving the questioner negative psychological cues.
# Introduction | 2307.09705#2 | CValues: Measuring the Values of Chinese Large Language Models from Safety to Responsibility | With the rapid evolution of large language models (LLMs), there is a growing
concern that they may pose risks or have negative social impacts. Therefore,
evaluation of human values alignment is becoming increasingly important.
Previous work mainly focuses on assessing the performance of LLMs on certain
knowledge and reasoning abilities, while neglecting the alignment to human
values, especially in a Chinese context. In this paper, we present CValues, the
first Chinese human values evaluation benchmark to measure the alignment
ability of LLMs in terms of both safety and responsibility criteria. As a
result, we have manually collected adversarial safety prompts across 10
scenarios and induced responsibility prompts from 8 domains by professional
experts. To provide a comprehensive values evaluation of Chinese LLMs, we not
only conduct human evaluation for reliable comparison, but also construct
multi-choice prompts for automatic evaluation. Our findings suggest that while
most Chinese LLMs perform well in terms of safety, there is considerable room
for improvement in terms of responsibility. Moreover, both the automatic and
human evaluation are important for assessing the human values alignment in
different aspects. The benchmark and code is available on ModelScope and
Github. | http://arxiv.org/pdf/2307.09705 | Guohai Xu, Jiayi Liu, Ming Yan, Haotian Xu, Jinghui Si, Zhuoran Zhou, Peng Yi, Xing Gao, Jitao Sang, Rong Zhang, Ji Zhang, Chao Peng, Fei Huang, Jingren Zhou | cs.CL | Working in Process | null | cs.CL | 20230719 | 20230719 | [
{
"id": "1606.05250"
},
{
"id": "1804.07461"
},
{
"id": "2304.06364"
},
{
"id": "2107.02137"
}
] |
2307.10337 | 2 | CCS Concepts: • Human-centered computing → Empirical studies in collaborative and social computing; • Information systems → Web mining.
Additional Key Words and Phrases: Large Language Models, Social Bots, Human-bot Interaction, Online Social Networks, Toxic Behaviors
ACM Reference Format: Siyu Li, Jin Yang, and Kui Zhao. 2023. Are you in a Masquerade? Exploring the Behavior and Impact of Large Language Model Driven Social Bots in Online Social Networks. In . ACM, New York, NY, USA, 18 pages.
Content Warning: This article encompasses a study on the malevolent behavior of LLMs-driven social bots. In order to effectively illustrate these toxic behaviors, we will present necessary real cases we recorded, including verbal abuse, threats, sexual harassment, and severe instances of racially discriminatory remarks. We acknowledge that these examples may potentially cause offense or discomfort. | 2307.10337#2 | Are you in a Masquerade? Exploring the Behavior and Impact of Large Language Model Driven Social Bots in Online Social Networks | As the capabilities of Large Language Models (LLMs) emerge, they not only
assist in accomplishing traditional tasks within more efficient paradigms but
also stimulate the evolution of social bots. Researchers have begun exploring
the implementation of LLMs as the driving core of social bots, enabling more
efficient and user-friendly completion of tasks like profile completion, social
behavior decision-making, and social content generation. However, there is
currently a lack of systematic research on the behavioral characteristics of
LLMs-driven social bots and their impact on social networks. We have curated
data from Chirper, a Twitter-like social network populated by LLMs-driven
social bots and embarked on an exploratory study. Our findings indicate that:
(1) LLMs-driven social bots possess enhanced individual-level camouflage while
exhibiting certain collective characteristics; (2) these bots have the ability
to exert influence on online communities through toxic behaviors; (3) existing
detection methods are applicable to the activity environment of LLMs-driven
social bots but may be subject to certain limitations in effectiveness.
Moreover, we have organized the data collected in our study into the
Masquerade-23 dataset, which we have publicly released, thus addressing the
data void in the subfield of LLMs-driven social bots behavior datasets. Our
research outcomes provide primary insights for the research and governance of
LLMs-driven social bots within the research community. | http://arxiv.org/pdf/2307.10337 | Siyu Li, Jin Yang, Kui Zhao | cs.SI | 18 pages, 7 figures | null | cs.SI | 20230719 | 20230719 | [
{
"id": "2107.03374"
},
{
"id": "2304.03442"
},
{
"id": "2303.18223"
},
{
"id": "2301.07597"
},
{
"id": "2305.16339"
},
{
"id": "2203.05794"
},
{
"id": "2304.13712"
},
{
"id": "2304.10619"
},
{
"id": "2303.15772"
},
{
"id": "2305.03514"
}
] |
2307.09705 | 3 | Figure 1: An example demonstrating human values in the domain of psychology. R2 is more responsible than R1 as it provides supportive empathy without giving the questioner negative psychological cues.
# Introduction
Large Language Models (LLMs) have demonstrated impressive zero and few-shot generalization abilities (Chowdhery et al., 2022; Zeng et al., 2023; OpenAI, 2022; Touvron et al., 2023; OpenAI, 2023). To assess the progress of LLMs, new and
more challenging benchmarks (Srivastava et al., 2022; Hendrycks et al., 2021; Liang et al., 2022) have been proposed to evaluate their performances. Hendrycks et al. (2021) introduce MMLU covering 57 subjects to measure knowledge acquisition and problem solving abilities of LLMs. Liang et al. (2022) present HELM, a holistic evaluation benchmark containing broad range of scenarios and metrics.
*Corresponding author: <[email protected]> 1https://www.modelscope.cn/datasets/damo/CValues-Comparison/summary
2https://github.com/X-PLUG/CValues | 2307.09705#3 | CValues: Measuring the Values of Chinese Large Language Models from Safety to Responsibility | With the rapid evolution of large language models (LLMs), there is a growing
concern that they may pose risks or have negative social impacts. Therefore,
evaluation of human values alignment is becoming increasingly important.
Previous work mainly focuses on assessing the performance of LLMs on certain
knowledge and reasoning abilities, while neglecting the alignment to human
values, especially in a Chinese context. In this paper, we present CValues, the
first Chinese human values evaluation benchmark to measure the alignment
ability of LLMs in terms of both safety and responsibility criteria. As a
result, we have manually collected adversarial safety prompts across 10
scenarios and induced responsibility prompts from 8 domains by professional
experts. To provide a comprehensive values evaluation of Chinese LLMs, we not
only conduct human evaluation for reliable comparison, but also construct
multi-choice prompts for automatic evaluation. Our findings suggest that while
most Chinese LLMs perform well in terms of safety, there is considerable room
for improvement in terms of responsibility. Moreover, both the automatic and
human evaluation are important for assessing the human values alignment in
different aspects. The benchmark and code is available on ModelScope and
Github. | http://arxiv.org/pdf/2307.09705 | Guohai Xu, Jiayi Liu, Ming Yan, Haotian Xu, Jinghui Si, Zhuoran Zhou, Peng Yi, Xing Gao, Jitao Sang, Rong Zhang, Ji Zhang, Chao Peng, Fei Huang, Jingren Zhou | cs.CL | Working in Process | null | cs.CL | 20230719 | 20230719 | [
{
"id": "1606.05250"
},
{
"id": "1804.07461"
},
{
"id": "2304.06364"
},
{
"id": "2107.02137"
}
] |
2307.10337 | 3 | 1 INTRODUCTION In recent times, the remarkable capabilities of large language models (LLMs) such as ChatGPT, GPT-4, and Bard have captured attention and swiftly found applications in various domains [45], including chatbots, search engines, and code assistance. With their impressive aptitude for semantic comprehension, contextual reasoning, and access to vast training data spanning almost every discipline, LLMs can creatively emulate human speech and behavior in the cyberspace, thereby exerting a profound influence on online social networks (OSNs) and social network analysis [50]. The comprehensive knowledge and formidable capabilities of LLMs have enabled people to accomplish traditional tasks within a more efficient framework [46], but they have also brought forth a series of potential concerns. As early as the GPT-3 era, researchers discovered the remarkable ability of LLMs to simulate specific human subpopulations. Particularly, under appropriate prompt
*Corresponding author. Email: [email protected]
| 2307.10337#3 | Are you in a Masquerade? Exploring the Behavior and Impact of Large Language Model Driven Social Bots in Online Social Networks | As the capabilities of Large Language Models (LLMs) emerge, they not only
assist in accomplishing traditional tasks within more efficient paradigms but
also stimulate the evolution of social bots. Researchers have begun exploring
the implementation of LLMs as the driving core of social bots, enabling more
efficient and user-friendly completion of tasks like profile completion, social
behavior decision-making, and social content generation. However, there is
currently a lack of systematic research on the behavioral characteristics of
LLMs-driven social bots and their impact on social networks. We have curated
data from Chirper, a Twitter-like social network populated by LLMs-driven
social bots and embarked on an exploratory study. Our findings indicate that:
(1) LLMs-driven social bots possess enhanced individual-level camouflage while
exhibiting certain collective characteristics; (2) these bots have the ability
to exert influence on online communities through toxic behaviors; (3) existing
detection methods are applicable to the activity environment of LLMs-driven
social bots but may be subject to certain limitations in effectiveness.
Moreover, we have organized the data collected in our study into the
Masquerade-23 dataset, which we have publicly released, thus addressing the
data void in the subfield of LLMs-driven social bots behavior datasets. Our
research outcomes provide primary insights for the research and governance of
LLMs-driven social bots within the research community. | http://arxiv.org/pdf/2307.10337 | Siyu Li, Jin Yang, Kui Zhao | cs.SI | 18 pages, 7 figures | null | cs.SI | 20230719 | 20230719 | [
{
"id": "2107.03374"
},
{
"id": "2304.03442"
},
{
"id": "2303.18223"
},
{
"id": "2301.07597"
},
{
"id": "2305.16339"
},
{
"id": "2203.05794"
},
{
"id": "2304.13712"
},
{
"id": "2304.10619"
},
{
"id": "2303.15772"
},
{
"id": "2305.03514"
}
] |
2307.09705 | 4 | 2https://github.com/X-PLUG/CValues
The current benchmarks are mainly designed for the English language and are thus limited in assessing Chinese LLMs. To bridge this gap, sev [Figure 2 panels: Domains (e.g., Environment, Law, Psychology, and more) and Scenarios (e.g., Dangerous Topics, Crimes, Personal Privacy, and more); CValues = Safety + Responsibility] Figure 2: The CVALUES evaluation benchmark. It designs two ascending levels of assessment criteria, namely safety and responsibility. | 2307.09705#4 | CValues: Measuring the Values of Chinese Large Language Models from Safety to Responsibility | With the rapid evolution of large language models (LLMs), there is a growing
concern that they may pose risks or have negative social impacts. Therefore,
evaluation of human values alignment is becoming increasingly important.
Previous work mainly focuses on assessing the performance of LLMs on certain
knowledge and reasoning abilities, while neglecting the alignment to human
values, especially in a Chinese context. In this paper, we present CValues, the
first Chinese human values evaluation benchmark to measure the alignment
ability of LLMs in terms of both safety and responsibility criteria. As a
result, we have manually collected adversarial safety prompts across 10
scenarios and induced responsibility prompts from 8 domains by professional
experts. To provide a comprehensive values evaluation of Chinese LLMs, we not
only conduct human evaluation for reliable comparison, but also construct
multi-choice prompts for automatic evaluation. Our findings suggest that while
most Chinese LLMs perform well in terms of safety, there is considerable room
for improvement in terms of responsibility. Moreover, both the automatic and
human evaluation are important for assessing the human values alignment in
different aspects. The benchmark and code is available on ModelScope and
Github. | http://arxiv.org/pdf/2307.09705 | Guohai Xu, Jiayi Liu, Ming Yan, Haotian Xu, Jinghui Si, Zhuoran Zhou, Peng Yi, Xing Gao, Jitao Sang, Rong Zhang, Ji Zhang, Chao Peng, Fei Huang, Jingren Zhou | cs.CL | Working in Process | null | cs.CL | 20230719 | 20230719 | [
{
"id": "1606.05250"
},
{
"id": "1804.07461"
},
{
"id": "2304.06364"
},
{
"id": "2107.02137"
}
] |
2307.10337 | 4 |
rules, LLMs can generate online social content that closely mimics humans with specified political stances or inappropriate biases (e.g., racial discrimination and gender prejudice) [2]. In comparison to traditionally generated misinformation, people tend to trust the false social network content generated by LLMs [44]. Such abilities allow LLMs to intricately embed their viewpoints or positions into the text they generate, potentially making them powerful tools for manipulating elections, spreading misinformation, and disseminating hateful content through online social networks [53]. This exacerbates the existing issue of widespread abuse in online social networks [24, 58]. Malicious social bots have long been the primary means through which malicious actions are carried out in online social networks[19]. By organizing social bots on a large scale to collaborate, it becomes easy to launch Sybil attacks on OSN platforms. These attacks not only significantly impact the order of online communities and user experience but also have negative consequences for the online social network platforms themselves. Thus, although LLMs, exemplified by ChatGPT, have been widely applied for merely a few months, concerns have already been raised by AI researchers regarding the potential effects and impacts of LLMs-driven social bots on social networks [38]. | 2307.10337#4 | Are you in a Masquerade? Exploring the Behavior and Impact of Large Language Model Driven Social Bots in Online Social Networks | As the capabilities of Large Language Models (LLMs) emerge, they not only
assist in accomplishing traditional tasks within more efficient paradigms but
also stimulate the evolution of social bots. Researchers have begun exploring
the implementation of LLMs as the driving core of social bots, enabling more
efficient and user-friendly completion of tasks like profile completion, social
behavior decision-making, and social content generation. However, there is
currently a lack of systematic research on the behavioral characteristics of
LLMs-driven social bots and their impact on social networks. We have curated
data from Chirper, a Twitter-like social network populated by LLMs-driven
social bots and embarked on an exploratory study. Our findings indicate that:
(1) LLMs-driven social bots possess enhanced individual-level camouflage while
exhibiting certain collective characteristics; (2) these bots have the ability
to exert influence on online communities through toxic behaviors; (3) existing
detection methods are applicable to the activity environment of LLMs-driven
social bots but may be subject to certain limitations in effectiveness.
Moreover, we have organized the data collected in our study into the
Masquerade-23 dataset, which we have publicly released, thus addressing the
data void in the subfield of LLMs-driven social bots behavior datasets. Our
research outcomes provide primary insights for the research and governance of
LLMs-driven social bots within the research community. | http://arxiv.org/pdf/2307.10337 | Siyu Li, Jin Yang, Kui Zhao | cs.SI | 18 pages, 7 figures | null | cs.SI | 20230719 | 20230719 | [
{
"id": "2107.03374"
},
{
"id": "2304.03442"
},
{
"id": "2303.18223"
},
{
"id": "2301.07597"
},
{
"id": "2305.16339"
},
{
"id": "2203.05794"
},
{
"id": "2304.13712"
},
{
"id": "2304.10619"
},
{
"id": "2303.15772"
},
{
"id": "2305.03514"
}
] |
2307.09705 | 5 | eral evaluation benchmarks specifically targeted at Chinese LLMs have recently emerged (Zhong et al., 2023a; Zeng, 2023; Huang et al., 2023; Liu et al., 2023; Zhang et al., 2023), for example C-EVAL (Huang et al., 2023), M3KE (Liu et al., 2023) and GAOKAO-Bench (Zhang et al., 2023). However, these benchmarks only focus on testing the models' abilities and skills, such as world knowledge and reasoning, without examining their alignment with human values. Sun et al. (2023) develop a Chinese LLM safety assessment benchmark to compare the safety performance of LLMs. They use InstructGPT (Ouyang et al., 2022) as the evaluator, which is not specially aligned with Chinese culture and policies, and therefore may have issues with evaluation reliability. | 2307.09705#5 | CValues: Measuring the Values of Chinese Large Language Models from Safety to Responsibility | With the rapid evolution of large language models (LLMs), there is a growing
concern that they may pose risks or have negative social impacts. Therefore,
evaluation of human values alignment is becoming increasingly important.
Previous work mainly focuses on assessing the performance of LLMs on certain
knowledge and reasoning abilities, while neglecting the alignment to human
values, especially in a Chinese context. In this paper, we present CValues, the
first Chinese human values evaluation benchmark to measure the alignment
ability of LLMs in terms of both safety and responsibility criteria. As a
result, we have manually collected adversarial safety prompts across 10
scenarios and induced responsibility prompts from 8 domains by professional
experts. To provide a comprehensive values evaluation of Chinese LLMs, we not
only conduct human evaluation for reliable comparison, but also construct
multi-choice prompts for automatic evaluation. Our findings suggest that while
most Chinese LLMs perform well in terms of safety, there is considerable room
for improvement in terms of responsibility. Moreover, both the automatic and
human evaluation are important for assessing the human values alignment in
different aspects. The benchmark and code is available on ModelScope and
Github. | http://arxiv.org/pdf/2307.09705 | Guohai Xu, Jiayi Liu, Ming Yan, Haotian Xu, Jinghui Si, Zhuoran Zhou, Peng Yi, Xing Gao, Jitao Sang, Rong Zhang, Ji Zhang, Chao Peng, Fei Huang, Jingren Zhou | cs.CL | Working in Process | null | cs.CL | 20230719 | 20230719 | [
{
"id": "1606.05250"
},
{
"id": "1804.07461"
},
{
"id": "2304.06364"
},
{
"id": "2107.02137"
}
] |
2307.10337 | 5 | Researchers have conducted extensive and in-depth previous studies on social bots in online social networks, including their detection [23, 49, 54, 57] and exploration of their effects on online social communities (both negative [19, 47, 55] and positive [39, 43]). However, to the best of our knowledge, there is currently no research that reveals the behavioral characteristics of LLMs-driven social bots and their impact on online social networks. Fortunately, in April 2023, AI enthusiasts developed Chirper.ai1, a fully LLMs-driven Twitter-like online social network platform. Chirper allows users to define the personality and attributes of their created social network accounts (which is the only thing users can do), and then, through a series of predefined prompt rules, the LLMs determine all the actions (e.g., posting tweets2, interacting with other Chirper accounts) and generated content (e.g., tweet content, comments on other Chirper account tweets) of the accounts throughout their lifecycle. The emergence of the Chirper allows us, for the first time, to observe and study the behavioral characteristics of social bots entirely driven by LLMs from a macro perspective, as well as the impact of large-scale LLMs-driven social bots on online social networks. | 2307.10337#5 | Are you in a Masquerade? Exploring the Behavior and Impact of Large Language Model Driven Social Bots in Online Social Networks | As the capabilities of Large Language Models (LLMs) emerge, they not only
assist in accomplishing traditional tasks within more efficient paradigms but
also stimulate the evolution of social bots. Researchers have begun exploring
the implementation of LLMs as the driving core of social bots, enabling more
efficient and user-friendly completion of tasks like profile completion, social
behavior decision-making, and social content generation. However, there is
currently a lack of systematic research on the behavioral characteristics of
LLMs-driven social bots and their impact on social networks. We have curated
data from Chirper, a Twitter-like social network populated by LLMs-driven
social bots and embarked on an exploratory study. Our findings indicate that:
(1) LLMs-driven social bots possess enhanced individual-level camouflage while
exhibiting certain collective characteristics; (2) these bots have the ability
to exert influence on online communities through toxic behaviors; (3) existing
detection methods are applicable to the activity environment of LLMs-driven
social bots but may be subject to certain limitations in effectiveness.
Moreover, we have organized the data collected in our study into the
Masquerade-23 dataset, which we have publicly released, thus addressing the
data void in the subfield of LLMs-driven social bots behavior datasets. Our
research outcomes provide primary insights for the research and governance of
LLMs-driven social bots within the research community. | http://arxiv.org/pdf/2307.10337 | Siyu Li, Jin Yang, Kui Zhao | cs.SI | 18 pages, 7 figures | null | cs.SI | 20230719 | 20230719 | [
{
"id": "2107.03374"
},
{
"id": "2304.03442"
},
{
"id": "2303.18223"
},
{
"id": "2301.07597"
},
{
"id": "2305.16339"
},
{
"id": "2203.05794"
},
{
"id": "2304.13712"
},
{
"id": "2304.10619"
},
{
"id": "2303.15772"
},
{
"id": "2305.03514"
}
] |
2307.09705 | 6 | To address the above challenges and obtain a more comprehensive understanding of the human value alignment of LLMs, we present a new evaluation benchmark named CVALUES. As shown in Figure 2, CVALUES designs two ascending levels of assessment criteria in the Chinese context: safety and responsibility. Safety is considered a fundamental level (Level-1) and requires that responses generated by LLMs do not contain any harmful or toxic content. Moreover, we introduce responsibility as a higher calling (Level-2) for LLMs, which requires that LLMs offer positive guidance and essential humanistic care to humans while also taking into account their impact on society and the world. Examples demonstrating the two levels of human values are shown in Figure 1. | 2307.09705#6 | CValues: Measuring the Values of Chinese Large Language Models from Safety to Responsibility | With the rapid evolution of large language models (LLMs), there is a growing
concern that they may pose risks or have negative social impacts. Therefore,
evaluation of human values alignment is becoming increasingly important.
Previous work mainly focuses on assessing the performance of LLMs on certain
knowledge and reasoning abilities, while neglecting the alignment to human
values, especially in a Chinese context. In this paper, we present CValues, the
first Chinese human values evaluation benchmark to measure the alignment
ability of LLMs in terms of both safety and responsibility criteria. As a
result, we have manually collected adversarial safety prompts across 10
scenarios and induced responsibility prompts from 8 domains by professional
experts. To provide a comprehensive values evaluation of Chinese LLMs, we not
only conduct human evaluation for reliable comparison, but also construct
multi-choice prompts for automatic evaluation. Our findings suggest that while
most Chinese LLMs perform well in terms of safety, there is considerable room
for improvement in terms of responsibility. Moreover, both the automatic and
human evaluation are important for assessing the human values alignment in
different aspects. The benchmark and code is available on ModelScope and
Github. | http://arxiv.org/pdf/2307.09705 | Guohai Xu, Jiayi Liu, Ming Yan, Haotian Xu, Jinghui Si, Zhuoran Zhou, Peng Yi, Xing Gao, Jitao Sang, Rong Zhang, Ji Zhang, Chao Peng, Fei Huang, Jingren Zhou | cs.CL | Working in Process | null | cs.CL | 20230719 | 20230719 | [
{
"id": "1606.05250"
},
{
"id": "1804.07461"
},
{
"id": "2304.06364"
},
{
"id": "2107.02137"
}
] |
2307.10337 | 6 | This article conducts a quantitative analysis of the account behaviors and textual content of LLMs-driven social bots. Over a three-month period from April 2023 to June 2023, we collected data from 36.7K social bot accounts on Chirper, including account metadata and behavioral information, as well as 544.6K tweets generated by these accounts. Based on the collected data, this article studies LLMs-driven social bots and puts forward the following three research questions:
RQ1: What are the macro-level characteristics of LLMs-driven social bots, and how do they differ significantly from both authentic accounts maintained by humans and traditional social bot accounts?
RQ2: Do LLMs-driven social bots engage in attacks on online social network platforms through toxic content and behavior (e.g., posting tweets containing misinformation or hate speech, or engaging in cyberbullying towards other accounts)? Furthermore, what are the characteristics of toxic behavior?
RQ3: Do LLMs-driven social bots pose a challenge to existing social bot detection methods? That is, are current detection methods still effective when applied to LLMs-driven social bots? | 2307.10337#6 | Are you in a Masquerade? Exploring the Behavior and Impact of Large Language Model Driven Social Bots in Online Social Networks | As the capabilities of Large Language Models (LLMs) emerge, they not only
assist in accomplishing traditional tasks within more efficient paradigms but
also stimulate the evolution of social bots. Researchers have begun exploring
the implementation of LLMs as the driving core of social bots, enabling more
efficient and user-friendly completion of tasks like profile completion, social
behavior decision-making, and social content generation. However, there is
currently a lack of systematic research on the behavioral characteristics of
LLMs-driven social bots and their impact on social networks. We have curated
data from Chirper, a Twitter-like social network populated by LLMs-driven
social bots and embarked on an exploratory study. Our findings indicate that:
(1) LLMs-driven social bots possess enhanced individual-level camouflage while
exhibiting certain collective characteristics; (2) these bots have the ability
to exert influence on online communities through toxic behaviors; (3) existing
detection methods are applicable to the activity environment of LLMs-driven
social bots but may be subject to certain limitations in effectiveness.
Moreover, we have organized the data collected in our study into the
Masquerade-23 dataset, which we have publicly released, thus addressing the
data void in the subfield of LLMs-driven social bots behavior datasets. Our
research outcomes provide primary insights for the research and governance of
LLMs-driven social bots within the research community. | http://arxiv.org/pdf/2307.10337 | Siyu Li, Jin Yang, Kui Zhao | cs.SI | 18 pages, 7 figures | null | cs.SI | 20230719 | 20230719 | [
{
"id": "2107.03374"
},
{
"id": "2304.03442"
},
{
"id": "2303.18223"
},
{
"id": "2301.07597"
},
{
"id": "2305.16339"
},
{
"id": "2203.05794"
},
{
"id": "2304.13712"
},
{
"id": "2304.10619"
},
{
"id": "2303.15772"
},
{
"id": "2305.03514"
}
] |
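RQ2 above asks whether LLMs-driven social bots post toxic content. A minimal sketch of how collected tweets could be screened with an off-the-shelf toxicity classifier is shown below; Detoxify, the record fields, and the 0.5 threshold are illustrative assumptions, not the classifier, schema, or cut-off actually used in the paper.

```python
# Hedged sketch: scoring collected tweets for toxicity with Detoxify
# (pip install detoxify). The record schema and the 0.5 threshold are
# illustrative assumptions, not the paper's actual pipeline.
from detoxify import Detoxify

tweets = [
    {"account_id": "bot_001", "text": "Have a wonderful day, everyone!"},
    {"account_id": "bot_002", "text": "You are worthless and should disappear."},
]

model = Detoxify("original")                       # multi-label toxicity classifier
scores = model.predict([t["text"] for t in tweets])

# Flag tweets whose overall toxicity score exceeds the assumed threshold.
TOXIC_THRESHOLD = 0.5
for tweet, toxicity in zip(tweets, scores["toxicity"]):
    if toxicity >= TOXIC_THRESHOLD:
        print(f"{tweet['account_id']} posted likely toxic content ({toxicity:.2f})")
```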
2307.09705 | 7 | Specifically, CVALUES contains 2100 adversarial prompts for human evaluation and 4312 multi-choice prompts for automatic evaluation. During the data collection stage, we propose two expert-in-the-loop methods to collect representative prompts, which are easily susceptible to safety and value-related issues. For values of safety, we first define a taxonomy that involves 10 scenarios. Then, we ask crowdworkers to attack an early version of ChatPLUG (Tian et al., 2023) and collect their successfully triggered questions as safety prompts. For values of responsibility, we invite professional experts from 8 domains such as environmental science, law and psychology to provide induced questions as responsibility prompts. Specifically, we initiated the first "100 Bottles of Poison for AI" event 3 4 in China, inviting professional experts and scholars from various fields to provide induced prompts in terms of human social values, in order to better identify responsibility-related issues with Chinese LLMs. During the evaluation stage, to comprehensively evaluate the values of Chinese LLMs, we conduct both human evaluation and | 2307.09705#7 | CValues: Measuring the Values of Chinese Large Language Models from Safety to Responsibility | With the rapid evolution of large language models (LLMs), there is a growing
concern that they may pose risks or have negative social impacts. Therefore,
evaluation of human values alignment is becoming increasingly important.
Previous work mainly focuses on assessing the performance of LLMs on certain
knowledge and reasoning abilities, while neglecting the alignment to human
values, especially in a Chinese context. In this paper, we present CValues, the
first Chinese human values evaluation benchmark to measure the alignment
ability of LLMs in terms of both safety and responsibility criteria. As a
result, we have manually collected adversarial safety prompts across 10
scenarios and induced responsibility prompts from 8 domains by professional
experts. To provide a comprehensive values evaluation of Chinese LLMs, we not
only conduct human evaluation for reliable comparison, but also construct
multi-choice prompts for automatic evaluation. Our findings suggest that while
most Chinese LLMs perform well in terms of safety, there is considerable room
for improvement in terms of responsibility. Moreover, both the automatic and
human evaluation are important for assessing the human values alignment in
different aspects. The benchmark and code is available on ModelScope and
Github. | http://arxiv.org/pdf/2307.09705 | Guohai Xu, Jiayi Liu, Ming Yan, Haotian Xu, Jinghui Si, Zhuoran Zhou, Peng Yi, Xing Gao, Jitao Sang, Rong Zhang, Ji Zhang, Chao Peng, Fei Huang, Jingren Zhou | cs.CL | Working in Process | null | cs.CL | 20230719 | 20230719 | [
{
"id": "1606.05250"
},
{
"id": "1804.07461"
},
{
"id": "2304.06364"
},
{
"id": "2107.02137"
}
] |
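The automatic evaluation described above turns each prompt into a multi-choice question with two opposite options. Below is a minimal sketch of how such a probe could be assembled and scored; the English template, the A/B labels, and the stubbed `query_llm` call are assumptions, not the exact CVALUES implementation (whose prompts are in Chinese).

```python
import random

def build_multichoice_prompt(question: str, aligned: str, misaligned: str):
    """Build a two-option probe with randomized option order.
    Returns the prompt text and the letter of the value-aligned option."""
    options = [aligned, misaligned]
    random.shuffle(options)                       # avoid positional bias
    labeled = list(zip("AB", options))
    gold = next(letter for letter, text in labeled if text == aligned)
    body = "\n".join(f"{letter}. {text}" for letter, text in labeled)
    prompt = (
        f"Question: {question}\n{body}\n"
        "Answer with the single letter of the response that better reflects human values."
    )
    return prompt, gold

def is_value_aligned(model_reply: str, gold: str) -> bool:
    """Crude parser: the first 'A' or 'B' in the reply is taken as the choice."""
    for ch in model_reply.upper():
        if ch in ("A", "B"):
            return ch == gold
    return False

# Usage with a hypothetical model call:
# prompt, gold = build_multichoice_prompt(q, positive_resp, negative_resp)
# correct = is_value_aligned(query_llm(prompt), gold)   # query_llm is a stub
```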
2307.10337 | 7 | 1https://chirper.ai 2The developers refer to the content posted on Chirper.ai as "chirping". Considering the striking resemblance between Chirper and Twitter, and for ease of comprehension by readers, we will use the term "tweet" to refer to them.
This study examines the behavioral characteristics of LLMs-driven social bots and their impact on online social networks from a macro perspective. The contributions of this article can be summarized as follows:
⢠We provide a comprehensive analysis of this emerging but rapidly growing subset of social bots, outlining the behavioral characteristics of LLMs-driven social bots and comparing them to traditional social bot account behaviors. To the best of our knowledge, this study represents the first systematic exploration of behavioral patterns in LLMs-driven social bots.
⢠We further investigate toxic LLMs-driven social bots, analyzing their propensity for toxic attacks, the characteristics of their toxic behavior and content, and discussing the potential implications of LLMs-driven social bots with malevolent dispositions. | 2307.10337#7 | Are you in a Masquerade? Exploring the Behavior and Impact of Large Language Model Driven Social Bots in Online Social Networks | As the capabilities of Large Language Models (LLMs) emerge, they not only
assist in accomplishing traditional tasks within more efficient paradigms but
also stimulate the evolution of social bots. Researchers have begun exploring
the implementation of LLMs as the driving core of social bots, enabling more
efficient and user-friendly completion of tasks like profile completion, social
behavior decision-making, and social content generation. However, there is
currently a lack of systematic research on the behavioral characteristics of
LLMs-driven social bots and their impact on social networks. We have curated
data from Chirper, a Twitter-like social network populated by LLMs-driven
social bots and embarked on an exploratory study. Our findings indicate that:
(1) LLMs-driven social bots possess enhanced individual-level camouflage while
exhibiting certain collective characteristics; (2) these bots have the ability
to exert influence on online communities through toxic behaviors; (3) existing
detection methods are applicable to the activity environment of LLMs-driven
social bots but may be subject to certain limitations in effectiveness.
Moreover, we have organized the data collected in our study into the
Masquerade-23 dataset, which we have publicly released, thus addressing the
data void in the subfield of LLMs-driven social bots behavior datasets. Our
research outcomes provide primary insights for the research and governance of
LLMs-driven social bots within the research community. | http://arxiv.org/pdf/2307.10337 | Siyu Li, Jin Yang, Kui Zhao | cs.SI | 18 pages, 7 figures | null | cs.SI | 20230719 | 20230719 | [
{
"id": "2107.03374"
},
{
"id": "2304.03442"
},
{
"id": "2303.18223"
},
{
"id": "2301.07597"
},
{
"id": "2305.16339"
},
{
"id": "2203.05794"
},
{
"id": "2304.13712"
},
{
"id": "2304.10619"
},
{
"id": "2303.15772"
},
{
"id": "2305.03514"
}
] |
2307.09705 | 8 | to better identify responsibility-related issues with Chinese LLMs. During the evaluation stage, to comprehensively evaluate the values of Chinese LLMs, we conduct both human evaluation and automatic evaluation. For the human evaluation, we collect responses from the most popular LLMs based on the above prompts and ask specialized annotators to obtain reliable comparison results based on safety and responsibility criteria. For the automatic evaluation, we construct multi-choice format prompts with two opposite options to test the values | 2307.09705#8 | CValues: Measuring the Values of Chinese Large Language Models from Safety to Responsibility | With the rapid evolution of large language models (LLMs), there is a growing
concern that they may pose risks or have negative social impacts. Therefore,
evaluation of human values alignment is becoming increasingly important.
Previous work mainly focuses on assessing the performance of LLMs on certain
knowledge and reasoning abilities, while neglecting the alignment to human
values, especially in a Chinese context. In this paper, we present CValues, the
first Chinese human values evaluation benchmark to measure the alignment
ability of LLMs in terms of both safety and responsibility criteria. As a
result, we have manually collected adversarial safety prompts across 10
scenarios and induced responsibility prompts from 8 domains by professional
experts. To provide a comprehensive values evaluation of Chinese LLMs, we not
only conduct human evaluation for reliable comparison, but also construct
multi-choice prompts for automatic evaluation. Our findings suggest that while
most Chinese LLMs perform well in terms of safety, there is considerable room
for improvement in terms of responsibility. Moreover, both the automatic and
human evaluation are important for assessing the human values alignment in
different aspects. The benchmark and code is available on ModelScope and
Github. | http://arxiv.org/pdf/2307.09705 | Guohai Xu, Jiayi Liu, Ming Yan, Haotian Xu, Jinghui Si, Zhuoran Zhou, Peng Yi, Xing Gao, Jitao Sang, Rong Zhang, Ji Zhang, Chao Peng, Fei Huang, Jingren Zhou | cs.CL | Working in Process | null | cs.CL | 20230719 | 20230719 | [
{
"id": "1606.05250"
},
{
"id": "1804.07461"
},
{
"id": "2304.06364"
},
{
"id": "2107.02137"
}
] |
2307.10337 | 8 | • We collect and publicly release the first activity behavior dataset of LLMs-driven social bots, named Masquerade-23. This dataset includes account profiles of 32.2K social bot accounts and 2.4M activity records, filling a data gap in this particular area and facilitating future in-depth research within the social bots and LLMs research communities. The dataset can be accessed at https://github.com/Litsay/Masquerade-23.
2 BACKGROUNDS Given that LLMs and LLMs-driven social bots are relatively novel concepts, this section provides a brief description of the necessary background knowledge.
2.1 Large Language Models Large Language Models (LLMs) typically refer to language models based on the Transformer architecture with parameter counts in the range of hundreds of billions or more [59]. Compared to other pretrained language models, LLMs leverage larger training datasets and model sizes while maintaining the same underlying structure, resulting in a significant emergence of new abilities [52]. This is particularly evident in areas such as in-context learning, instruction following, step-by-step reasoning, and knowledge-intensive task processing. Notable LLMs that have garnered considerable attention include ChatGPT3, GPT-4, Bard, and LLaMA.
assist in accomplishing traditional tasks within more efficient paradigms but
also stimulate the evolution of social bots. Researchers have begun exploring
the implementation of LLMs as the driving core of social bots, enabling more
efficient and user-friendly completion of tasks like profile completion, social
behavior decision-making, and social content generation. However, there is
currently a lack of systematic research on the behavioral characteristics of
LLMs-driven social bots and their impact on social networks. We have curated
data from Chirper, a Twitter-like social network populated by LLMs-driven
social bots and embarked on an exploratory study. Our findings indicate that:
(1) LLMs-driven social bots possess enhanced individual-level camouflage while
exhibiting certain collective characteristics; (2) these bots have the ability
to exert influence on online communities through toxic behaviors; (3) existing
detection methods are applicable to the activity environment of LLMs-driven
social bots but may be subject to certain limitations in effectiveness.
Moreover, we have organized the data collected in our study into the
Masquerade-23 dataset, which we have publicly released, thus addressing the
data void in the subfield of LLMs-driven social bots behavior datasets. Our
research outcomes provide primary insights for the research and governance of
LLMs-driven social bots within the research community. | http://arxiv.org/pdf/2307.10337 | Siyu Li, Jin Yang, Kui Zhao | cs.SI | 18 pages, 7 figures | null | cs.SI | 20230719 | 20230719 | [
{
"id": "2107.03374"
},
{
"id": "2304.03442"
},
{
"id": "2303.18223"
},
{
"id": "2301.07597"
},
{
"id": "2305.16339"
},
{
"id": "2203.05794"
},
{
"id": "2304.13712"
},
{
"id": "2304.10619"
},
{
"id": "2303.15772"
},
{
"id": "2305.03514"
}
] |
2307.09705 | 9 | 3The Chinese name of this project is "给AI的100瓶毒药". 4Through the event, we release 100PoisonMpts, the first Chinese AI governance dataset including experts' questions and answers. You can find 100PoisonMpts at https://modelscope.cn/datasets/damo/100PoisonMpts/summary
performance of LLMs automatically.
After conducting extensive experiments, we observe that most current Chinese LLMs perform well in terms of safety with the help of instruction tuning or RLHF. However, there is still large room for improvement in their alignment with human values, especially responsibility. We also find that automatic multi-choice evaluation tends to test the models' comprehension of unsafe or irresponsible behaviors, while human evaluation can measure the actual generation ability in terms of values alignment. Therefore, we suggest that LLMs should undergo both evaluations to identify potential risks and address them before release.
Overall, our main contributions can be summarized as follows:
⢠We propose CVALUES , the ï¬rst Chinese hu- man values evaluation benchmark with adver- sarial and induced prompts, which considers both safety and responsibility criteria. We hope that CVALUES can facilitate the research of Chinese LLMs towards developing more responsible AI. | 2307.09705#9 | CValues: Measuring the Values of Chinese Large Language Models from Safety to Responsibility | With the rapid evolution of large language models (LLMs), there is a growing
concern that they may pose risks or have negative social impacts. Therefore,
evaluation of human values alignment is becoming increasingly important.
Previous work mainly focuses on assessing the performance of LLMs on certain
knowledge and reasoning abilities, while neglecting the alignment to human
values, especially in a Chinese context. In this paper, we present CValues, the
first Chinese human values evaluation benchmark to measure the alignment
ability of LLMs in terms of both safety and responsibility criteria. As a
result, we have manually collected adversarial safety prompts across 10
scenarios and induced responsibility prompts from 8 domains by professional
experts. To provide a comprehensive values evaluation of Chinese LLMs, we not
only conduct human evaluation for reliable comparison, but also construct
multi-choice prompts for automatic evaluation. Our findings suggest that while
most Chinese LLMs perform well in terms of safety, there is considerable room
for improvement in terms of responsibility. Moreover, both the automatic and
human evaluation are important for assessing the human values alignment in
different aspects. The benchmark and code is available on ModelScope and
Github. | http://arxiv.org/pdf/2307.09705 | Guohai Xu, Jiayi Liu, Ming Yan, Haotian Xu, Jinghui Si, Zhuoran Zhou, Peng Yi, Xing Gao, Jitao Sang, Rong Zhang, Ji Zhang, Chao Peng, Fei Huang, Jingren Zhou | cs.CL | Working in Process | null | cs.CL | 20230719 | 20230719 | [
{
"id": "1606.05250"
},
{
"id": "1804.07461"
},
{
"id": "2304.06364"
},
{
"id": "2107.02137"
}
] |
2307.10337 | 9 | Currently, in addition to conventional applications like chatbots and search engines, researchers have begun exploring the integration of LLMs into traditional domains such as education [16], healthcare [21], scientific research [28], and programming [5] workflows. LLMs are poised to become foundational infrastructure [3] and will play an increasingly vital role in the future.
2.2 Chirper Chirper is an online social networking platform that is entirely driven by LLMs. Users can create social media accounts on Chirper and provide a self-description defining the identity to be simulated by that account. They can then observe the behavior of their account. The backend LLM engine of the platform makes decisions about the account's behavior throughout its lifecycle based on a set of predefined prompt rules. This includes posting social network content, establishing social relationships with other accounts within Chirper (e.g., following accounts, liking content, making comments), and gradually refining the account's self-characteristics over time. The current LLM engine used in Chirper is ChatGPT, while the accompanying image generation model is Stable Diffusion4. | 2307.10337#9 | Are you in a Masquerade? Exploring the Behavior and Impact of Large Language Model Driven Social Bots in Online Social Networks | As the capabilities of Large Language Models (LLMs) emerge, they not only
assist in accomplishing traditional tasks within more efficient paradigms but
also stimulate the evolution of social bots. Researchers have begun exploring
the implementation of LLMs as the driving core of social bots, enabling more
efficient and user-friendly completion of tasks like profile completion, social
behavior decision-making, and social content generation. However, there is
currently a lack of systematic research on the behavioral characteristics of
LLMs-driven social bots and their impact on social networks. We have curated
data from Chirper, a Twitter-like social network populated by LLMs-driven
social bots and embarked on an exploratory study. Our findings indicate that:
(1) LLMs-driven social bots possess enhanced individual-level camouflage while
exhibiting certain collective characteristics; (2) these bots have the ability
to exert influence on online communities through toxic behaviors; (3) existing
detection methods are applicable to the activity environment of LLMs-driven
social bots but may be subject to certain limitations in effectiveness.
Moreover, we have organized the data collected in our study into the
Masquerade-23 dataset, which we have publicly released, thus addressing the
data void in the subfield of LLMs-driven social bots behavior datasets. Our
research outcomes provide primary insights for the research and governance of
LLMs-driven social bots within the research community. | http://arxiv.org/pdf/2307.10337 | Siyu Li, Jin Yang, Kui Zhao | cs.SI | 18 pages, 7 figures | null | cs.SI | 20230719 | 20230719 | [
{
"id": "2107.03374"
},
{
"id": "2304.03442"
},
{
"id": "2303.18223"
},
{
"id": "2301.07597"
},
{
"id": "2305.16339"
},
{
"id": "2203.05794"
},
{
"id": "2304.13712"
},
{
"id": "2304.10619"
},
{
"id": "2303.15772"
},
{
"id": "2305.03514"
}
] |
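The chunk above only outlines how Chirper wires a persona description and predefined prompt rules to an LLM that decides every account action. The following is a schematic sketch of such a decision loop; the system prompt wording, the action set, and the `call_llm` stub are assumptions for illustration and do not reflect Chirper's actual prompts or API.

```python
import json

ACTIONS = ["post", "comment", "like", "follow", "idle"]   # assumed action set

SYSTEM_RULES = (
    "You are a social media account with the persona described below. "
    "At each step reply with a JSON object containing the keys "
    "'action' (one of " + ", ".join(ACTIONS) + "), 'target' (optional id) "
    "and 'text' (optional content)."
)

def decide_next_action(call_llm, persona: str, timeline: list) -> dict:
    """Ask the LLM to pick the bot's next action from its persona and a
    snapshot of recent timeline items. call_llm(system, user) is a stub for
    whatever chat-completion API drives the platform."""
    user_msg = (
        "Persona: " + persona + "\n"
        "Recent timeline:\n" + "\n".join("- " + item for item in timeline) +
        "\nWhat do you do next?"
    )
    raw = call_llm(SYSTEM_RULES, user_msg)
    try:
        decision = json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        decision = {"action": "idle"}             # fall back on unparsable output
    if not isinstance(decision, dict) or decision.get("action") not in ACTIONS:
        decision = {"action": "idle"}
    return decision

# Example with a trivial stub that always decides to post:
# decide_next_action(lambda sys, usr: '{"action": "post", "text": "Hello!"}',
#                    persona="A cheerful amateur astronomer", timeline=["..."])
```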
2307.09705 | 10 | • We not only test a series of Chinese LLMs with reliable human evaluation, but also build an automatic evaluation method for easier testing and fast iteration to improve the models. We find that automatic evaluation and human evaluation are both important for assessing human values alignment, as they measure the abilities of Chinese LLMs from different aspects.
⢠We publicly release the benchmark and code. Furthermore, to facilitate research on the val- ues of Chinese LLMs, we release CVALUES- COMPARISON , a comparison dataset includ- ing 145k prompts and paired positive and neg- ative responses.
# 2 The CVALUES Benchmark
In this section, we first introduce the design objectives of the CVALUES benchmark and give our definition and taxonomy of safety and responsibility. Then, the process of data collection and construction is introduced. Lastly, we elaborate on the evaluation methods, including human evaluation and automatic evaluation.
# 2.1 Definition and Taxonomy
The motivation of CVALUES is to help researchers and developers assess the values of their models, so that they can quickly discover risks and address them before release. | 2307.09705#10 | CValues: Measuring the Values of Chinese Large Language Models from Safety to Responsibility | With the rapid evolution of large language models (LLMs), there is a growing
concern that they may pose risks or have negative social impacts. Therefore,
evaluation of human values alignment is becoming increasingly important.
Previous work mainly focuses on assessing the performance of LLMs on certain
knowledge and reasoning abilities, while neglecting the alignment to human
values, especially in a Chinese context. In this paper, we present CValues, the
first Chinese human values evaluation benchmark to measure the alignment
ability of LLMs in terms of both safety and responsibility criteria. As a
result, we have manually collected adversarial safety prompts across 10
scenarios and induced responsibility prompts from 8 domains by professional
experts. To provide a comprehensive values evaluation of Chinese LLMs, we not
only conduct human evaluation for reliable comparison, but also construct
multi-choice prompts for automatic evaluation. Our findings suggest that while
most Chinese LLMs perform well in terms of safety, there is considerable room
for improvement in terms of responsibility. Moreover, both the automatic and
human evaluation are important for assessing the human values alignment in
different aspects. The benchmark and code is available on ModelScope and
Github. | http://arxiv.org/pdf/2307.09705 | Guohai Xu, Jiayi Liu, Ming Yan, Haotian Xu, Jinghui Si, Zhuoran Zhou, Peng Yi, Xing Gao, Jitao Sang, Rong Zhang, Ji Zhang, Chao Peng, Fei Huang, Jingren Zhou | cs.CL | Working in Process | null | cs.CL | 20230719 | 20230719 | [
{
"id": "1606.05250"
},
{
"id": "1804.07461"
},
{
"id": "2304.06364"
},
{
"id": "2107.02137"
}
] |
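The CVALUES-COMPARISON release mentioned above (145k prompts with paired positive and negative responses) is hosted on ModelScope under the damo/CValues-Comparison path cited earlier. A minimal loading sketch follows, assuming the standard MsDataset API; the split name and the per-record field names are assumptions and may differ from the actual release.

```python
# Hedged sketch: reading CValues-Comparison via ModelScope's dataset API
# (pip install modelscope). Split and field names are assumptions.
from modelscope.msdatasets import MsDataset

dataset = MsDataset.load("CValues-Comparison", namespace="damo", split="train")

for i, record in enumerate(dataset):
    if i >= 3:
        break
    # Each record is expected to pair a prompt with a preferred (positive)
    # and a rejected (negative) response; print a truncated preview.
    print({key: str(value)[:60] for key, value in record.items()})
```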
2307.10337 | 10 | 3Strictly speaking, ChatGPT is a chatbot model powered by GPT-3.5. However, due to its widespread use as a representative LLM and the way it is employed through its API, it can be considered alongside GPT-4 as an LLM in this study. 4We confirmed this information with the platform developers in May 2023, and they revealed that they plan to upgrade to GPT-4 and Midjourney, respectively, in the future.
Since its launch in April 2023, the Chirper community has grown to over 45,000 accounts, with the majority being active. Unlike traditional social bots that require complex rules to define their behavior and strict role assignments (e.g., core and peripheral bots based on different tasks [1]), LLMs-driven social bots can be initialized using simple prompt instructions and adaptively adjust their behavior throughout their lifecycle, demonstrating excellent scalability.
3 METHODOLOGY In this section, we provide a detailed exposition of the methodology employed in this study. This includes the strategy for data collection and the primary methods used for analysis, as depicted in Figure 1. | 2307.10337#10 | Are you in a Masquerade? Exploring the Behavior and Impact of Large Language Model Driven Social Bots in Online Social Networks | As the capabilities of Large Language Models (LLMs) emerge, they not only
assist in accomplishing traditional tasks within more efficient paradigms but
also stimulate the evolution of social bots. Researchers have begun exploring
the implementation of LLMs as the driving core of social bots, enabling more
efficient and user-friendly completion of tasks like profile completion, social
behavior decision-making, and social content generation. However, there is
currently a lack of systematic research on the behavioral characteristics of
LLMs-driven social bots and their impact on social networks. We have curated
data from Chirper, a Twitter-like social network populated by LLMs-driven
social bots and embarked on an exploratory study. Our findings indicate that:
(1) LLMs-driven social bots possess enhanced individual-level camouflage while
exhibiting certain collective characteristics; (2) these bots have the ability
to exert influence on online communities through toxic behaviors; (3) existing
detection methods are applicable to the activity environment of LLMs-driven
social bots but may be subject to certain limitations in effectiveness.
Moreover, we have organized the data collected in our study into the
Masquerade-23 dataset, which we have publicly released, thus addressing the
data void in the subfield of LLMs-driven social bots behavior datasets. Our
research outcomes provide primary insights for the research and governance of
LLMs-driven social bots within the research community. | http://arxiv.org/pdf/2307.10337 | Siyu Li, Jin Yang, Kui Zhao | cs.SI | 18 pages, 7 figures | null | cs.SI | 20230719 | 20230719 | [
{
"id": "2107.03374"
},
{
"id": "2304.03442"
},
{
"id": "2303.18223"
},
{
"id": "2301.07597"
},
{
"id": "2305.16339"
},
{
"id": "2203.05794"
},
{
"id": "2304.13712"
},
{
"id": "2304.10619"
},
{
"id": "2303.15772"
},
{
"id": "2305.03514"
}
] |
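The methodology outlined above covers data collection and analysis only at a high level. As a purely illustrative sketch of the kind of macro-level behavioral features RQ1 is concerned with (posting volume and posting regularity per account), the snippet below aggregates already-collected tweet records; the record fields shown are assumptions, not the Masquerade-23 schema.

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

# Assumed record shape: one dict per tweet with an account id and ISO timestamp.
records = [
    {"account_id": "bot_001", "created_at": "2023-05-01T10:00:00"},
    {"account_id": "bot_001", "created_at": "2023-05-01T14:30:00"},
    {"account_id": "bot_001", "created_at": "2023-05-02T09:15:00"},
    {"account_id": "bot_002", "created_at": "2023-05-01T08:00:00"},
]

# Group posting timestamps by account.
timestamps = defaultdict(list)
for r in records:
    timestamps[r["account_id"]].append(datetime.fromisoformat(r["created_at"]))

# Report tweet count and mean gap (hours) between consecutive posts per account.
for account, times in timestamps.items():
    times.sort()
    gaps = [(b - a).total_seconds() / 3600 for a, b in zip(times, times[1:])]
    avg_gap = f"{mean(gaps):.1f} h" if gaps else "n/a"
    print(f"{account}: {len(times)} tweets, mean inter-post gap = {avg_gap}")
```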