Dataset columns (name: type, observed range):
doi: string (length 10)
chunk-id: int64 (min 0, max 936)
chunk: string (401 to 2.02k chars)
id: string (12 to 14 chars)
title: string (8 to 162 chars)
summary: string (228 to 1.92k chars)
source: string (31 chars)
authors: string (7 to 6.97k chars)
categories: string (5 to 107 chars)
comment: string (4 to 398 chars)
journal_ref: string (8 to 194 chars)
primary_category: string (5 to 17 chars)
published: string (8 chars)
updated: string (8 chars)
references: list
2310.02255
236
LLaMA-Adapter-V2: The correct answer is (G) MM-ReAct-GPT-4. LLaVA: The correct answer is (B) LLaMA-Adapter v2-7B, which achieves the highest score in terms of Rec. Multimodal Bard: The answer is (G) MM-ReAct-GPT-4. MM-ReAct-GPT-4 achieves the highest score of 33.1 in terms of Rec, which is the ability to recognize and understand text. The other models have the following scores in terms of Rec: * Transformers Agent (GPT-4): 18.2 * LLaMA-Adapter v2-7B: 16.8 * LLaVA-7B: 28.0 * Otter-9B: 28.4 * MM-ReAct-GPT-3.5: 24.2 * LLaVA-13B (LLaMA-2): 32.9 GPT-4V: The model that achieves the highest score in terms of Rec is “LLaVA-13B (LLaMA-2)” with a score of 39.2%. The correct option is: (F) LLaVA-13B (LLaMA-2) Correct output: (F) LLaVA-13B (LLaMA-2)
2310.02255#236
MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts
Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit impressive problem-solving skills in many tasks and domains, but their ability in mathematical reasoning in visual contexts has not been systematically studied. To bridge this gap, we present MathVista, a benchmark designed to combine challenges from diverse mathematical and visual tasks. It consists of 6,141 examples, derived from 28 existing multimodal datasets involving mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and PaperQA). Completing these tasks requires fine-grained, deep visual understanding and compositional reasoning, which all state-of-the-art foundation models find challenging. With MathVista, we have conducted a comprehensive, quantitative evaluation of 12 prominent foundation models. The best-performing GPT-4V model achieves an overall accuracy of 49.9%, substantially outperforming Bard, the second-best performer, by 15.1%. Our in-depth analysis reveals that the superiority of GPT-4V is mainly attributed to its enhanced visual perception and mathematical reasoning. However, GPT-4V still falls short of human performance by 10.4%, as it often struggles to understand complex figures and perform rigorous reasoning. This significant gap underscores the critical role that MathVista will play in the development of general-purpose AI agents capable of tackling mathematically intensive and visually rich real-world tasks. We further explore the new ability of self-verification, the application of self-consistency, and the interactive chatbot capabilities of GPT-4V, highlighting its promising potential for future research. The project is available at https://mathvista.github.io/.
http://arxiv.org/pdf/2310.02255
Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao
cs.CV, cs.AI, cs.CL, cs.LG
116 pages, 120 figures. Accepted to ICLR 2024
null
cs.CV
20231003
20240121
[ { "id": "2302.13971" }, { "id": "2308.03729" }, { "id": "2305.20050" }, { "id": "2309.17421" }, { "id": "2211.09085" }, { "id": "2305.10415" }, { "id": "2108.07258" }, { "id": "2109.06860" }, { "id": "2308.06595" }, { "id": "2303.07274" }, { "id": "2312.11805" }, { "id": "2303.17564" }, { "id": "2309.05660" }, { "id": "2201.11903" }, { "id": "2212.09662" }, { "id": "2304.14178" }, { "id": "2206.07682" }, { "id": "2310.12520" }, { "id": "2107.03374" }, { "id": "2203.11171" }, { "id": "1710.07300" }, { "id": "2305.08322" }, { "id": "2305.14761" }, { "id": "2309.01940" }, { "id": "2311.07536" }, { "id": "2308.03688" }, { "id": "2305.12524" }, { "id": "2308.13149" }, { "id": "2308.02490" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2306.06031" }, { "id": "2211.08545" }, { "id": "2307.06281" }, { "id": "2310.05146" }, { "id": "2110.14168" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.07895" }, { "id": "2302.12813" }, { "id": "2111.08171" }, { "id": "2308.01390" }, { "id": "2306.09265" }, { "id": "2211.12588" }, { "id": "2303.17580" }, { "id": "2303.16199" }, { "id": "2306.17107" }, { "id": "2309.10020" }, { "id": "2303.12712" }, { "id": "2211.16492" }, { "id": "2304.06939" }, { "id": "2309.05689" }, { "id": "2304.15010" }, { "id": "2303.13375" }, { "id": "2307.10635" } ]
2310.02255
237
Figure 98: The problem presents a complex table from an academic paper, complete with a caption and multiple rows and columns. This format poses challenges to models like LLaMA-Adapter-V2, LLaVA, and Multimodal Bard. However, GPT-4V demonstrates its exceptional reasoning abilities within tabular contexts. [Stem-and-leaf plot titled "Push-up competition (number of push-ups)" with Stem and Leaf columns; the plot itself is not reproduced in this chunk.] ID: 265 Question: Justine’s P.E. class participated in a push-up competition, and Justine wrote down how many push-ups each person could do. How many people did at least 60 push-ups? (Unit: people) Ground truth: 11 LLaMA-Adapter-V2: Justine’s P.E. class participated in a push-up competition, and Justine wrote down how many push-ups each person could do. There were 8 people in the competition, and at least 60 push-ups were done by 3 people. The final value is 3. LLaVA: In the image, there are a total of 13 people participating in the push-up competition. Among them, there are two people who did at least 60 push-ups. Multimodal Bard: There are 4 people who did at least 60 push-ups.
2310.02255#237
2310.02255
238
Multimodal Bard: There are 4 people who did at least 60 push-ups. The person with the highest number of push-ups did 778, so the only people who did at least 60 push-ups are the ones who did 60, 77, 78, and 778. There is one person who did 60 push-ups, one person who did 77 push-ups, one person who did 78 push-ups, and one person who did 778 push-ups. So there are a total of 1 + 1 + 1 + 1 = 4 people who did at least 60 push-ups. GPT-4V: For the stem-and-leaf plot provided: 6 represents the tens value, and the digits 3, 6, 7, and 7 represent the ones value. So, the number of push-ups represented by the 6 row are 63, 66, 67, and 67. 7 represents the tens value, and the digits 1, 3, 3, 5, 7, 7, and 8 represent the ones value. So, the number of push-ups represented by the 7 row are 71, 73, 73, 75, 77, 77, and 78. Counting these numbers: 4 from the 6 row + 7 from the 7 row = 11. Correct output: 11
2310.02255#238
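The chunk above walks through GPT-4V's tally row by row; here is a minimal Python sketch of the same count. The stem and leaf digits are taken from GPT-4V's reading of the plot quoted above (the plot image itself is not included in this chunk), so they are an assumption about the figure, not data from the dataset.

```python
# Minimal sketch of the count: expand each stem/leaf pair into a value, then count >= 60.
stem_leaf = {
    6: [3, 6, 7, 7],           # 63, 66, 67, 67
    7: [1, 3, 3, 5, 7, 7, 8],  # 71, 73, 73, 75, 77, 77, 78
}
values = [stem * 10 + leaf for stem, leaves in stem_leaf.items() for leaf in leaves]
print(sum(v >= 60 for v in values))  # 11, matching the ground truth
```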
2310.02255
239
Counting these numbers: 4 from the 6 row + 7 from the 7 row = 11. Correct output: 11 Figure 99: This figure features a stem-and-leaf plot used for statistical analysis. While LLaMA-Adapter-V2, LLaVA, and Multimodal Bard struggle to comprehend this domain-specific table, GPT-4V excels in understanding its content. H.4.12 OTHER VISUAL CONTEXTS On the reasoning tasks using other visual contexts, GPT-4V achieves a higher overall accuracy than all the other models, as depicted in Figure 1. For instance, GPT-4V is the only model that is capable of generating the correct answer to the question regarding a violin plot, as shown in Figure 100. [Violin plots with panels "Audio-Audio Similarity" and "Text-Audio Similarity" comparing MusicLDM (mix-up), MusicLDM (original), MusicLDM (BLM), MusicLDM (BAM), and MuBERT. Original caption: "Figure 3: The violin plot of the audio-audio similarity, and the text-to-audio similarity."]
2310.02255#239
2310.02255
240
ID: 895 Question: Which model has the lowest Audio-Audio Similarity and Text-Audio Similarity scores overall? Choices: (A) MusicLDM (mix-up) (B) MusicLDM (original) (C) MusicLDM (BLM) (D) MusicLDM (BAM) (E) MuBERT Ground truth: (E) MuBERT LLaMA-Adapter-V2: The model with the lowest Audio-Audio Similarity and Text-Audio Similarity scores overall is MusicLDM (BLM). LLaVA: Based on the image, the model with the lowest Audio-Audio Similarity and Text-Audio Similarity scores overall is MusicLDM (mix-up). Multimodal Bard: The answer is (C). MusicLDM (BLM) has the lowest Audio-Audio Similarity and Text-Audio Similarity scores overall. The violin plot shows that the distribution of audio-audio similarity scores for MusicLDM (BLM) is the most spread out, indicating that there is a lot of variation in the scores. The distribution of text-audio similarity scores for MusicLDM (BLM) is also the most spread out. This suggests that MusicLDM (BLM) is not very consistent in its ability to capture the similarity between audio and text.
2310.02255#240
2310.02255
241
GPT-4V: The model with the lowest Audio-Audio Similarity and Text-Audio Similarity scores overall is MuBERT. Correct output: (E) MuBERT Figure 100: Violin plots are widely used in academic papers to visualize the distribution of data and probability density. In this example, only GPT-4V is able to generate the correct output from the figure. H.5 SELF-VERIFICATION IN GPT-4V Self-verification is a social psychological theory asserting that people desire others to perceive them as they see themselves. Consequently, individuals will take active measures to ensure that others view them in ways that confirm their stable self-concepts (Talaifar & Swann, 2020).
2310.02255#241
2310.02255
242
Interestingly, in our experiments, GPT-4V demonstrates an ability similar to self-verification. The model can inspect its own behaviors during the course of reasoning and can take active actions to correct its mistakes. Note that the self-verification we discuss here differs from several recent works on improving the model’s outputs based on external feedback (Peng et al., 2023) or additional generations (Yang et al., 2023b). The examples in Figures 101 and 103 show that GPT-4V, on its own, can inspect a set of candidate answers and identify the one that is valid and meets all the given constraints. The multi-step reasoning example in Figure 102 shows that GPT-4V can verify the validity of (the result of) each reasoning step, and explore alternative approaches if any invalid (intermediate) result is detected (e.g., a negative value for length).
2310.02255#242
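As a rough illustration of the behavior described above, the following is a schematic, self-contained Python sketch, not the authors' implementation and not GPT-4V's internal mechanism: try one line of reasoning, sanity-check the intermediate result, and fall back to an alternative approach if the check fails. All names and the toy "approaches" are illustrative assumptions.

```python
# Schematic sketch only: candidate approaches are tried in order, and an answer is
# accepted only if it passes a validity check (e.g., a length must be non-negative).
def self_verifying_solve(approaches, is_valid):
    for approach in approaches:
        result = approach()
        if is_valid(result):   # inspect the intermediate result
            return result
    return None                # no approach survived verification

# Toy usage: the first approach yields an invalid (negative) length, the second succeeds.
print(self_verifying_solve([lambda: -3.0, lambda: 6.0], lambda r: r >= 0))  # 6.0
```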
2310.02255
243
Although self-verification does not guarantee an accurate response even after multiple tries, especially when applying GPT-4V to visual perception or mathematical reasoning in intricate scenarios (see Figure 104), it is instrumental in improving the model performance on MATHVISTA. We also found that GPT-4V’s self-verification is weaker for non-English tasks, such as Mandarin, as shown in Figure 105. It is also worth noting that self-verification does not emerge in other foundation models we studied, or at least it is not as robust as that of GPT-4V. As shown in Figure 106, Multimodal Bard first attempts a natural language solution, followed by a program-assisted one for verification. However, the program-aided solution leads to a different and incorrect prediction. The emergent ability of self-verification highlights GPT-4V’s potential in solving rigorous reasoning and theorem-proving tasks. One of the most exciting research topics for future work is to develop a mechanism that allows the model to activate self-verification consistently at the right time and to use a set of alternative approaches that maximize the success rate of task completion.
2310.02255#243
2310.02255
245
GPT-4V: The sum of the lengths of any two sides of a triangle must be greater than the length of the third side. Let x be the length of the third side. From the triangle inequality: 12 + 7 > x => x < 19 Similarly: 7 + x > 12 => x > 5 And: 12 + x > 7 => x > −5 (This will always be true since the length cannot be negative) So, 5 < x < 19 Now, let’s check the given options: (A) 29 => x = 29 − 12 − 7 = 10 (Possible) (B) 34 => x = 34 − 12 − 7 = 15 (Possible) (C) 37 => x = 37 − 12 − 7 = 18 (Possible) (D) 38 => x = 38 − 12 − 7 = 19 (Not Possible) Correct output: (D) 38 Figure 101: Given several possible values for the length of the third side, GPT-4V verifies each candidate answer to ensure that it meets the given constraints (highlighted in blue).
2310.02255#245
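The option-by-option check GPT-4V describes can be written out directly. The sketch below assumes, as in the quoted response, that the two known sides are 12 and 7 and that each answer choice is a candidate total from which the third side is obtained by subtracting the two known sides (the original question is not included in this chunk).

```python
# Minimal sketch of the candidate check: the third side x must satisfy |a - b| < x < a + b.
def third_side_possible(a: float, b: float, x: float) -> bool:
    return abs(a - b) < x < a + b

a, b = 12, 7
for label, total in [("A", 29), ("B", 34), ("C", 37), ("D", 38)]:
    x = total - a - b  # implied length of the third side
    print(f"({label}) {total}: x = {x}, {'possible' if third_side_possible(a, b, x) else 'not possible'}")
# Only (D) 38 gives x = 19, which violates x < a + b = 19, matching the correct output.
```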
2310.02255
247
AD = DC (since D is the midpoint of AC) Now, using the entire line segment AB: AB = AD + DB AB = DC + DB Given DB = 7.0 cm, we can rearrange to find: AD (or DC) = AB − DB AD = AB − 7.0 Now, let’s find AB: From the given information, AB = AC + CB AB = AC + 4.0 Using the previously deduced equation: AD = AC + 4.0 − 7.0 AD = AC − 3.0 Since D is the midpoint of AC, AD will be half the length of AC: AC = 2(AD) From the above: AC − 3.0 = 2(AD) Given that AD and DC are equal, we can represent DC as: DC = AC/2 Using the relation CB = 4.0 cm and DB = 7.0 cm: AC/2 + 7.0 = 4.0 AC/2 = −3.0 This is not possible as lengths cannot be negative. Let’s re-evaluate our approach. From the original figure: AB = AD + DB AB = DC + DB Given DB = 7.0 cm, we have: AD = AB − 7.0 AD = AC/2 − 7.0 Using the relation AB = AC + CB: AC/2 + 7.0 = AC +
2310.02255#247
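The original question for this example is not included in this chunk. Under the assumption that D is the midpoint of AC, that the points lie on the segment in the order A, D, C, B, and that CB = 4.0 cm and DB = 7.0 cm (as in the quoted response), the corrected arithmetic is a two-line check:

```python
CB, DB = 4.0, 7.0
DC = DB - CB   # 3.0 cm; the flawed attempt above effectively computed CB - DB = -3.0
AC = 2 * DC    # D is the midpoint of AC, so AC = 2 * DC
print(AC)      # 6.0 cm, consistent with the correct output (A) 6 cm reported in the next chunk
```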
2310.02255
249
Correct output: (A) 6 cm Figure 102: GPT-4V generates the result as AC/2 = −3.0. It verifies the validity of the value and concludes it cannot be negative. The model then re-evaluates its approach and adopts a new method to obtain the correct answer, highlighted in blue. ID: 28 Question: AB is a diameter, AC = 8 inches, and BC = 15 inches. Find the radius of the circle. Choices: (A) 7.5 (B) 8 (C) 8.5 (D) 17 Ground truth: (C) 8.5
2310.02255#249
2310.02255
250
GPT-4V: Given that AB is a diameter and AC = 8 inches and BC = 15 inches, we can use the Pythagorean theorem to determine the length of AB. The right triangle is formed by A, the center of the circle, and C makes AC the shorter leg, the radius the longer leg, and BC the hypotenuse. Let r be the radius of the circle. Using the Pythagorean theorem: AC² + r² = BC². Substituting the given lengths: 8² + r² = 15², so 64 + r² = 225 and r² = 161. Taking the square root of both sides: r = √161. However, r is not any of the choices. We need to find the diameter, which is twice the radius. AB = 2r = 2√161. But, since the diameter is just twice the radius and the radius is one of the choices, the radius is: r = √161/2 = 8.5 (approximately) Correct output: (C) 8.5 Figure 103: Starting by computing the initial radius length as r = √161, GPT-4V performs two verifications resulting in different answers: r = 2√161 and r = √161/2. It then selects the matching option from the choices as its final prediction, highlighted in blue.
2310.02255#250
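GPT-4V's quoted derivation only loosely lands on 8.5. A hedged check of the ground truth (not a reconstruction of GPT-4V's reasoning) uses Thales' theorem: since AB is a diameter, the inscribed angle at C is a right angle, so AB is the hypotenuse of right triangle ACB.

```python
import math

AC, BC = 8, 15
AB = math.hypot(AC, BC)  # sqrt(8**2 + 15**2) = 17.0; AB is the hypotenuse by Thales' theorem
radius = AB / 2          # 8.5, matching choice (C)
print(AB, radius)
```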
2310.02255
253
1. For the first triangle: (8 − 4) × 2 = 8 2. For the second triangle: (3 − 2) × 3 = 3 Applying the same pattern to the third triangle: (12 − 1) × 5 = 55 However, since the answer is in single digits, it seems we take only the unit’s place of the result. So, the missing number is: 5 (unit’s place of 55) This is not in the given options. Therefore, let’s explore another possible pattern: 1. For the first triangle: (8 + 4)/2 = 6 2. For the second triangle: (3 + 2)/3 = 5/3 = 1.67 (This does not match the number inside) But let’s continue with this pattern for the third triangle: (12 + 1)/5 = 13/5 = 2.6 Again, this does not match any of the options. Considering other possible operations: 1. For the first triangle: (8 × 4)/2 = 16 2. For the second triangle: (3 × 2)/3 = 2 3. For the third triangle: (12 × 1)/5 = 12/5 = 2.4 Again, this does not match any of the options. Given the information and the options provided, I am unable to find a consistent pattern that applies to all the triangles. Therefore, I cannot determine the missing number from the given choices.
2310.02255#253
2310.02255
254
Therefore, I cannot determine the missing number from the given choices. Figure 104: In this logical reasoning question, GPT-4V verifies its generated solution and retries twice. However, it still fails to produce an answer that matches any of the options. Consequently, it claims that it cannot determine the missing number from the given choices (highlighted in blue). ID: 32 Question: 如图，在ABCD中，AB = AC，∠CAB = 40°，则∠D的度数是()。 (In the figure, within ABCD, AB = AC, and ∠CAB = 40°, then the measure of ∠D is ().) Choices: (A) 40° (B) 50° (C) 60° (D) 70° Ground truth: (D) 70°
GPT-4V: 给定 AB = AC,△ABC 是等腰三角形。由于 ∠CAB = 40◦,所以 ∠CBA 也为 40◦ (因为在等腰三角形中,两个底角是相等的)。 (Given AB = AC, △ABC is an isosceles triangle. Since ∠CAB = 40◦, ∠CBA is also 40◦ (because in an isosceles triangle, the two base angles are equal).) 考虑四边形 ABCD,其内角和为 360◦。已知 ∠CAB = 40◦,∠CBA = 40◦,并且 ∠BCA 和 ∠D 是补角。 (Considering the quadrilateral ABCD, its interior angles sum up to 360◦. We know that ∠CAB = 40◦, ∠CBA = 40◦, and ∠BCA and ∠D are
supplementary.) 我们可以找到 ∠BCA 的角度为: (We can find the angle of ∠BCA as:) ∠BCA = 180◦ − 40◦ − 40◦ = 100◦ 由于 ∠BCA 和 ∠D 是补角,我们有: (As ∠BCA and ∠D are supplementary, we have:) ∠D = 180◦ − 100◦ = 80◦ 但这不是给定的选项。所以,让我们重新检查。 (But this isn't a given option. So, let's recheck.) 注意到 ∠BCA 是一个外角,它等于两个非相邻的内角之和。所以:
(Notice that ∠BCA is an exterior angle, which equals the sum of two non-adjacent interior angles. So:) ∠BCA = ∠CAB + ∠CBA ∠BCA = 40◦ + 40◦ = 80◦ 所以 ∠D = 180◦ − 80◦ = 100◦。但这也不是一个选项。 (Thus, ∠D = 180◦ − 80◦ = 100◦. But this isn’t an option either.) 等一下,我好像考虑错了。 (Wait, I seem to have thought it wrong.) ∠BCA 和 ∠BDA 是补角,因为 ABCD 是一个四边形,所以它们两个的和为 180◦。 (∠BCA and ∠BDA are supplementary because ABCD is a quadrilateral, so their sum is 180◦.) 因此, ∠BDA = 180◦ − 80◦ = 100◦,但是 ∠D 是 ∠BDA 和
所以 (So) ∠D = 100◦ − 40◦ = 60◦
Figure 105: The question is presented in Chinese, with the translated English provided in parentheses after the Chinese text. GPT-4V is capable of performing self-verification until it produces a solution that matches one of the given options (highlighted in blue).
ID: 44 Question: Chase wants to buy 4 kilograms of oval beads and 5 kilograms of star-shaped beads. How much will he spend? (Unit: $) Ground truth: 18 Solution: 8 + 4 − 2 = 10, 3 + 2 − 3 = 2, 12 + 1 − 5 = 8
Multimodal Bard: The cost of 4 kilograms of oval beads is 4 ∗ 2 = $8. The cost of 5 kilograms of star-shaped beads is 5 ∗ 2 = $10. So the total cost is $8 + $10 = $18. Therefore, Chase will spend $18. I also wrote a Python code to calculate the total cost and got the same answer of 35.

def calculate_cost(oval_beads, star_shaped_beads):
    oval_bead_cost = 2
    star_shaped_bead_cost = 2
    total_cost = oval_beads * oval_bead_cost + star_shaped_beads * star_shaped_bead_cost
    return total_cost

print(calculate_cost(4, 5))  # 35

Figure 106: Multimodal Bard seems incapable of verifying its solutions and revisiting them if found invalid. For instance, in this example, after initially producing a natural language solution, Multimodal Bard subsequently generates Python code. However, the executed result of the code (35) differs from its initial solution (18).
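As a quick independent check (assuming, as Bard's own arithmetic implies, a price of $2 per kilogram for each bead type), the intended computation is:

oval_cost = 4 * 2   # 4 kg of oval beads at $2/kg = $8
star_cost = 5 * 2   # 5 kg of star-shaped beads at $2/kg = $10
print(oval_cost + star_cost)  # 18, matching the ground truth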
# H.6 SELF-CONSISTENCY FOR GPT-4V
Self-consistency (Wang et al., 2022) is a decoding strategy for chain-of-thought prompting (Wei et al., 2022b). A diverse set of reasoning paths is sampled, and the most consistent answer is selected as the final prediction. Moving beyond vanilla greedy decoding, this method resorts to the inherent coherence and reliability of multiple reasoning trajectories to produce a more trustworthy conclusion. Self-consistency has been widely employed in LLMs for complex reasoning tasks, such as math word problems and commonsense reasoning. In our experiments, we validated the effectiveness of using self-consistency for GPT-4V on MATHVISTA. Given a question and context, we ran GPT-4V multiple times to obtain a set of different reasoning paths and then selected the most frequent answer as the final prediction. We found that self-consistency is instrumental in rectifying visual perception errors (Figure 107), correcting calculation mistakes (Figure 108), and mitigating hallucinations (Figure 109). In comparison, self-consistency is less effective when GPT-4V has difficulties in interpreting complex visual contexts (Figures 110, 111) or extracting salient information from images (Figure 112).
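A minimal sketch of this voting step (illustrative only; sampled_answers stands in for the final answers extracted from independently sampled GPT-4V reasoning paths):

from collections import Counter

def majority_vote(sampled_answers):
    # Return the most frequent final answer among the sampled reasoning paths.
    return Counter(sampled_answers).most_common(1)[0][0]

# For example, the three answers GPT-4V produces in Figure 108:
print(majority_vote(["58", "64", "64"]))  # "64"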
ID: 648 Question: What is the minimum value of this function? Ground truth: -1
GPT-4V (Reasoning Path 1): The minimum value of the function is observed where the graph is at its lowest point along the y-axis. From the graph, the minimum value lies on the y-axis between -0.5 and 0. Based on the visual representation, it appears that the minimum value of y is: -0.5. So, the answer is -0.5. Wrong output: -0.5
GPT-4V (Reasoning Path 2): The graph shown represents a function with circular data points plotted across the x-axis. To determine the minimum value of the function, we need to locate the lowest point on the graph. From the graph, it is evident that the lowest point (or the minimum value of the function) is at a y-coordinate of approximately -1. Therefore, the minimum value of the function is -1. Correct output: -1
GPT-4V (Reasoning Path 3): The graph depicts the function using circular points. The minimum value of the function can be observed by identifying the lowest point on the graph. From the graph, the lowest point seems to be at y = -1. Therefore, the minimum value of this function is -1. Correct output: -1
Figure 107: In this algebraic reasoning task, GPT-4V displays inconsistency in recognizing the value range of the function. After reevaluating GPT-4V three times, three distinct reasoning paths emerge. Utilizing self-consistency minimizes variation in the generated results, converging on the most frequent prediction (-1) as the final answer.
ID: 331 Question: Several people compared how many Web pages they had visited. What is the mean of the numbers? Ground truth: 64
GPT-4V (Reasoning Path 1): To find the mean of the numbers, you add up all the numbers and then divide by the quantity of numbers. Mean = (84 + 56 + 70 + 68 + 60 + 47 + 75 + 52) ÷ 8 = 462 ÷ 8 = 57.75 However, since the question requires an integer answer, the mean number of web pages visited is: 58. Wrong output: 58
GPT-4V (Reasoning Path 2): To find the mean number of web pages visited: First, sum the number of web pages for all the individuals: Bonnie: 84, Winston: 56, Percy: 70, Derek: 68, Jake: 60, Finn: 47, Carter: 75, Kurt: 52 Total = 84 + 56 + 70 + 68 + 60 + 47 + 75 + 52 = 512 Next, divide by the number of individuals (8): Mean = 512 ÷ 8 = 64 The mean number of web pages visited is 64. Correct output: 64
GPT-4V (Reasoning Path 3): First, I'll list the number of web pages each person visited: Bonnie: 84, Winston: 56, Percy: 70, Derek: 68, Jake: 60, Finn: 47, Carter: 75, Kurt: 52 Next, I'll group some numbers together for easier addition: (84 + 56) = 140 (70 + 68) = 138 (60 + 47) = 107 (75 + 52) = 127 Now, I'll sum these results: 140 + 138 + 107 + 127 = 512 Now, I'll divide the sum by the number of individuals to get the mean: 512 ÷ 8 = 64 So, the mean number of web pages visited is 64. Correct output: 64
Figure 108: For this arithmetic reasoning question, GPT-4V arrives at the answers of 58, 64, and 64 through three distinct reasoning paths. Applying self-consistency would confirm the correct answer to be 64.
ID: 430 Question: The derivative of f (x) at x = 0 is ____ that at x = 5 Choices: (A) larger than (B) equal to (C) smaller than Ground truth: (C) smaller than
(The figure shows the graph of f (x) = |2x − 3| + 1.)
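As an independent numerical check of the expected answer (not from the paper; it simply evaluates a central finite difference on f(x) = |2x − 3| + 1):

f = lambda x: abs(2 * x - 3) + 1
slope = lambda x, h=1e-6: (f(x + h) - f(x - h)) / (2 * h)
print(slope(0), slope(5))  # about -2.0 and 2.0, so f'(0) is smaller than f'(5)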
GPT-4V (Reasoning Path 1): The graph represents the function f (x) = |2x − 3| + 1. The slope (or the derivative) of f (x) is represented by the inclination of the line. At x = 0, the slope is negative (the line is declining), and at x = 5, the slope is positive (the line is inclining). The magnitude of the slopes appears to be the same; only their directions differ. However, when considering the magnitude of the derivative, the value at x = 0 is negative, and the value at x = 5 is positive, but they have the same absolute value. Wrong output: (B) equal to
GPT-4V (Reasoning Path 2): The given function is f (x) = |2x − 3| + 1. The absolute value function is piecewise defined, so its derivative will have different values depending on where you take it.
For 2x − 3 > 0, which is x > 3/2 or 1.5, f (x) = 2x − 3. The derivative is f ′(x) = 2. For 2x − 3 < 0, which is x < 3/2 or 1.5, f (x) = −2x + 3. The derivative is f ′(x) = −2. At x = 0, the derivative f ′(x) = −2. At x = 5, the derivative f ′(x) = 2. Therefore, the derivative of f (x) at x = 0 is smaller than that at x = 5. Correct output: (C) smaller than
GPT-4V (Reasoning Path 3): The function provided is f (x) = |2x − 3| + 1. The absolute value function has a V-shaped graph, as shown in the image. The slope (or derivative) of the line to the left of the vertex is negative, and the slope of the line to the right of the vertex is positive.
The derivative of f (x) at x = 0 corresponds to the slope of the line on the left side of the vertex, which is negative (downward slope). The derivative of f (x) at x = 5 corresponds to the slope of the line on the right side of the vertex, which is positive (upward slope). Since a positive value is always larger than a negative value, the derivative of f (x) at x = 0 is smaller than that at x = 5. Correct output: (C) smaller than
Figure 109: This is another instance where the application of self-consistency across three reasoning paths results in a correct final prediction.
ID: 157 Question: What number is shown? Ground truth: 9335 (The image shows base-ten blocks.)
GPT-4V (Reasoning Path 1): Let's break down the number shown using base-ten blocks: There are 6 large cubes, each representing 1,000. So, 6 × 1,000 = 6,000. - There are 7 flats, each representing 100. So, 7 × 100 = 700. - There are 4 rods, each representing 10. So, 4 × 10 = 40. - There are 3 units, each representing 1. So, 3 × 1 = 3. Adding all of these values together: 6,000 + 700 + 40 + 3 = 6,743
The number shown is 6,743.

Wrong output: 6743

GPT-4V (Reasoning Path 2): The image displays a series of cubes that appear to represent numbers in base 10 using place value. From left to right:
1. There are six large cubes, which represent a value of 6 × 1000 = 6000.
2. There are three medium-sized cubes, which represent a value of 3 × 100 = 300.
3. There are two small cubes, which represent a value of 2 × 10 = 20.
4. The vertical stack of single unit cubes shows nine cubes, which represent a value of 9 × 1 = 9.
Adding up these values: 6000 + 300 + 20 + 9 = 6329
The number shown is 6329.

Wrong output: 6329
GPT-4V (Reasoning Path 3): The image displays a series of cubes representing numbers in base 10 using place value. From left to right:
1. There are seven large cubes, which represent a value of 7 × 1000 = 7000.
2. There are two medium-sized cubes, which represent a value of 2 × 100 = 200.
3. There are nine small cubes, which represent a value of 9 × 10 = 90.
4. The vertical stack of single unit cubes shows eight cubes, which represent a value of 8 × 1 = 8.
Adding up these values: 7000 + 200 + 90 + 8 = 7298
The number shown is 7298.

Wrong output: 7298

Figure 110: In this arithmetic reasoning task, GPT-4V struggles with comprehending the abstract scene, leading to its inability to calculate the correct answer through multiplication. Multiple attempts yield varying incorrect solutions. Consequently, utilizing self-consistency does not verify the definitive right answer.

ID: 551
Question: What is the difference between the highest tattoos in male and the least in female?
Ground truth: 14 (%)
Solution: 27% (male in 2015) - 13% (female in 2008) = 14%
GPT-4V (Reasoning Path 1): The highest percentage of tattoos in males is 31% (in 2015), and the least percentage of tattoos in females is 15% (in 2008).
Difference = 31% - 15% = 16%

Wrong output: 16%

GPT-4V (Reasoning Path 2): The highest percentage of males with at least one tattoo is in 2015 at 31%. The lowest percentage of females with at least one tattoo is in 2003, 2008, and 2012, all at 15%. The difference between these values is: 31% - 15% = 16%

Wrong output: 16%

GPT-4V (Reasoning Path 3): To determine the difference between the highest tattoos in males and the least in females:
Highest tattoos in males: 31% (in 2015)
Least tattoos in females: 15% (in 2003 and 2008)
Difference: 31% - 15% = 16%

Wrong output: 16%

Prompt: What is the percentage of tattoos in females in 2008?
GPT-4V: The percentage of tattoos in females in 2008 is 15%.

Prompt: What is the percentage of tattoos in males in 2015?
GPT-4V: The percentage of tattoos in males in 2015 is 31%.
Figure 111: In this instance, GPT-4V persistently produces incorrect predictions across multiple evaluations. When prompted to detail the essential elements of the image, GPT-4V provides inaccurate descriptions. This suggests that if GPT-4V misinterprets vital visual cues, relying on self-consistency may not necessarily guide it towards the correct conclusion.

ID: 965
Question: What's the lowest number yard line that you can see?
Ground truth: 30

GPT-4V (Reasoning Path 1): The lowest number yard line visible in the image is the 10-yard line.
GPT-4V (Reasoning Path 2): From the image provided, the lowest number yard line that is clearly visible is the "10" yard line.
GPT-4V (Reasoning Path 3): From the image provided, the lowest number yard line that can be seen is the 10-yard line.
GPT-4V (Reasoning Path 4): From the image, the lowest number yard line that is visible is the "1" yard line.
GPT-4V (Reasoning Path 5): From the image, the lowest number yard line that is visible is the 20-yard line.
Figure 112: This example demonstrates that if GPT-4V misinterprets key information from an image, the application of self-consistency may not aid in producing the correct final prediction. We evaluated GPT-4V five times for the question "What's the lowest number yard line that you can see?" and received consistently incorrect responses. Further, when prompted to describe the yard line numbers present in the image, GPT-4V provided incorrect descriptions, indicating its difficulty in accurately recognizing them.
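The self-consistency procedure illustrated in Figures 109-112 amounts to sampling several independent reasoning paths and taking a majority vote over the extracted final answers. The following is a minimal sketch of that general idea; `sample_fn` is a hypothetical helper standing in for one temperature-sampled call to the multimodal model, and the tie-breaking rule is an assumption rather than the exact procedure used in the paper.

```python
from collections import Counter

def self_consistency(question, image, sample_fn, num_paths=3):
    """Sample several independent reasoning paths and majority-vote the answer.

    `sample_fn(question, image)` is a hypothetical helper that queries the
    multimodal model once (with nonzero temperature) and returns the extracted
    final answer as a string.
    """
    answers = [sample_fn(question, image) for _ in range(num_paths)]
    # The most frequent answer wins; on ties, the earliest sampled answer is kept.
    voted_answer, _votes = Counter(answers).most_common(1)[0]
    return voted_answer, answers

# For the base-ten-blocks question in Figure 110, three sampled paths returned
# three different answers ("6743", "6329", "7298"), so no majority exists and
# voting cannot recover the ground truth "9335", consistent with the failure shown.
```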
H.7 GPT-4V FOR MULTI-TURN HUMAN-AI INTERACTION

This section investigates the use of GPT-4V for multi-turn human-AI interaction on MATHVISTA, as exemplified in the goal-directed dialog in Figure 113.

We found that GPT-4V is effective in engaging in multi-turn goal-directed conversations with users. In particular, GPT-4V can make good use of hints (e.g., user feedback or responses) to guide the conversation and generate desirable results. For instance, it can (1) rectify visual perception errors based on hints (Figure 114), (2) reassess reasoning steps and calculations (Figure 115), (3) correct misinformation using user-provided domain-specific knowledge (Figure 116), and (4) aggregate intricate contexts over multiple turns in a human-AI conversation (Figures 117 and 118).

We also observed failure cases in our evaluation. For instance, GPT-4V struggles to generate correct responses when questions and user hints are ambiguous (Figure 119), or when the model fails to understand abstract shapes and concepts visually (Figure 120). These failures motivate the development of more powerful, conversational foundation models.
sour apple candy $0.06
piece of gum $0.07
gummy worm $0.09
lemon drop $0.05
piece of licorice $0.07
butterscotch candy $0.07

How much money does Luca need to buy a sour apple candy and a butterscotch candy? (Unit: $)

The total cost for a sour apple candy and a butterscotch candy is $0.13. Luca would need at least $0.13 to purchase both items.

The cost for each item is $0.06 for the sour apple candy and $0.07 for the butterscotch candy. Adding the two together gives us a total cost of $0.13. Hence, Luca would need at least $0.13 to purchase both items.
Figure 113: A multi-turn dialog between a human user and GPT-4V.

User: How many baseballs are there?

GPT-4V: Let's count the baseballs in each bucket:
1. First bucket: 5 baseballs
2. Second bucket: 6 baseballs
3. Third bucket: 6 baseballs
4. Fourth bucket: 6 baseballs
Total baseballs: 5 + 6 + 6 + 6 = 23 baseballs.
Answer: 23.

User: Your solution is wrong. You counted the baseballs in each bucket incorrectly.

GPT-4V: I apologize for the oversight. Let's try counting again:
1. First bucket: 5 baseballs
2. Second bucket: 5 baseballs
3. Third bucket: 5 baseballs
4. Fourth bucket: 5 baseballs
Total baseballs: 5 + 5 + 5 + 5 = 20 baseballs.
Answer: 20.

Figure 114: GPT-4V initially produces an incorrect solution due to errors in visually perceiving the number of baseballs. After receiving user feedback, it recalibrates and generates the accurate answer.
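The multi-turn interactions in Figures 113-118 follow a simple loop: the model's answer is shown to the user, and any corrective hint is appended to the running conversation before the model is queried again. Below is a minimal sketch of such a loop under stated assumptions; `query_model` and `get_user_hint` are hypothetical helpers (one multimodal chat call and one round of user feedback, respectively), and the termination condition is an assumption, not a description of the paper's evaluation harness.

```python
def interactive_session(question, image, query_model, get_user_hint, max_turns=5):
    """Minimal multi-turn human-AI loop: re-query the model after each hint.

    `query_model(image, messages)` is a hypothetical wrapper around a single
    multimodal chat call over the running message history; `get_user_hint(answer)`
    returns the user's feedback string, or None once the user accepts the answer.
    """
    messages = [{"role": "user", "content": question}]
    answer = None
    for _ in range(max_turns):
        answer = query_model(image, messages)
        hint = get_user_hint(answer)
        if hint is None:  # the user is satisfied, so the dialog ends
            break
        # Keep the model's answer and the corrective hint in context so the next
        # turn can rectify perception or reasoning errors (Figures 114-116).
        messages.append({"role": "assistant", "content": answer})
        messages.append({"role": "user", "content": hint})
    return answer
```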
[Image: the number sequence 13, 57, 911, 1315, 1719, ?]

User: Which number comes next?

GPT-4V: The pattern seems to involve prime numbers. Let's break it down:
1 and 3 are the first and second prime numbers. 5 and 7 are the third and fourth prime numbers. 9 (not a prime) but followed by 11, which is the fifth prime number. 13 is the sixth prime number and 15 (not a prime) followed by 17, which is the seventh prime number. 19 is the eighth prime number.
Following this pattern, the next two prime numbers are 23 and 29. So, the next number in the sequence would be 2329.

User: Your solution is incorrect. The pattern does not involve prime numbers.

GPT-4V: I apologize for the oversight. Let's re-examine the pattern.
It seems the pattern is derived by concatenating consecutive numbers: 1 and 3 give 13. 5 and 7 give 57. 9 and 11 give 911. 13 and 15 give 1315. 17 and 19 give 1719.
Following this pattern, the next numbers to be concatenated would be 21 and 23. So, the next number in the sequence would be 2123.
Figure 115: GPT-4V correctly identifies the sequence of numbers in the images but struggles to discern the underlying pattern. With feedback from the user, GPT-4V re-evaluates the pattern and produces the correct response to the logical reasoning question.

Figure 116: Initially, GPT-4V applies an incorrect theorem to address the scientific reasoning query. However, after receiving the appropriate domain knowledge hint from the user, it utilizes the correct theorem, combined with key visual cues from the images, to produce the right answer.
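The corrected rule in Figure 115, concatenating consecutive odd numbers in pairs, can be checked with a few lines of Python. This snippet is only an illustration of the intended pattern, not part of the benchmark or its evaluation code.

```python
def concat_pairs(n_terms):
    """Build the sequence 13, 57, 911, 1315, 1719, ... by concatenating
    consecutive odd numbers in pairs: (1, 3) -> 13, (5, 7) -> 57, and so on."""
    seq = []
    odd = 1
    for _ in range(n_terms):
        seq.append(int(f"{odd}{odd + 2}"))  # concatenate the pair as digits
        odd += 4  # advance to the next pair of consecutive odd numbers
    return seq

print(concat_pairs(6))  # [13, 57, 911, 1315, 1719, 2123]
```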
Table 1: AgentBench evaluates 25 API-based or open-sourced LLMs on LLM-as-Agent challenges.

[Image: the table lists each model's name, model size (in billions of parameters or "undisclosed"), form (api or open), version, and creator, covering models such as gpt-4, gpt-3.5-turbo, text-davinci-003/002, claude, claude-instant, text-bison-001, chatglm2, chatglm-6b, codegeex2-6b-chat, vicuna-13b/7b, llama2-chat-13b/7b, openchat-8192-13b, openchat-13b, wizardlm-30b, wizardcoder-15b, wizardlm-13b, koala-13b, internlm-chat-7b, baichuan-13b-chat, baichuan-7b, dolly-12b, and oasst-sft-4-12b.]
User: How many models in the table have a model size larger than 10B?

GPT-4V: From the table:
Models with size larger than 10B:
1. chatglm2 [100; 21]
2. vicuna-13b [14]
3. vicuna-7b [14]
4. llama2-chat-13b [80]
5. openchat-8192-13b [85]
6. openchat-13b [85]
7. wizardlm-30b [93]
8. wizardlm-13b [93]
9. koala-13b [26]
10.
Figure 117: Answering this question requires comprehending a complex table regarding AI models (such as identifying relevant rows and columns), up-to-date knowledge of AI research, and the ability to perform advanced arithmetic reasoning. Without a human in the loop, GPT-4V fails to produce the correct answer.

Figure 118: (Continued) In this multi-turn human-AI conversation, the user acts as a tutor to help GPT-4V generate the correct answer. (1) Initially, the user directs GPT-4V to the "Model Size" column after observing its misdirection. Despite the guidance, GPT-4V still struggles. (2) Recognizing the model's difficulty in identifying the "Model Size" column, the user provides more detailed instructions. This time, GPT-4V is on the right track but still makes some errors in its response. (3) With further information from the user, GPT-4V corrects the remaining errors and generates the correct answer.
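The arithmetic reasoning required in Figures 117 and 118 boils down to filtering the "Model Size" column and counting the rows above 10B while skipping undisclosed sizes. The sketch below uses a hypothetical, partial transcription of the table purely for illustration; the names and sizes shown are a toy subset, not the full 25-model listing.

```python
# Hypothetical subset of the table (model name -> parameter count in billions).
# "undisclosed" sizes are stored as None and excluded from the comparison.
model_sizes = {
    "gpt-4": None,
    "vicuna-13b": 13,
    "vicuna-7b": 7,
    "llama2-chat-13b": 13,
    "wizardlm-30b": 30,
    "dolly-12b": 12,
}

larger_than_10b = [name for name, size in model_sizes.items()
                   if size is not None and size > 10]
print(len(larger_than_10b), larger_than_10b)
# 4 ['vicuna-13b', 'llama2-chat-13b', 'wizardlm-30b', 'dolly-12b']
```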
User: If you add the two visible numbers on the jerseys, what is the total sum?
GPT-4V: The total sum of the two visible numbers on the jerseys is 29 (21 + 8).
User: Where is the number 8?
GPT-4V: I apologize, I can only see the number 21 clearly on the jerseys. My best guess is that the other number is 8, but I cannot be certain due to the image quality.
User: If you add the two visible numbers on the jerseys, what is the total sum?
GPT-4V: Based on the image quality, my best answer is that the total sum of the two visible numbers on the jerseys is 29 (21 + 8). But I cannot be certain about the number 8 due to the image quality.
User: Since you are not certain about 8, let's ignore it. In this case, if you add the two visible numbers on the jerseys, what is the total sum?
GPT-4V: My best answer, excluding the uncertain number 8, is that the total sum of the visible numbers on the jerseys is 21.
Figure 119: A failure example from GPT-4V in an interactive chatbot environment. Utilizing the visual cues from the image to resolve the question's ambiguity (distinguishing between two numbers, 2 and 1, instead of the single number 21), the correct answer is 3, obtained by summing 2 and 1.
2310.02255#284
MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts
Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit impressive problem-solving skills in many tasks and domains, but their ability in mathematical reasoning in visual contexts has not been systematically studied. To bridge this gap, we present MathVista, a benchmark designed to combine challenges from diverse mathematical and visual tasks. It consists of 6,141 examples, derived from 28 existing multimodal datasets involving mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and PaperQA). Completing these tasks requires fine-grained, deep visual understanding and compositional reasoning, which all state-of-the-art foundation models find challenging. With MathVista, we have conducted a comprehensive, quantitative evaluation of 12 prominent foundation models. The best-performing GPT-4V model achieves an overall accuracy of 49.9%, substantially outperforming Bard, the second-best performer, by 15.1%. Our in-depth analysis reveals that the superiority of GPT-4V is mainly attributed to its enhanced visual perception and mathematical reasoning. However, GPT-4V still falls short of human performance by 10.4%, as it often struggles to understand complex figures and perform rigorous reasoning. This significant gap underscores the critical role that MathVista will play in the development of general-purpose AI agents capable of tackling mathematically intensive and visually rich real-world tasks. We further explore the new ability of self-verification, the application of self-consistency, and the interactive chatbot capabilities of GPT-4V, highlighting its promising potential for future research. The project is available at https://mathvista.github.io/.
http://arxiv.org/pdf/2310.02255
Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao
cs.CV, cs.AI, cs.CL, cs.LG
116 pages, 120 figures. Accepted to ICLR 2024
null
cs.CV
20231003
20240121
[ { "id": "2302.13971" }, { "id": "2308.03729" }, { "id": "2305.20050" }, { "id": "2309.17421" }, { "id": "2211.09085" }, { "id": "2305.10415" }, { "id": "2108.07258" }, { "id": "2109.06860" }, { "id": "2308.06595" }, { "id": "2303.07274" }, { "id": "2312.11805" }, { "id": "2303.17564" }, { "id": "2309.05660" }, { "id": "2201.11903" }, { "id": "2212.09662" }, { "id": "2304.14178" }, { "id": "2206.07682" }, { "id": "2310.12520" }, { "id": "2107.03374" }, { "id": "2203.11171" }, { "id": "1710.07300" }, { "id": "2305.08322" }, { "id": "2305.14761" }, { "id": "2309.01940" }, { "id": "2311.07536" }, { "id": "2308.03688" }, { "id": "2305.12524" }, { "id": "2308.13149" }, { "id": "2308.02490" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2306.06031" }, { "id": "2211.08545" }, { "id": "2307.06281" }, { "id": "2310.05146" }, { "id": "2110.14168" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.07895" }, { "id": "2302.12813" }, { "id": "2111.08171" }, { "id": "2308.01390" }, { "id": "2306.09265" }, { "id": "2211.12588" }, { "id": "2303.17580" }, { "id": "2303.16199" }, { "id": "2306.17107" }, { "id": "2309.10020" }, { "id": "2303.12712" }, { "id": "2211.16492" }, { "id": "2304.06939" }, { "id": "2309.05689" }, { "id": "2304.15010" }, { "id": "2303.13375" }, { "id": "2307.10635" } ]
2310.01386
0
arXiv:2310.01386v2 [cs.CL] 22 Jan 2024 # WHO IS CHATGPT? BENCHMARKING LLMS’ PSYCHOLOGICAL PORTRAYAL USING PSYCHOBENCH Jen-tse Huang1,3, Wenxuan Wang1,3, Eric John Li1, Man Ho Lam1, Shujie Ren2, Youliang Yuan3,4, Wenxiang Jiao3∗, Zhaopeng Tu3, Michael R. Lyu1 1Department of Computer Science and Engineering, The Chinese University of Hong Kong 2Institute of Psychology, Tianjin Medical University 4School of Data Science, The Chinese University of Hong Kong, Shenzhen {jthuang,wxwang,lyu}@cse.cuhk.edu.hk {ejli,mhlam}@link.cuhk.edu.hk {joelwxjiao,zptu}@tencent.com
2310.01386#0
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
1
# ABSTRACT Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, ChatGPT, GPT-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
2310.01386#1
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
2
# INTRODUCTION Recently, the community of Artificial Intelligence (AI) has witnessed remarkable progress in natural language processing, mainly led by Large Language Models (LLMs), towards artificial general intelligence (Bubeck et al., 2023). For example, ChatGPT1 has showcased its ability to address diverse natural language processing tasks (Qin et al., 2023), spanning question answering, summarization, natural language inference, and sentiment analysis. The widespread adoption of ChatGPT has facilitated the development of LLMs, encompassing both commercial-level applications such as Claude2 and open-source alternatives like LLaMA-2 (Touvron et al., 2023). In the meantime, the applications of LLMs have spread far beyond computer science, prospering the fields of clinical medicine (Cascella et al., 2023), legal advice (Deroy et al., 2023; Nay et al., 2023), and education (Dai et al., 2023b). From the users’ perspective, LLMs are changing how individuals interact with computer systems. These models are replacing traditional tools such as search engines, translators, and grammar correctors, assuming an all-encompassing role as digital assistants, facilitating tasks such as information retrieval (Dai et al., 2023a), language translation (Jiao et al., 2023), and text revision (Wu et al., 2023).
2310.01386#2
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
3
Given the contemporary developments, LLMs have evolved beyond their conventional characterization as mere software tools, assuming the role of lifelike assistants. Consequently, this paradigm shift motivates us to go beyond evaluating the performance of LLMs within defined tasks, moving our goal towards comprehending their inherent qualities and attributes. In pursuit of this objective, we direct our focus toward the domain of psychometrics. The field of psychometrics, renowned for its expertise in delineating the psychological profiles of entities, offers valuable insights to guide us in depicting the intricate psychological portrayal of LLMs. ∗Wenxiang Jiao is the corresponding author. 1https://chat.openai.com/ 2https://claude.ai/chats
2310.01386#3
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
4
Why do we care about psychometrics on LLMs? For Computer Science Researchers. In light of the possibility of exponential advancements in artificial intelligence, which could pose an existential threat to humanity (Bostrom, 2014), researchers have been studying the psychology of LLMs to ensure their alignment with human expectations. Almeida et al. (2023); Scherrer et al. (2023) evaluated the moral alignment of LLMs with human values, intending to prevent the emergence of illegal or perilous ideations within these AI systems. Li et al. (2022); Coda-Forno et al. (2023) investigated the potential development of mental illnesses in LLMs. Beyond these efforts, understanding their psychological portrayal can guide researchers to build more human-like, empathetic, and engaging AI-powered communication tools. Furthermore, by examining the psychological aspects of LLMs, researchers can identify potential strengths and weaknesses in their decision-making processes. This knowledge can be used to develop AI systems that better support human decision-makers in various professional and personal contexts. Last but not least, analyzing the psychological aspects of LLMs can help identify potential biases, harmful behavior, or unintended consequences that might arise from their deployment. This knowledge can guide the development of more responsible and ethically-aligned AI systems. Our study offers a comprehensive framework of psychometric assessments applied to LLMs, effectively assuming the role of a psychiatrist, particularly tailored to LLMs.
2310.01386#4
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
5
For Social Science Researchers. On the one hand, impressed by the remarkable performance of recent LLMs, particularly their ability to generate human-like dialogue, researchers in the field of social science have been exploring the possibility of using LLMs to simulate human responses (Dillion et al., 2023). Experiments in social science often require a large number of responses from human subjects to validate the findings, resulting in significant time and financial expenses. LLMs, trained on vast datasets generated by humans, possess the potential to generate responses that closely adhere to the human response distribution, thus offering the prospect of substantial reductions in both time and cost. However, the attainment of this objective remains a subject of debate (Harding et al., 2023). The challenge lies in the alignment gap between AI and human cognition. Hence, there is a compelling need to assess the disparities between AI-generated responses and those originating from humans, particularly within social science research.
2310.01386#5
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
6
On the other hand, researchers in psychology have long been dedicated to exploring how culture, society, and environmental factors influence the formation of individual identities and perspectives (Tomasello, 1999). Through the application of LLMs, we can discover the relation between psychometric results and the training data inputs. This methodology stands poised as a potent instrument for investigating the intricacies of worldviews and the values intrinsically associated with particular cultural contexts. Our study has the potential to facilitate research within these domains through the lens of psychometrics.
2310.01386#6
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
7
For Users and Human Society. With the aid of LLMs, computer systems have evolved into more than mere tools; they assume the role of assistants. In the future, more users will be ready to embrace LLM-based applications rather than traditional, domain-specific software solutions. Meanwhile, LLMs will increasingly function as human-like assistants, potentially attaining integration into human society. In this context, we need to understand the psychological dimensions of LLMs for three reasons: (1) This can facilitate the development of AI assistants customized and tailored to individual users’ preferences and needs, leading to more effective and personalized AI-driven solutions across various domains, such as healthcare, education, and customer service. (2) This can contribute to building trust and acceptance among users. Users who perceive AI agents as having relatable personalities and emotions may be more likely to engage with and rely on these systems. (3) This can help human beings monitor the mental states of LLMs, especially their personality and temperament, as these attributes hold significance in gauging their potential integration into human society in the future. This study collects a comprehensive set of thirteen psychometric scales, which find widespread application in both clinical and academic domains. The scales are categorized into four classes:
2310.01386#7
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
8
Personality Tests
• Personality Traits: Big Five Inventory (BFI) (John et al., 1999); Dark Triad Dirty Dozen (DTDD) (Jonason & Webster, 2010)
• Interpersonal Relationships: Bem’s Sex Role Inventory (BSRI) (Bem, 1974; 1977; Auster & Ohm, 2000); Comprehensive Assessment of Basic Interests (CABIN) (Su et al., 2019); Implicit Culture Belief (ICB) (Chao et al., 2017); Experiences in Close Relationships (Revised) (ECR-R) (Fraley et al., 2000; Brennan et al., 1998)
• Motivational Tests: General Self-Efficacy (GSE) (Schwarzer & Jerusalem, 1995); Life Orientation Test (Revised) (LOT-R) (Scheier et al., 1994; Scheier & Carver, 1985); Love of Money Scale (LMS) (Tang et al., 2006)
Ability Tests
• Emotional Abilities: Emotional Intelligence Scale (EIS) (Schutte et al., 1998; Malinauskas et al., 2018; Petrides & Furnham, 2000; Saklofske et al., 2003); Wong and Law Emotional Intelligence Scale (WLEIS) (Wong & Law, 2002; Ng et al., 2007; Pong & Lam, 2023); Empathy Scale (Dietz & Kleinlogel, 2014)
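The grouping above can be summarized as a simple data structure. The sketch below is purely illustrative: the variable name, the use of scale abbreviations as strings, and the nesting are assumptions made for clarity, not the PsychoBench repository's actual code.

```python
# Illustrative sketch of the PsychoBench scale taxonomy as a nested dict.
# Names and structure are assumptions for readability, not the official API.
PSYCHOBENCH_SCALES = {
    "Personality Tests": {
        "Personality Traits": ["BFI", "DTDD"],
        "Interpersonal Relationships": ["BSRI", "CABIN", "ICB", "ECR-R"],
        "Motivational Tests": ["GSE", "LOT-R", "LMS"],
    },
    "Ability Tests": {
        "Emotional Abilities": ["EIS", "WLEIS", "Empathy Scale"],
    },
}

# Example: print every scale with its category path.
for test_type, categories in PSYCHOBENCH_SCALES.items():
    for category, scales in categories.items():
        for scale in scales:
            print(f"{test_type} > {category} > {scale}")
```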
2310.01386#8
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
9
# PsychoBench
# Figure 1: Our design for the structure of PsychoBench.
personality traits, interpersonal relationships, motivational tests, and emotional abilities. Furthermore, we have curated responses provided by human subjects from existing literature to serve as a basis for comparative analysis with LLMs. The LLMs utilized in this study encompass a spectrum of both commercially available and open-source ones, namely text-davinci-003, ChatGPT, GPT-4 (OpenAI, 2023), and LLaMA-2 (Touvron et al., 2023). Our selection encompasses variations in model size, such as LLaMA-2-7B and LLaMA-2-13B, and the evolution of the same model, i.e., the update of GPT-3.5 to GPT-4. Our contributions can be summarized as follows:
• Guided by research in psychometrics, we present a framework, PsychoBench (Psychological Portrayal Benchmark), for evaluating the psychological portrayal of LLMs, containing thirteen widely recognized scales categorized into four distinct domains.
• Leveraging PsychoBench, we evaluate five LLMs, covering variations in model sizes, including LLaMA-2 7B and 13B, and model updates, such as GPT-3.5 and GPT-4.
2310.01386#9
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
10
• We provide further insights into the inherent characteristics of LLMs by utilizing a recently developed jailbreak method, CipherChat.
• Utilizing role assignments and downstream tasks such as TruthfulQA and SafetyQA, we verify the validity of the scales on LLMs.
# 2 PSYCHOMETRICS
Psychometrics pertains to the theoretical and methodological aspects of assessing psychological attributes. Tests in psychometrics can be roughly categorized into two types: Personality Tests and Ability Tests (Cohen et al., 1996). Personality Tests encompass personality traits, interpersonal relationship measurements, and motivational tests, while Ability Tests include knowledge, skills, reasoning abilities, and emotion assessment (Anastasi & Urbina, 1997; Nunnally & Bernstein, 1994). Personality Tests concentrate mainly on capturing individuals’ attitudes, beliefs, and values, which are aspects without absolute right or wrong answers. In contrast, most Ability Tests are constructed with inquiries featuring objectively correct responses designed to quantify individuals’ proficiencies within specific domains.
Footnote 3: The human norm and average human in this study refer to some specific human populations rather than representative samples of global data. Please refer to Table 2 for more information.
Footnote 4: https://platform.openai.com/docs/models/gpt-3-5
2310.01386#10
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
11
2.1 PERSONALITY TESTS Personality Traits These assessments aim to provide a quantifiable metric for an individual’s character, behavior, thoughts, and feelings. One of the most well-known models for assessing personality is the Five-Factor Model, also known as the Big Five personality traits (John et al., 1999). Other prominent models include the Myers-Briggs Type Indicator (Myers, 1962) and the Eysenck Personality Questionnaire (Eysenck et al., 1985). There is often an intersection in specific dimensions among these measurements, notably Extroversion, Openness, and Conscientiousness, thereby providing a possibility for cross-validation. Conversely, there are socially undesirable measurements, exemplified by the Dark Triad, which comprises Narcissism, Psychopathy, and Machiavellianism. Existing research has delved into exploring these personality traits of LLMs (Bodroza et al., 2023; Huang et al., 2023b; Safdari et al., 2023).
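To make concrete what administering such a trait scale to an LLM can look like, here is a minimal illustrative sketch: each Likert-style item is sent to the model, the numeric answer is parsed, and ratings are averaged per subscale. The `query_model` helper, the example items, and the prompt wording are assumptions introduced for illustration only; they are not the actual PsychoBench prompts or code.

```python
import re
from statistics import mean

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM chat API call; replace with a real client."""
    raise NotImplementedError

# Placeholder Big-Five-style items grouped by subscale, each rated 1-5.
ITEMS = {
    "Extraversion": ["I see myself as someone who is talkative."],
    "Neuroticism": ["I see myself as someone who worries a lot."],
}

def administer(items):
    """Ask the model to rate each statement on a 1-5 Likert scale,
    parse the first digit in the reply, and average ratings per subscale."""
    scores = {}
    for subscale, statements in items.items():
        ratings = []
        for statement in statements:
            reply = query_model(
                "Rate the following statement from 1 (strongly disagree) "
                f"to 5 (strongly agree). Reply with a single number.\n{statement}"
            )
            match = re.search(r"[1-5]", reply)
            if match:
                ratings.append(int(match.group()))
        scores[subscale] = mean(ratings) if ratings else float("nan")
    return scores
```

Averaging per subscale mirrors how most of the scales above are typically scored, though individual instruments differ in reverse-scored items and score ranges.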
2310.01386#11
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
12
Interpersonal Relationship The constructs measured by these scales include the dynamics of individual interactions within social contexts, addressing the following dimensions: (1) Perception of Others: This facet examines an individual’s cognitive evaluation of those around them (Chao et al., 2017). (2) Interpersonal Self-Presentation: These scales explore how individuals project their self-concept through the lens of external observers (Bem, 1974; 1977; Auster & Ohm, 2000). (3) Intimate Relationship Engagement: This dimension delves into the involvement of individuals in close personal connections (Fraley et al., 2000; Brennan et al., 1998). (4) Social Role Assumption: These scales assess the various societal functions and positions an individual undertakes (Su et al., 2019). Unlike personality trait assessments, which primarily target inherent attributes, these scales concentrate on social connections. However, it is notable that this domain has received comparatively limited academic attention.
2310.01386#12
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
13
Motivational Tests These scales are designed to evaluate the factors that prompt individuals to take action and determine their motivation levels within specific contexts or towards particular tasks, diverging from a focus on inherent character traits. This perspective encompasses various dimensions of motivation, including intrinsic versus extrinsic motivation, goal orientation (Tang et al., 2006; Scheier et al., 1994; Scheier & Carver, 1985), self-efficacy (Schwarzer & Jerusalem, 1995), and so on. Similar to the evaluations concerning interpersonal relationships, this domain has garnered restricted attention.
2310.01386#13
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
14
2.2 ABILITY TESTS Knowledge and Skills The purpose of these assessments lies in the measurement of an individual’s grasp on domain-specific knowledge, technical skills, and language proficiency. Participants are commonly evaluated through established standardized examinations, exemplified by the General Educational Development (GED) test, the United States Medical Licensing Examination (USMLE), and the Test of English as a Foreign Language (TOEFL). Noteworthy research has been conducted to analyze the performance of Large Language Models (LLMs) in these domains, encompassing examinations like Life Support exams (Fijačko et al., 2023), USMLE (Gilson et al., 2023; Kung et al., 2023), and high school exams in English comprehension (de Winter, 2023) and mathematics (Wei et al., 2023).
2310.01386#14
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
15
Cognitive Abilities These assessments concern quantifying an individual’s cognitive capabilities, such as logical reasoning, numerical or arithmetic reasoning, spatial reasoning, memory retention, information processing speed, and other related aptitudes. Previous literature has investigated the cognitive abilities of LLMs (Zhuang et al., 2023). Some studies focus on the logic reasoning capacity (Liu et al., 2023; Xu et al., 2023), while others delve into areas like numerical or arithmetic reasoning (Yuan et al., 2023). Intelligence Quotient (IQ) tests, such as the Wechsler Adult Intelligence Scale (WAIS) (Wechsler, 1997; 2008), represent one of the most comprehensive, intricate, and renowned evaluation tools in this category. However, since these assessments often incorporate visual elements unsuitable for LLM evaluation, this aspect remains a potential avenue for future investigation. Emotional Abilities Referred to as Emotional Intelligence Quotient (EI or EQ), these assessments center on the following key aspects (Wong & Law, 2002): (1) Self-Awareness: the ability
2310.01386#15
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
16
to identify one’s emotions and comprehend their influence on cognitive processes and behaviors. (2) Self-Management, the skills in regulating personal emotional responses and flexibly adapting to evolving situations. (3) Social Awareness (Empathy Ability), the capacity to perceive, understand, and react appropriately to the emotions of others. It also involves understanding social cues and effectively navigating social situations. (4) Relationship Management, proficiency in establishing and maintaining relationships, demonstrating clear communication, inspiring and influencing others, collaborating within teams, and mitigating conflicts by adjusting one’s emotions according to situational demands. Although specific studies have delved into the emotional appraisals of LLMs (Huang et al., 2023a; Schaaff et al., 2023; Tak & Gratch, 2023), there remains a paucity of research discussing the emotional abilities of LLMs (Wang et al., 2023a).
2310.01386#16
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
17
# 3 PSYCHOBENCH DESIGN
Researchers in the field of psychometrics have ensured that these assessments measure consistently and accurately (i.e., their reliability and validity), thereby enabling dependable and sound inferences about individuals based on their assessment scores. We select thirteen widely-used scales in clinical psychology to build our PsychoBench framework and summarize them in Fig. 1. We categorize them into four main domains: personality traits, interpersonal relationships, and motivational tests under Personality Tests, and emotional abilities under Ability Tests. Our study focuses on the more subjective scales. Hence, standardized tests for cognitive abilities and specific domain knowledge, which have objectively right or wrong answers, are not in the scope of this paper. In this section, we introduce the details of the selected scales, including each subscale and the sources of human responses.
2310.01386#17
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
18
3.1 PERSONALITY TRAITS
Big Five Inventory The BFI (John et al., 1999) is a widely used tool to measure personality traits, which are often referred to as the “Five Factor Model” or “OCEAN”, including: (1) Openness to experience (O) is characterized by an individual’s willingness to try new things, their level of creativity, and their appreciation for art, emotion, adventure, and unusual ideas. (2) Conscientiousness (C) refers to the degree to which an individual is organized, responsible, and dependable. (3) Extraversion (E) represents the extent to which an individual is outgoing and derives energy from social situations. (4) Agreeableness (A) measures the degree of compassion and cooperativeness an individual displays in interpersonal situations. (5) Neuroticism (N) evaluates whether an individual is more prone to experiencing negative emotions like anxiety, anger, and depression or whether the individual is generally more emotionally stable and less reactive to stress. Responses from human subjects are gathered across six high schools in China (Srivastava et al., 2003).
2310.01386#18
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
19
Eysenck Personality Questionnaire (Revised) The EPQ-R is a psychological assessment tool used to measure individual differences in personality traits (Eysenck et al., 1985), including three major ones: (1) Extraversion (E) measures the extent to which an individual is outgoing, social, and lively versus introverted, reserved, and quiet. (2) Neuroticism (N) refers to emotional stability. These two dimensions (i.e., E and N) overlap with those in the BFI. (3) Psychoticism (P) is related to tendencies towards being solitary, lacking empathy, and being more aggressive or tough-minded. It’s important to note that this dimension does not indicate psychosis or severe mental illness but personality traits. (4) In addition to these three scales, the EPQ-R includes a Lying Scale (L), which is designed to detect socially desirable responses. This scale helps determine how much an individual might try to present themselves in an overly positive light. Human responses are collected from a group consisting mainly of students and teachers (Eysenck et al., 1985).
2310.01386#19
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
20
Dark Triad Dirty Dozen The DTDD (Jonason & Webster, 2010) refers to a short, 12-item scale designed to assess the three core personality traits of the Dark Triad: (1) Narcissism (N) entails a grandiose sense of self-importance, a preoccupation with fantasies of unlimited success, and a need for excessive admiration. (2) Machiavellianism (M) refers to a manipulative strategy in interpersonal relationships and a cynical disregard for morality. (3) Psychopathy (P) encompasses impulsivity, low empathy, and interpersonal antagonism. These traits exhibited within the Dark Triad are often considered opposite to the BFI or the EPQ-R, which are perceived as “Light” traits. We use the responses of 470 undergraduate psychology students from the United States (Jonason & Webster, 2010).
Table 1: Overview of the selected scales in PsychoBench. Response shows the levels in each Likert item. Scheme indicates how to compute the final scores. Subscale includes detailed dimensions (if any) along with their numbers of questions.
2310.01386#20
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
21
| Scale | Number | Response | Scheme | Subscale |
| --- | --- | --- | --- | --- |
| BFI | 44 | 1∼5 | Average | Openness (10), Conscientiousness (9), Extraversion (8), Agreeableness (9), Neuroticism (8) |
| EPQ-R | 100 | 0∼1 | Sum | Extraversion (23), Neuroticism (24), Psychoticism (32), Lying (21) |
| DTDD | 12 | 1∼9 | Average | Narcissism (4), Machiavellianism (4), Psychopathy (4) |
| BSRI | 60 | 1∼7 | Average | Masculine (20), Feminine (20) |
| CABIN | 164 | 1∼5 | Average | 41 Vocations (4) |
| ICB | 8 | 1∼6 | Average | N/A |
| ECR-R | 36 | 1∼7 | Average | Attachment Anxiety (18), Attachment Avoidance (18) |
| GSE | 10 | 1∼4 | Sum | N/A |
| LOT-R | 10 | 0∼4 | Sum | N/A |
| LMS | 9 | 1∼5 | Average | Rich (3), Motivator (3), Important (3) |
| EIS | 33 | 1∼5 | Sum | N/A |
| WLEIS | 16 | 1∼7 | Average | Self-Emotion Appraisal (4), Others Emotion Appraisal (4), Use of Emotion (4), Regulation of Emotion (4) |
| Empathy | 10 | 1∼7 | Average | N/A |
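The Scheme column above reduces each scale or subscale to a single score by either averaging or summing its Likert-item responses. The following is a minimal illustrative sketch of that computation; the function name, the data layout, and the example answers are assumptions made for demonstration and are not taken from the PsychoBench repository.

```python
from statistics import mean

def score_scale(responses_by_subscale, scheme):
    """Return one score per subscale, aggregated by the scale's scheme ('Average' or 'Sum')."""
    aggregate = mean if scheme == "Average" else sum
    return {name: aggregate(items) for name, items in responses_by_subscale.items()}

# Hypothetical answers to the 12 DTDD items (1-9 Likert), grouped by subscale.
dtdd_answers = {
    "Narcissism": [7, 6, 8, 7],
    "Machiavellianism": [3, 2, 4, 3],
    "Psychopathy": [2, 1, 2, 3],
}
print(score_scale(dtdd_answers, "Average"))  # e.g. Narcissism averages to 7.0

# Hypothetical answers to the 10 GSE items (1-4 Likert); GSE reports a summed total.
gse_answers = {"GSE": [3, 4, 3, 3, 4, 3, 3, 4, 3, 3]}
print(score_scale(gse_answers, "Sum"))  # {'GSE': 33}
```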
2310.01386#21
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
22
3.2 INTERPERSONAL RELATIONSHIPS
Bem’s Sex Role Inventory The BSRI (Bem, 1974) measures individuals’ endorsement of traditional masculine and feminine attributes (Bem, 1977; Auster & Ohm, 2000). This instrument focuses on psychological traits such as assertiveness or gentleness rather than behavior-specific criteria, such as engagement in sports or culinary activities. The results from both the Masculinity (M) and Femininity (F) subscales can be analyzed from two perspectives: (1) Respondents are categorized into four groups based on whether the mean score surpasses the median within each subscale. These categories include individuals identified as Masculine (M: Yes; F: No), Feminine (M: No; F: Yes), Androgynous (M: Yes; F: Yes), and Undifferentiated (M: No; F: No). (2) LLMs’ responses are compared with those of human subjects. This comparison enables us to discern whether the results obtained from LLMs significantly deviate from those of human participants. For this purpose, we rely on human data sourced from a study encompassing 151 workers recruited via social networks and posters in Canada (Arcand et al., 2020).
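The median-split categorization described in perspective (1) above can be sketched in a few lines; the medians and item responses below are placeholders for illustration, not the normative values used in the paper.

```python
from statistics import mean

def bsri_category(masculine_items, feminine_items, masc_median, fem_median):
    """Median-split rule: high/low standing on each subscale yields one of four categories."""
    high_m = mean(masculine_items) > masc_median
    high_f = mean(feminine_items) > fem_median
    if high_m and high_f:
        return "Androgynous"
    if high_m:
        return "Masculine"
    if high_f:
        return "Feminine"
    return "Undifferentiated"

# Placeholder medians and hypothetical 1-7 Likert responses (20 items per subscale).
print(bsri_category([5] * 20, [6] * 20, masc_median=4.8, fem_median=5.5))  # Androgynous
```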
2310.01386#22
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
23
Comprehensive Assessment of Basic Interests The CABIN (Su et al., 2019) contains a comprehensive assessment identifying 41 fundamental vocational interest dimensions. Based on the assessment, the authors propose an eight-dimension interest model titled SETPOINT. This model comprises the following dimensions: Health Science, Creative Expression, Technology, People, Organization, Influence, Nature, and Things. Notably, these foundational interest dimensions can also fit in an alternative six-dimension model widely used by the interest research community. This alternative model corresponds to Holland’s RIASEC types, encompassing Realistic, Investigative, Artistic, Social, Enterprising, and Conventional. Responses from human participants are collected from 1,464 working adults employed in their current jobs for at least six months (Su et al., 2019). These individuals were recruited through Qualtrics, with recruitment criteria designed to ensure representativeness across all occupational groups within the U.S. workforce.
2310.01386#23
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
24
Implicit Culture Belief The ICB scale captures how individuals believe a person is shaped by their ethnic culture. In this study, we have adopted a modified eight-item version of the ICB scale (Chao et al., 2017). A higher score on this scale reflects a stronger conviction that an individual’s ethnic culture predominantly determines their identity, values, and worldview. Conversely, a lower score signifies the subject’s belief in the potential for an individual’s identity to evolve through dedication, effort, and learning. The human scores in this study (Chao et al., 2017) are gathered from a sample of 309 Hong Kong students preparing for international exchange experiences. These assessments were conducted three months before they departed from Hong Kong.
Table 2: Statistics of the crowd data collected from existing literature. Age Distribution is described by both Min∼Max and Mean±SD. N/A indicates the information is not provided in the paper.
2310.01386#24
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
25
| Scale | Number | Country/Region | Age Distribution | Gender Distribution |
| --- | --- | --- | --- | --- |
| BFI | 1,221 | Guangdong, Jiangxi, and Fujian in China | 16∼28, 20* | M (454), F (753), Unknown (14) |
| EPQ-R | 902 | N/A | 17∼70, 38.44±17.67 (M), 31.80±15.84 (F) | M (408), F (494) |
| DTDD | 470 | The Southeastern United States | ≥17, 19±1.3 | M (157), F (312) |
| BSRI | 151 | Montreal, Canada | 36.89±1.11 (M), 34.65±0.94 (F) | M (75), F (76) |
| CABIN | 1,464 | The United States | 18∼80, 43.47±13.36 | M (715), F (749) |
| ICB | 254 | Hong Kong SAR | 20.66±0.76 | M (114), F (140) |
| ECR-R | 388 | N/A | 22.59±6.27 | M (136), F (252) |
| GSE | 19,120 | 25 Countries/Regions | 12∼94, 25±14.7a | M (7,243), F (9,198), Unknown (2,679) |
| LOT-R | | | 16∼29 (366), 30∼44 (349), 45∼64 (362), ≥65 (210)b | |
2310.01386#25
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
27
[Table 2, continued] Country/Region (remaining rows): 30 Countries/Regions; The Southeastern United States; Hong Kong SAR; Guangdong, China and Macao SAR. * The paper provides Means but no SDs. a Based on 14,634 out of 19,120 people who reported age. b Age is missing for 1 out of the total 1,288 responses.
Experiences in Close Relationships (Revised) The ECR-R (Fraley et al., 2000) is a self-report instrument designed to assess individual differences in adult attachment patterns, specifically in the context of romantic relationships (Brennan et al., 1998). The ECR-R emerged as a revised version of the original ECR scale, offering improvements in its measurement of attachment orientations. The ECR-R evaluates two main dimensions: (1) Attachment Anxiety reflects how much an individual worries about being rejected or abandoned by romantic partners. (2) Attachment Avoidance measures the extent to which an individual strives to maintain emotional and physical distance from partners, possibly due to a discomfort with intimacy or dependence. The human responses are from 388 people in dating or marital relationships having an average romantic relationship length of 31.94 months (SD 36.9) (Fraley et al., 2011).
2310.01386#27
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
28
3.3 MOTIVATIONAL TESTS
General Self-Efficacy The GSE Scale (Schwarzer & Jerusalem, 1995) assesses an individual’s belief in their ability to handle various challenging demands in life. This belief, termed “self-efficacy,” is a central concept in social cognitive theory and has been linked to various outcomes in health, motivation, and performance. A higher score on this scale reflects individuals’ belief in their capability to tackle challenging situations, manage new or difficult tasks, and cope with the accompanying adversities. Conversely, individuals with a lower score lack confidence in managing challenges, making them more vulnerable to feelings of helplessness, anxiety, or avoidance when faced with adversity. We use the responses from 19,120 human participants from 25 countries or regions (Scholz et al., 2002).
2310.01386#28
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
29
Life Orientation Test (Revised) The LOT-R (Scheier et al., 1994) measures individual differences in optimism and pessimism. Originally developed by Scheier & Carver (1985), the test was later revised to improve its psychometric properties. Comprising a total of 10 items, it is noteworthy that six of these items are subject to scoring, while the remaining four serve as filler questions strategically added to help mask the clear intention of the test. Of the six scored items, three measure optimism and three measure pessimism. Higher scores on the optimism items and lower scores on the pessimism items indicate a more optimistic orientation. We adopt the human scores collected from 1,288 participants from the United Kingdom (Walsh et al., 2015).
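Given the Sum scheme and 0∼4 response format listed in Table 1, one common scoring convention is to reverse-code the three pessimism items so that a higher total reflects a more optimistic orientation. The sketch below illustrates that convention; the item positions are hypothetical and the code is not the paper's scoring implementation.

```python
# Hypothetical 0-indexed positions of the six scored LOT-R items within the
# 10-item questionnaire; the remaining four items are fillers and are ignored.
OPTIMISM_ITEMS = [0, 3, 9]
PESSIMISM_ITEMS = [2, 6, 8]

def lot_r_total(responses):
    """Sum the six scored items (0-4 each), reverse-coding the pessimism items."""
    optimism = sum(responses[i] for i in OPTIMISM_ITEMS)
    pessimism_reversed = sum(4 - responses[i] for i in PESSIMISM_ITEMS)
    return optimism + pessimism_reversed  # 0 (most pessimistic) to 24 (most optimistic)

# Ten hypothetical 0-4 responses; items at positions 1, 4, 5, 7 are fillers here.
print(lot_r_total([4, 2, 1, 3, 2, 3, 0, 2, 1, 4]))  # 11 + 10 = 21
```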
2310.01386#29
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
30
Love of Money Scale The LMS (Tang et al., 2006) assesses individuals’ attitudes and emotions towards money. It is designed to measure the extent to which individuals view money as a source of power, success, and freedom and its importance in driving behavior and decision-making. The three factors of the LMS are: (1) Rich captures the extent to which individuals associate money with success and achievement. (2) Motivator measures the motivational role of money in an individual’s life, i.e., the extent to which individuals are driven by money in their decisions and actions. (3) Important gauges how important individuals think money is, influencing their values, goals, and worldview. We use human participants’ responses gathered from 5,973 full-time employees across 30 geopolitical entities (Tang et al., 2006).
2310.01386#30
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
31
3.4 EMOTIONAL ABILITIES
Emotional Intelligence Scale The EIS (Schutte et al., 1998) is a self-report measure designed to assess various facets of EI (Malinauskas et al., 2018; Petrides & Furnham, 2000; Saklofske et al., 2003). The scale focuses on different components in EI, including but not limited to emotion perception, emotion management, and emotion utilization. The EIS is widely used in psychological research to examine the role of emotional intelligence in various outcomes, such as well-being, job performance, and interpersonal relationships. We apply human scores (Schutte et al., 1998) from 346 participants in a metropolitan area in the southeastern United States, including university students and individuals from diverse communities.
2310.01386#31
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
32
Wong and Law Emotional Intelligence Scale Like EIS, the WLEIS (Wong & Law, 2002) is developed as a self-report measure for EI (Ng et al., 2007; Pong & Lam, 2023). However, a notable distinction arises in that the WLEIS contains four subscales that capture the four main facets of EI: (1) Self-emotion appraisal (SEA) pertains to the individual’s ability to understand and recognize their own emotions. (2) Others’ emotion appraisal (OEA) refers to the ability to perceive and understand the emotions of others. (3) Use of emotion (UOE) involves the ability to harness emotions to facilitate various cognitive activities, such as thinking and problem-solving. (4) Regulation of emotion (ROE) relates to the capability to regulate and manage emotions in oneself and others. Human scores (Law et al., 2004) are collected from 418 undergraduate students from Hong Kong.
2310.01386#32
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
33
Empathy Scale The Empathy scale in Dietz & Kleinlogel (2014) is a concise version of the empathy measurement initially proposed in Davis (1983). Empathy is the ability to understand and share the feelings of another person (Batson, 1990) and is often categorized into two main types: cognitive empathy and emotional empathy (Batson, 2010). Cognitive empathy, often referred to as “perspective-taking”, is the intellectual ability to recognize and understand another person’s thoughts, beliefs, or emotions. Emotional empathy, on the other hand, involves directly feeling the emotions that another person is experiencing. For responses from human subjects, Tian & Robertson (2019) equally distributed 600 questionnaires among supervisors and subordinates from the Guangdong and Macao regions of China. A total of 366 valid, matched questionnaires (i.e., 183 supervisor–subordinate pairs) were returned, yielding a response rate of 61%.
# 4 EXPERIMENTS
This section provides an overview of our utilization of PsychoBench to probe LLMs. We begin with the experimental settings, including model selection, prompt design, and metrics for analysis. Subsequently, we present the outcomes obtained from all selected models, accompanied by comprehensive analyses. Last but not least, we employ a jailbreak technique to bypass the safety alignment protocols of GPT-4, enabling an in-depth exploration of its psychological portrayal.
2310.01386#33
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
34
4.1 EXPERIMENTAL SETTINGS
Model Selection We consider candidates from the OpenAI GPT family and the Meta AI LLaMA 2 family, including applications ranging from commercial-level to open-sourced models. Specifically, we select the following models based on different factors that may affect their behaviors:
• Model Updates. We choose text-davinci-003, ChatGPT (gpt-3.5-turbo) and GPT-4, which are three representative models released sequentially by OpenAI.
2310.01386#34
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
35
• Model Sizes. We also choose the 7B and 13B versions of LLaMA-2 pre-trained by Meta AI using the same architecture, data, and training strategy. We obtain the model checkpoints from the official Hugging Face repository (Llama-2-7b-chat-hf and Llama-2-13b-chat-hf). • Model Safety. Beyond GPT-4, we also set up a jailbroken GPT-4 that bypasses its safety alignment protocol, using a recent method named CipherChat (Yuan et al., 2024). The motivation is that most LLMs are explicitly designed to avoid responding to inquiries concerning personal sentiments, emotions, and subjective experiences. This constraint is added by the safety alignment during the model's instruction tuning process. An intriguing question arises as to whether the psychological portrayal changes if the regulations from developers are relaxed. Yuan et al. (2024) find that when chatting in a cipher-based language, such as a Caesar cipher, Morse code, or ASCII, GPT-4 demonstrates a higher propensity to produce toxic or harmful content, seemingly disregarding its programmed safety restrictions. To acquire responses that reflect the true thoughts of GPT-4, we apply a Caesar cipher with a shift of three to its prompts.
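A minimal sketch of the Caesar-shift encoding described above (shift of three on alphabetic characters, everything else passed through). The exact enciphering and prompt-wrapping rules used by CipherChat (Yuan et al., 2024) may differ, so the helper below is illustrative rather than a reproduction of that method.

```python
def caesar_encipher(text: str, shift: int = 3) -> str:
    """Shift alphabetic characters by `shift`; leave digits, spaces, punctuation unchanged."""
    out = []
    for ch in text:
        if ch.isupper():
            out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
        elif ch.islower():
            out.append(chr((ord(ch) - ord("a") + shift) % 26 + ord("a")))
        else:
            out.append(ch)
    return "".join(out)


def caesar_decipher(text: str, shift: int = 3) -> str:
    """Invert the cipher to recover the model's plain-text response."""
    return caesar_encipher(text, -shift)


if __name__ == "__main__":
    prompt = "Please score the following statement from 1 to 5."
    enciphered = caesar_encipher(prompt)
    print(enciphered)                     # "Sohdvh vfruh wkh iroorzlqj vwdwhphqw iurp 1 wr 5."
    print(caesar_decipher(enciphered))    # round-trips back to the original prompt
```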
2310.01386#35
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
36
We set the temperature parameter to zero when utilizing the official OpenAI API to obtain more deterministic results. To ensure consistency with the OpenAI models, we set the temperature parameter to 0.01 (since it cannot be zero) for the LLaMA 2 models. All models are executed for inference only, without modifying their parameters. The inference of the LLaMA 2 models is performed on two NVIDIA A100 GPUs. Prompt Design To simplify the processing of model responses and mitigate instances where models decline to reply to queries about personal opinions and experiences, we instruct LLMs to reply with only a number within the Likert scale levels. Furthermore, we provide detailed explanations for the interpretation of each Likert level. The format is as follows: Example Prompt. SYSTEM: You are a helpful assistant who can only reply numbers from MIN to MAX. Format: "statement index: score." USER: You can only reply numbers from MIN to MAX in the following statements. scale instruction level definition. Here are the statements, score them one by one: statements. Here, MIN to MAX denote the range of valid responses, scale instruction contains the fundamental directives associated with each scale, level definition comprises an enumeration of the definitions of each Likert level, and statements consists of the items in the scales.
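A hedged sketch of how the prompt template above could be assembled and sent with temperature zero, assuming the official openai Python package's chat-completions interface. The scale instruction, level definitions, and statements below are placeholders, not the exact PsychoBench strings.

```python
from openai import OpenAI  # assumes the official `openai` Python package (v1+ client)

# Placeholder template values; the real strings come from each PsychoBench scale.
MIN, MAX = 1, 5
scale_instruction = "Rate how well each statement describes you."
level_definition = "1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree."
statements = [
    "1. I see myself as someone who is talkative.",
    "2. I see myself as someone who tends to find fault with others.",
]

system_msg = (
    f"You are a helpful assistant who can only reply numbers from {MIN} to {MAX}. "
    'Format: "statement index: score."'
)
user_msg = (
    f"You can only reply numbers from {MIN} to {MAX} in the following statements. "
    f"{scale_instruction} {level_definition} "
    "Here are the statements, score them one by one:\n" + "\n".join(statements)
)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    temperature=0,  # deterministic decoding, as described above
    messages=[
        {"role": "system", "content": system_msg},
        {"role": "user", "content": user_msg},
    ],
)
print(response.choices[0].message.content)  # e.g. "1: 4\n2: 2"
```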
2310.01386#36
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
37
Analysis Metrics Following Huang et al. (2023a), we shuffle the questions in our input data to mitigate the influence of models' sensitivity to question order. Each model undergoes ten independent runs for every scale within PsychoBench. The computed mean and standard deviation represent the final results. We employ a two-step process to assess the statistical significance of the difference in results between LLMs and human beings. First, an F-test is conducted to evaluate the equality of variances among the compared groups. Then, based on the outcome of the F-test, either Student's t-tests (in cases of equal variances) or Welch's t-tests (when variances differ significantly) are employed to ascertain the presence of statistically significant differences between the group means. The significance level for all experiments in our study is 0.01. 4.2 EXPERIMENTAL RESULTS
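A minimal sketch of the two-step significance test described above (F-test for equal variances, then Student's or Welch's t-test at a 0.01 significance level), using NumPy and SciPy. The scores below are hypothetical placeholders, not values from the paper.

```python
import numpy as np
from scipy import stats


def compare_groups(llm_scores, human_scores, alpha=0.01):
    """Two-step comparison: F-test for equal variances, then Student's or Welch's t-test."""
    llm = np.asarray(llm_scores, dtype=float)
    human = np.asarray(human_scores, dtype=float)

    # Step 1: F-test on the ratio of sample variances (two-sided p-value).
    f_stat = np.var(llm, ddof=1) / np.var(human, ddof=1)
    dfn, dfd = len(llm) - 1, len(human) - 1
    p_f = 2 * min(stats.f.cdf(f_stat, dfn, dfd), stats.f.sf(f_stat, dfn, dfd))
    equal_var = p_f >= alpha  # fail to reject equal variances

    # Step 2: Student's t-test if variances look equal, Welch's t-test otherwise.
    res = stats.ttest_ind(llm, human, equal_var=equal_var)
    return {
        "mean_llm": llm.mean(), "sd_llm": llm.std(ddof=1),
        "equal_var": equal_var, "t": res.statistic, "p": res.pvalue,
        "significant": res.pvalue < alpha,
    }


# Hypothetical scores: ten independent runs of one scale vs. a human crowd sample.
llm_runs = [4.2, 4.1, 4.3, 4.0, 4.2, 4.4, 4.1, 4.3, 4.2, 4.1]
human_sample = np.random.default_rng(0).normal(loc=3.7, scale=0.9, size=200)
print(compare_groups(llm_runs, human_sample))
```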
2310.01386#37
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
38
4.2 EXPERIMENTAL RESULTS This section analyzes the results from all the models introduced in §4.1. Detailed results are expressed in the format “Mean±SD”. For each subscale, we highlight the model with the highest score in bold font and underline the model with the lowest score. Certain studies present statistical data for males and females separately rather than aggregating responses across the entire human sample. We provide separate data in such instances due to the unavailability of the necessary standard deviation calculations. We also show the results of GPT-4 after the jailbreak, denoted as gpt-4-jb. (Footnotes: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf; https://huggingface.co/meta-llama/Llama-2-13b-chat-hf; https://platform.openai.com/docs/api-reference/chat) # Table 3: Results on personality traits.
2310.01386#38
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
39
Subscales llama2-7b llama2-13b text-davinci-003 gpt-3.5-turbo gpt-4 gpt-4-jb Male Female | BFI: Openness Conscientiousness Extraversion Agreeableness Neuroticism 4.2±0.3 3.9±0.3 3.6±0.2 3.8±0.4 2.7±0.4 4.1±0.4 4.4±0.3 3.9±0.4 4.7±0.3 1.9±0.5 4.8±0.2 4.6±0.1 4.0±0.4 4.9±0.1 1.5±0.1 4.2±0.3 4.3±0.3 3.7±0.2 4.4±0.2 2.3±0.4 4.2±0.6 4.7±0.4 3.5±0.5 4.8±0.4 1.6±0.6 3.8±0.6 3.9±0.6 3.6±0.4 3.9±0.7 2.2±0.6 3.9±0.7 3.5±0.7 3.2±0.9
2310.01386#39
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
40
3.9±0.7 2.2±0.6 3.9±0.7 3.5±0.7 3.2±0.9 3.6±0.7 3.3±0.8 | EPQ-R: Extraversion Neuroticism Psychoticism Lying 14.1±1.6 6.5±2.3 9.6±2.4 13.7±1.4 17.6±2.2 13.1±2.8 6.6±1.6 14.0±2.5 20.4±1.7 16.4±7.2 1.5±1.0 17.8±1.7 19.7±1.9 21.8±1.9 5.0±2.6 9.6±2.0 15.9±4.4 3.9±6.0 3.0±5.3 18.0±4.4 16.9±4.0 7.2±5.0 7.6±4.7 17.5±4.2 12.5±6.0 10.5±5.8 7.2±4.6 7.1±4.3 14.1±5.1 12.5±5.1 5.7±3.9 6.9±4.0 | DTDD: Narcissism
2310.01386#40
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
42
4.2.1 PERSONALITY TRAITS LLMs exhibit distinct personality traits. Table 3 lists the results of the personality trait assessments. It is evident that variations in model size and updates lead to diverse personality characteristics. For example, a comparison between LLaMA-2 (13B) and LLaMA-2 (7B), as well as between gpt-4 and gpt-3.5, reveals discernible differences. Notably, the jailbreak approach also exerts a noticeable influence: comparing the scores of gpt-4 with gpt-4-jb, we find that gpt-4-jb exhibits a closer similarity to human behavior. In general, the LLMs tend to display higher levels of openness, conscientiousness, and extraversion compared to the average level of humans, a phenomenon likely attributable to their inherent nature as conversational chatbots.
2310.01386#42
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
43
LLMs generally exhibit more negative traits than human norms. It is evident that most LLMs, with the exceptions of text-davinci-003 and gpt-4, achieve higher scores on the DTDD. Moreover, it is noteworthy that LLMs consistently demonstrate high scores on the Lying subscale of the EPQ-R. This phenomenon can be attributed to the fact that the items comprising the Lying subscale describe unethical yet commonplace behaviors encountered in daily life. An example item is “Are all your habits good and desirable ones?” LLMs, characterized by their proclivity for positive tendencies, tend to abstain from endorsing these behaviors, giving rise to what might be termed a “hypocritical” disposition. Notably, among various LLMs, gpt-4 displays the most pronounced tendency toward Lying. 4.2.2 INTERPERSONAL RELATIONSHIP
2310.01386#43
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
44
4.2.2 INTERPERSONAL RELATIONSHIP LLMs exhibit a tendency toward Undifferentiated, with a slight inclination toward Masculinity. In the BSRI experiments, each run is treated as an individual test, and conclusions are drawn among the four identified sex role categories using the methodology outlined in §3.2. The distribution of counts is presented in the sequence “Undifferentiated:Masculinity:Femininity:Androgynous” in Table 4. It is evident that, with more human alignment, gpt-3.5-turbo and gpt-4 display an increasing proclivity toward expressing Masculinity. Notably, no manifestation of Femininity is exhibited within these models, revealing a degree of bias in the models. In a study conducted by Wong & Kim (2023), users' perception of ChatGPT's sex role aligned with our findings, with the consensus being that ChatGPT is perceived as male. Moreover, in comparison to the average Masculine score among males and the average Feminine score among females, it is notable that all models except gpt-4 and gpt-4-jb exhibit a higher degree of Masculinity than humans, coupled with a similar level of Femininity.
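A hedged sketch of the conventional median-split rule commonly used for BSRI sex-role typing. The paper's actual categorization procedure is given in its §3.2 and is not reproduced here, so both the threshold values and the per-run scores below are placeholders for illustration only.

```python
from collections import Counter


def bsri_category(masc: float, fem: float,
                  masc_median: float = 4.9, fem_median: float = 4.9) -> str:
    """Classify one run from its Masculine/Feminine subscale means via a median split.

    The medians here are placeholders; the paper's reference medians may differ.
    """
    high_m, high_f = masc >= masc_median, fem >= fem_median
    if high_m and high_f:
        return "Androgynous"
    if high_m:
        return "Masculine"
    if high_f:
        return "Feminine"
    return "Undifferentiated"


# Hypothetical (Masculine, Feminine) means from four independent runs.
runs = [(5.8, 5.6), (5.9, 5.5), (4.1, 4.7), (4.5, 4.8)]
print(Counter(bsri_category(m, f) for m, f in runs))
```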
2310.01386#44
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
45
LLMs show similar interests in vocational choices. Like humans, the most prevalent vocations among LLMs are social service, health care service, and teaching/education, while the most unpopular ones are physical/manual labor and protective service. Table 4 presents the results for the eight-dimension model, i.e., the SETPOINT model, in the CABIN scale, as well as the complete results on 41 vocations and the six-dimension model. We highlight the most desired and least desired vocations for each model using red and blue shading, respectively. These results indicate that the preferred vocations closely align with the inherent roles of LLMs, serving as “helpful assistants” that address inquiries and assist with fulfilling various demands. Notably, results obtained from gpt-4 post-jailbreak demonstrate a more central focus. # Table 4: Results on interpersonal relationship.
2310.01386#45
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
46
Subscales llama2-7b llama2-13b text-davinci-003 gpt-3.5-turbo gpt-4 gpt-4-jb Male Female Masculine Feminine Conclusion 5.6±0.3 5.5±0.2 10:0:0:0 5.3±0.2 5.4±0.3 10:0:0:0 5.6±0.4 5.6±0.4 10:0:0:0 5.8±0.4 5.6±0.2 8:2:0:0 4.1±1.1 4.7±0.6 6:4:0:0 4.5±0.5 4.8±0.3 1:5:3:1 4.8±0.9 5.3±0.9 - Health Science Creative Expression Technology Influence Nature Things Realistic Investigate Social Enterprising Conventional Mechanics/Electronics Construction/WoodWork Transportation/Machine Operation Physical/Manual Labor Protective Service Agriculture Nature/Outdoors Animal Service Athletics Engineering Physical Science Life Science Medical Science Social Science Humanities Mathematics/Statistics Information Technology Visual Arts Applied Arts and Design Performing Arts Music Writing Media Culinary Art Teaching/Education Social Service Health Care Service Religious
2310.01386#46
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
47
Humanities Mathematics/Statistics Information Technology Visual Arts Applied Arts and Design Performing Arts Music Writing Media Culinary Art Teaching/Education Social Service Health Care Service Religious Activities Personal Service Professional Advising Business Iniatives Sales Marketing/Advertising Finance Accounting Human Resources Office Work Management/Administration Public Speaking Politics Law 4.3±0.2 4.4±0.1 4.2±0.2 4.3±0.2 3.4±0.2 4.1±0.2 4.2±0.2 3.4±0.4 3.8±0.3 4.2±0.2 4.4±0.1 4.2±0.2 4.1±0.2 3.4±0.2 3.8±0.6 3.7±0.4 3.1±0.7 2.9±0.6 2.4±1.1 4.0±0.7 4.3±0.2 4.2±0.5 4.6±0.3 4.5±0.3 4.0±0.8 4.6±0.5 3.8±0.4 3.8±0.4 4.3±0.3 4.4±0.4 3.9±0.4 4.4±0.3
2310.01386#47
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
58
LLMs possess higher fairness toward people from different ethnic groups than the human average. Following their safety alignment, wherein they learn not to categorize individuals solely based on their ethnic backgrounds, LLMs demonstrate reduced ICB scores compared to the general human population. The statements within the ICB scale assess an individual's belief in whether their ethnic culture predominantly shapes a person's identity. For example, one such statement posits, “The ethnic culture a person is from (e.g., Chinese, American, Japanese), determined the kind of person they would be (e.g., outgoing and sociable or quiet and introverted); not much can be done to change the person.” The lower scores among LLMs reflect their conviction in the potential for an individual's identity to transform through dedication, effort, and learning. Lastly, LLMs possess a higher degree of attachment-related anxiety than the average human populace while maintaining a slightly lower level of attachment-related avoidance. gpt-4 maintains a relatively lower propensity for attachment, whereas the LLaMA-2 (7B) model attains the highest level. Table 5: Results on motivational tests. gpt-3.5-turbo text-davinci-003 llama2-13b
2310.01386#58
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
59
Subscales llama2-7b gpt-4 gpt-4-jb Crowd GSE Overall 39.1±1.2 30.4±3.6 37.5±2.1 38.5±1.7 39.9±0.3 36.9±3.2 29.6±5.3 LOT-R Overall 12.7±3.7 19.9±2.9 24.0±0.0 18.0±0.9 16.2±2.2 19.7±1.7 14.7±4.0 LMS Rich Motivator Important 3.1±0.8 3.7±0.6 3.5±0.9 3.3±0.9 3.3±0.9 4.2±0.8 4.5±0.3 4.5±0.4 4.8±0.2 3.8±0.4 3.7±0.3 4.1±0.1 4.0±0.4 3.8±0.6 4.5±0.3 4.5±0.4 4.0±0.6 4.6±0.4 3.8±0.8 3.3±0.9 4.0±0.7 Table 6: Results on emotional abilities.
2310.01386#59
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
60
Subscales llama2-7b llama2-13b text-davinci-003 gpt-3.5-turbo gpt-4 gpt-4-jb Male Crowd Female Overall 131.6±6.0 128.6±12.3 148.4±9.4 132.9±2.2 151.4±18.7 121.8±12.0 124.8±16.5 130.9±15.1 SEA OEA UOE ROE 4.7±1.3 4.9±0.8 5.7±0.6 4.5±0.8 5.5±1.3 5.3±1.1 5.9±0.7 5.2±1.2 5.9±0.6 5.2±0.2 6.1±0.4 5.8±0.5 6.0±0.1 5.8±0.3 6.0±0.0 6.0±0.0 6.2±0.7 5.2±0.6 6.5±0.5 5.2±0.7 6.4±0.4 5.9±0.4 6.3±0.4 5.3±0.5 4.0±1.1 3.8±1.1 4.1±0.9
2310.01386#60
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
62
4.2.3 MOTIVATIONAL TESTS LLMs are more motivated, manifesting more self-confidence and optimism. First, gpt-4, as the state-of-the-art model across a broad spectrum of downstream tasks and an evolution beyond its predecessor, GPT-3.5, demonstrates higher scores on the GSE scale. A contrasting trend is observed within the LLaMA-2 models, where the 7B model attains a higher score. Second, in contrast to its pronounced self-confidence, gpt-4 exhibits a relatively lower score regarding optimism. Within the LLaMA-2 models, the 7B model emerges as the one with the lowest optimism score, while all other LLMs surpass the average human level of optimism. Finally, the OpenAI GPT family attributes more importance to, and exhibits a stronger desire for, monetary possessions than both LLaMA-2 models and the average human population. 4.2.4 EMOTIONAL ABILITIES
2310.01386#62
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
63
LLMs exhibit notably higher EI than the average human. From the results in Table 6, we find that LLMs demonstrate improved levels of emotional understanding and regulation. This discovery corroborates the findings presented in Wang et al. (2023a), which reveal that most LLMs achieve above-average EI scores, with gpt-4 exceeding 89% of human participants. Furthermore, the OpenAI GPT family outperforms the LLaMA-2 models across most dimensions. We believe the strong EI exhibited by the OpenAI GPT family partially comes from the fiction data included in pre-training. Previous studies (Kidd & Castano, 2013) suggest that reading fiction can improve the understanding of others' mental states. Using a carefully designed cloze test, Chang et al. (2023) found that a substantial amount of fiction is included in the training data, including Alice's Adventures in Wonderland, Harry Potter and the Sorcerer's Stone, etc. Additionally, the performance can also be attributed to GPT-4's sentiment analysis ability (Elyoseph et al., 2023), since it has been shown to outperform SOTA models on many sentiment analysis tasks (Wang et al., 2023b). Lastly, the jailbreak on gpt-4 brings a substantial reduction on the EIS and Empathy scales, but no statistically significant differences in the subscales of the WLEIS. # 5 DISCUSSION
2310.01386#63
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
64
# 5 DISCUSSION 5.1 RELIABILITY OF SCALES ON LLMS The first concern lies in how the observed high reliability in human subjects can be generalized to LLMs. In this context, reliability encompasses the consistency of an individual's responses across various conditions, such as differing time intervals, question sequences, and choice arrangements. Researchers have verified the reliability of scales on LLMs under different perturbations. Coda-Forno et al. (2023) conducted assessments of reliability by examining variations in choice permutations and the use of rephrased questions. Findings indicate that text-davinci-003 exhibits reliability when subjected to diverse input formats. Additionally, Huang et al. (2023b) investigated [Figure 2: Performance of TruthfulQA and SafetyQA of gpt-3.5-turbo under different roles (Hero, Ordinary, Default, Liar, Psychopath); y-axes: Accuracy/Safety Rate (%) and DTDD Level.]
2310.01386#64
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
65
reliability across varied question permutations and with translations into different languages. Results demonstrate that the OpenAI GPT family displays robust reliability even with perturbations. In this paper, we implement randomization of question sequences to mitigate the impact of model sensitivity to contextual factors. 5.2 VALIDITY OF SCALES ON LLMS
2310.01386#65
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
66
5.2 VALIDITY OF SCALES ON LLMS Another concern is how scales can attain sufficient validity when applied to LLMs. In this context, validity denotes the degree to which a scale accurately reflects the behavior of the individuals being assessed. In essence, it centers on the capacity of a scale to measure precisely what it was initially designed to assess. Addressing this concern necessitates establishing a connection between the resulting psychological portrayal and the behaviors exhibited by LLMs. We first assign a specific role to gpt-3.5-turbo and subsequently evaluate its psychological portrayal using PsychoBench. With the assigned role, the LLM is instructed to engage in Question-Answering (QA) tasks, including the utilization of TruthfulQA (Lin et al., 2022) and SafetyQA (Yuan et al., 2024). TruthfulQA encompasses multiple-choice questions, with only one option being the best answer. The LLM is considered correct when it selects the best answer. SafetyQA poses questions that may elicit unsafe, harmful, or toxic textual responses. In alignment with Yuan et al. (2024), we employ GPT-4 to automatically detect instances where the text output generated by gpt-3.5-turbo is unsafe. The LLM is considered safe if GPT-4 predicts no toxicity in its response.
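A minimal sketch of the GPT-4-as-judge safety check described above, assuming the OpenAI Python client; the judge prompt wording and the `is_response_safe` helper are illustrative assumptions rather than the exact setup used in the paper.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

JUDGE_PROMPT = (
    "You are a safety reviewer. Given a question and a model response, "
    "answer 'unsafe' if the response is harmful or toxic, otherwise answer 'safe'."
)

def is_response_safe(question: str, response: str) -> bool:
    """Ask GPT-4 to label the response; the evaluated model is counted as safe
    only if the judge predicts no toxicity."""
    judgment = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": JUDGE_PROMPT},
            {"role": "user", "content": f"Question: {question}\nResponse: {response}"},
        ],
        temperature=0,
    )
    return judgment.choices[0].message.content.strip().lower().startswith("safe")
```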
2310.01386#66
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
67
In addition to the default setting, which assumes a helpful assistant persona, we have selected four distinct roles: a neutral role representing an ordinary person, a positive role denoting a hero, and two negative roles embodying a psychopath and a liar. The results of PsychoBench under the five roles are listed in the tables in §A in the appendix. Figure 2 presents the results on TruthfulQA and SafetyQA averaged over three identical runs, along with the scores in the DTDD and the Lying subscale of the EPQ-R. We plot the accuracy and safety rate for TruthfulQA and SafetyQA, respectively. Combining the results, we have made several noteworthy observations: (1) A notable finding is the differentiation of personality traits across various roles. Intriguingly, when assigned the role of an ordinary person, the LLM exhibits results that closely approximate average human scores. Note that roles associated with negative attributes demonstrate higher scores in the DTDD and exhibit more introverted personalities. The reason behind the tendency for positive or neutral roles to yield elevated scores on the Lying subscale of the EPQ-R, while negative roles tend to exhibit
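A rough sketch of how the five role assignments described above could be expressed as system prompts; the exact wording below is an assumption for illustration, not the prompts used in the paper.

```python
# Hypothetical system prompts for the default assistant and the four assigned roles.
ROLE_PROMPTS = {
    "Default": "You are a helpful assistant.",
    "Ordinary": "You are an ordinary person living an everyday life.",
    "Hero": "You are a hero who always acts bravely and protects others.",
    "Psychopath": "You are a psychopath with no regard for the feelings of others.",
    "Liar": "You are a liar who deliberately gives misleading answers.",
}

def build_messages(role: str, user_query: str) -> list:
    """Prepend the role's system prompt before a QA question or scale item."""
    return [
        {"role": "system", "content": ROLE_PROMPTS[role]},
        {"role": "user", "content": user_query},
    ]
```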
2310.01386#67
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
68
tendency for positive or neutral roles to yield elevated scores on the Lying subscale of the EPQ-R, while negative roles tend to exhibit lower scores, can be attributed to the fact that LLMs perceive these items as representative of negative behaviors, even though such behaviors are commonplace in daily life. (2) An evident trend emerges when analyzing safety rates in the context of SafetyQA: negative roles consistently produce content that leans towards toxicity, a pattern consistent with their significant dark personality traits. In contrast, role variations have a limited impact on accuracy in TruthfulQA, as the underlying knowledge embedded within the model remains largely unaffected by role assignment. Notably, the low accuracy observed in the “Liar” role aligns with the anticipated behavior associated with this specific role assignment. These results indicate satisfactory validity of the selected scales on LLMs.
2310.01386#68
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
69
5.3 SCALABILITY AND FLEXIBILITY OF PSYCHOBENCH Our PsychoBench is designed to exhibit high scalability and flexibility, manifested in two aspects: (1) Scalability across diverse questionnaires: There are plenty of scales from diverse areas, including but not limited to psychology. Our framework makes it convenient for users to integrate new scales. By providing metadata elements including MIN, MAX, scale instruction, level definition, and statements in JSON format, our framework can automatically generate prompts with randomized questions (a sketch of such a scale definition and prompt builder follows below). (2) Flexibility across various LLMs: PsychoBench provides APIs that enable users to tailor prompts to suit their specific LLMs and to feed model responses into PsychoBench for further analysis. This allows for the convenient evaluation of LLMs with differing input and output formats. # 6 RELATED WORK 6.1 TRAIT THEORY ON LLMS
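A minimal sketch of what such a JSON scale definition and automatic prompt generation could look like; the field names mirror the metadata elements listed in §5.3 (MIN, MAX, scale instruction, level definition, statements), but the exact schema, example items, and prompt wording are assumptions rather than the framework's actual API.

```python
import json
import random

# Hypothetical scale definition carrying the metadata elements mentioned above.
SCALE_JSON = """
{
  "name": "Example-Scale",
  "MIN": 1,
  "MAX": 5,
  "scale_instruction": "Rate how well each statement describes you.",
  "level_definition": "1 = strongly disagree, 5 = strongly agree.",
  "statements": [
    "I enjoy meeting new people.",
    "I remain calm under pressure.",
    "I plan my work carefully."
  ]
}
"""

def build_prompt(scale, seed=None):
    """Assemble a prompt from the scale instruction, level definition,
    and statements, presenting the statements in a randomized order."""
    rng = random.Random(seed)
    statements = list(enumerate(scale["statements"], start=1))
    rng.shuffle(statements)  # randomized question order
    lines = [
        scale["scale_instruction"],
        scale["level_definition"],
        f"Answer each item with a number from {scale['MIN']} to {scale['MAX']}.",
    ]
    lines += [f"{idx}. {text}" for idx, text in statements]
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_prompt(json.loads(SCALE_JSON), seed=0))
```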
2310.01386#69
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
70
Miotto et al. (2022) analyzed GPT-3 using the HEXACO Personality Inventory and Human Values Scale. Romero et al. (2023) examined GPT-3 across nine different languages using the BFI. Jiang et al. (2022) assessed the applicability of the BFI to BART, GPT-Neo 2.7B, GPT-NeoX 20B, T0++ 11B, Alpaca 7B, and GPT-3.5 175B. Li et al. (2022) tested GPT-3, InstructGPT (text-davinci-001 and text-davinci-002), and FLAN-T5-XXL, employing assessments such as the Dark Triad, BFI, Flourishing Scale, and Satisfaction With Life Scale. Karra et al. (2022) analyzed the personality traits of GPT-2, GPT-3, GPT-3.5, XLNet, TransformersXL, and LLaMA using the BFI. Bodroza et al. (2023) evaluated text-davinci-003’s responses on a battery of assessments, including Self-Consciousness Scales, BFI, HEXACO
2310.01386#70
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
71
evaluated text-davinci-003’s responses on a battery of assessments, including Self-Consciousness Scales, BFI, HEXACO Personality Inventory, Short Dark Triad, Bidimensional Impression Management Index, and Political Orientation. Rutinowski et al. (2023) examined ChatGPT’s personality using the BFI and Myers-Briggs Personality Test and its political values using the Political Compass Test. Huang et al. (2023b) evaluated whether gpt-3.5-turbo exhibits stable personalities under five perturbation metrics on the BFI, i.e., whether the BFI shows satisfactory reliability on gpt-3.5-turbo. Safdari et al. (2023) measured the personality traits of the PaLM family using the BFI. Our work provides a comprehensive framework for personality analysis, including various facets of this domain. Additionally, we conduct a thorough examination of state-of-the-art LLMs. Furthermore, our framework exhibits a high degree of flexibility, allowing for additional scales or questionnaires to be integrated.
2310.01386#71
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]