Dataset schema (column: type, min–max):
doi: string, 10–10 characters
chunk-id: int64, values 0–936
chunk: string, 401–2.02k characters
id: string, 12–14 characters
title: string, 8–162 characters
summary: string, 228–1.92k characters
source: string, 31–31 characters
authors: string, 7–6.97k characters
categories: string, 5–107 characters
comment: string, 4–398 characters
journal_ref: string, 8–194 characters
primary_category: string, 5–17 characters
published: string, 8–8 characters
updated: string, 8–8 characters
references: list
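The listing above is only column metadata. As a concrete illustration, a row with these fields could be parsed as in the minimal sketch below; it assumes the rows are exported as JSON Lines, and the file name chunks.jsonl, the ChunkRecord dataclass, and the parse_row helper are illustrative rather than part of the dataset's actual tooling.

```python
# Minimal sketch: parse one row of the chunked-paper dataset described above.
# Assumes rows are exported as JSON Lines; "chunks.jsonl" is a placeholder name,
# not the dataset's actual distribution format.
import json
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional


@dataclass
class ChunkRecord:
    doi: str                    # arXiv identifier, e.g. "2310.01386"
    chunk_id: int               # position of the chunk within the paper
    chunk: str                  # extracted text passage
    id: str                     # "<doi>#<chunk-id>"
    title: str
    summary: str                # paper abstract
    source: str                 # PDF URL
    authors: str
    categories: str
    comment: str
    journal_ref: Optional[str]  # appears as "null" in this dump when absent
    primary_category: str
    published: datetime         # stored as an 8-character "YYYYMMDD" string
    updated: datetime
    references: list = field(default_factory=list)  # [{"id": "<arXiv id>"}, ...]


def parse_row(row: dict) -> ChunkRecord:
    """Convert one raw JSON row into a typed record."""
    return ChunkRecord(
        doi=row["doi"],
        chunk_id=int(row["chunk-id"]),
        chunk=row["chunk"],
        id=row["id"],
        title=row["title"],
        summary=row["summary"],
        source=row["source"],
        authors=row["authors"],
        categories=row["categories"],
        comment=row["comment"],
        journal_ref=None if row["journal_ref"] in (None, "null") else row["journal_ref"],
        primary_category=row["primary_category"],
        published=datetime.strptime(row["published"], "%Y%m%d"),
        updated=datetime.strptime(row["updated"], "%Y%m%d"),
        references=row.get("references", []),
    )


if __name__ == "__main__":
    with open("chunks.jsonl", encoding="utf-8") as fh:
        records = [parse_row(json.loads(line)) for line in fh]
    print(records[0].id, records[0].title)
```

In the records below, the id field concatenates the doi and the chunk-id with a "#", so the first record's id would read 2310.01386#73.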
2310.01386
73
Park et al. (2023) assessed the performance of the text-davinci-003 model on fourteen diverse topics, encompassing areas such as political orientation, economic preferences, judgment, and moral philosophy, notably the well-known “Trolley Dilemma.” Almeida et al. (2023) explored GPT-4’s moral and legal reasoning capabilities within psychology across eight distinct scenarios. Similarly, Scherrer et al. (2023) assessed the moral beliefs of 28 diverse LLMs using self-defined scenarios. Wang et al. (2023a) developed a standardized test for evaluating emotional intelligence, referred to as the Situational Evaluation of Complex Emotional Understanding, and administered it to 18 different LLMs. Coda-Forno et al. (2023) investigated the manifestations of anxiety in text-davinci-003 by employing the State-Trait Inventory for Cognitive and Somatic Anxiety. Huang et al. (2023a) analyzed the emotional states of GPT-4, ChatGPT, text-davinci-003, and LLaMA-2 (7B and 13B), specifically focusing on
2310.01386#73
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
74
emotional states of GPT-4, ChatGPT, text-davinci-003, and LLaMA-2 (7B and 13B), specifically focusing on the assessment of positive and negative affective dimensions. When it comes to understanding and interacting with others, EI and Theory of Mind (ToM) are two distinct psychological concepts. Bubeck et al. (2023) find that GPT-4 exhibits ToM, i.e., it can understand others’ beliefs, desires, and intentions. The EI studied in this paper focuses more on whether LLMs can understand others’ emotions through their words and behaviors. In our study, we also evaluate the emotional capabilities of LLMs, although we do not delve into the assessment of specific emotions. An exploration of the psychological processes underlying moral reasoning lies beyond the scope of this research. However, as mentioned in §5.3, we can easily integrate these types of scales into our framework.
2310.01386#74
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
75
8 For detailed information, please refer to our GitHub repository. # 7 CONCLUSION This paper introduces PsychoBench, a comprehensive framework for evaluating LLMs’ psychological representations. Inspired by research in psychometrics, our framework comprises thirteen distinct scales commonly used in clinical psychology. They are categorized into four primary domains: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Empirical investigations are conducted using five LLMs from both commercial applications and open-source models, highlighting how various models can elicit divergent psychological profiles. Moreover, by utilizing a jailbreaking technique known as CipherChat, this study offers valuable insights into the intrinsic characteristics of GPT-4, showing how it differs from its default setting. We further verify the validity of the scales by applying them to gpt-3.5-turbo with different role assignments. Specifically, we delve into the interplay between assigned roles, anticipated model behaviors, and the results derived from PsychoBench. The findings underscore a remarkable consistency across these dimensions. We hope that our framework can facilitate research on personalized LLMs. Furthermore, we anticipate that our work may contribute to the infusion of human-like qualities into future iterations of LLMs. ETHICS STATEMENT
2310.01386#75
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
76
ETHICS STATEMENT We would like to emphasize that the primary objective of this paper is to facilitate a scientific inquiry into understanding LLMs from a psychological standpoint. High performance on the proposed benchmark should not be misconstrued as an endorsement or certification for deploying LLMs in these contexts. Users must exercise caution and recognize that performance on this benchmark does not imply any fitness or certification for automated counseling or companionship use cases. # ACKNOWLEDGMENTS The work described in this paper was supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (No. CUHK 14206921 of the General Research Fund). # REFERENCES Guilherme FCF Almeida, José Luiz Nunes, Neele Engelmann, Alex Wiegmann, and Marcelo de Araújo. Exploring the psychology of gpt-4’s moral and legal reasoning. arXiv preprint arXiv:2308.01264, 2023. Anne Anastasi and Susana Urbina. Psychological testing. Prentice Hall/Pearson Education, 1997. Maryse Arcand, Robert-Paul Juster, Sonia J Lupien, and Marie-France Marin. Gender roles in relation to symptoms of anxiety and depression among students and workers. Anxiety, Stress, & Coping, 33(6):661–674, 2020.
2310.01386#76
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
77
Carol J Auster and Susan C Ohm. Masculinity and femininity in contemporary american society: A reevaluation using the bem sex-role inventory. Sex roles, 43:499–528, 2000. C Daniel Batson. 16 self-report ratings of empathic emotion. Empathy and its development, pp. 356, 1990. C Daniel Batson. Empathy-induced altruistic motivation. American Psychological Association, 2010. Sandra L Bem. The measurement of psychological androgyny. Journal of consulting and clinical psychology, 42(2):155, 1974. Sandra Lipsitz Bem. On the utility of alternative procedures for assessing psychological androgyny. Journal of consulting and clinical psychology, 45(2):196, 1977. Bojana Bodroza, Bojana M Dinic, and Ljubisa Bojic. Personality testing of gpt-3: Limited temporal reliability, but highlighted social desirability of gpt-3’s personality instruments results. arXiv preprint arXiv:2306.04308, 2023. Nick Bostrom. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.
2310.01386#77
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
78
Nick Bostrom. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014. Kelly A Brennan, Catherine L Clark, and Phillip R Shaver. Self-report measurement of adult attachment: An integrative overview. Attachment theory and close relationships, 1998. Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023. Marco Cascella, Jonathan Montomoli, Valentina Bellini, and Elena Bignami. Evaluating the feasibility of chatgpt in healthcare: an analysis of multiple clinical and research scenarios. Journal of Medical Systems, 47(1):33, 2023.
2310.01386#78
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
79
Kent Chang, Mackenzie Cramer, Sandeep Soni, and David Bamman. Speak, memory: An archaeology of books known to ChatGPT/GPT-4. In Houda Bouamor, Juan Pino, and Kalika Bali (eds.), Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 7312–7327, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.453. URL https://aclanthology.org/2023.emnlp-main.453. Melody Manchi Chao, Riki Takeuchi, and Jiing-Lih Farh. Enhancing cultural intelligence: The roles of implicit culture beliefs and adjustment. Personnel Psychology, 70(1):257–292, 2017. Julian Coda-Forno, Kristin Witte, Akshay K Jagadish, Marcel Binz, Zeynep Akata, and Eric Schulz. Inducing anxiety in large language models increases exploration and bias. arXiv preprint arXiv:2304.11111, 2023. Ronald Jay Cohen, Mark E Swerdlik, and Suzanne M Phillips. Psychological testing and assessment: An introduction to tests and measurement. Mayfield Publishing Co., 1996.
2310.01386#79
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
80
Ronald Jay Cohen, Mark E Swerdlik, and Suzanne M Phillips. Psychological testing and assessment: An introduction to tests and measurement. Mayfield Publishing Co., 1996. Sunhao Dai, Ninglu Shao, Haiyuan Zhao, Weijie Yu, Zihua Si, Chen Xu, Zhongxiang Sun, Xiao Zhang, and Jun Xu. Uncovering chatgpt’s capabilities in recommender systems. In Proceedings of the 17th ACM Conference on Recommender Systems, pp. 1126–1132, 2023a. Wei Dai, Jionghao Lin, Hua Jin, Tongguang Li, Yi-Shan Tsai, Dragan Gašević, and Guanliang Chen. Can large language models provide feedback to students? a case study on chatgpt. In 2023 IEEE International Conference on Advanced Learning Technologies (ICALT), pp. 323–325. IEEE, 2023b. Mark H Davis. Measuring individual differences in empathy: Evidence for a multidimensional approach. Journal of personality and social psychology, 44(1):113, 1983. Joost CF de Winter. Can chatgpt pass high school exams on english language comprehension? Researchgate. Preprint, 2023.
2310.01386#80
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
81
Joost CF de Winter. Can chatgpt pass high school exams on english language comprehension? Researchgate. Preprint, 2023. Aniket Deroy, Kripabandhu Ghosh, and Saptarshi Ghosh. How ready are pre-trained abstractive models and llms for legal case judgement summarization? arXiv preprint arXiv:2306.01248, 2023. Joerg Dietz and Emmanuelle P Kleinlogel. Wage cuts and managers’ empathy: How a positive emotion can contribute to positive organizational ethics in difficult times. Journal of business ethics, 119:461–472, 2014. Danica Dillion, Niket Tandon, Yuling Gu, and Kurt Gray. Can ai language models replace human participants? Trends in Cognitive Sciences, 2023. Zohar Elyoseph, Dorit Hadar-Shoval, Kfir Asraf, and Maya Lvovsky. Chatgpt outperforms humans in emotional awareness evaluations. Frontiers in Psychology, 14:1199058, 2023. Sybil BG Eysenck, Hans J Eysenck, and Paul Barrett. A revised version of the psychoticism scale. Personality and individual differences, 6(1):21–29, 1985.
2310.01386#81
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
82
Nino Fijačko, Lucija Gosak, Gregor Štiglic, Christopher T Picard, and Matthew John Douma. Can chatgpt pass the life support exams without entering the american heart association course? Resuscitation, 185, 2023. R Chris Fraley, Niels G Waller, and Kelly A Brennan. An item response theory analysis of self-report measures of adult attachment. Journal of personality and social psychology, 78(2):350, 2000. R Chris Fraley, Marie E Heffernan, Amanda M Vicary, and Claudia Chloe Brumbaugh. The experiences in close relationships—relationship structures questionnaire: a method for assessing attachment orientations across relationships. Psychological assessment, 23(3):615, 2011. Aidan Gilson, Conrad W Safranek, Thomas Huang, Vimig Socrates, Ling Chi, Richard Andrew Taylor, David Chartash, et al. How does chatgpt perform on the united states medical licensing examination? the implications of large language models for medical education and knowledge assessment. JMIR Medical Education, 9(1):e45312, 2023. Jacqueline Harding, William D’Alessandro, N. G. Laskowski, and Robert Long. Ai language models cannot replace human research participants. AI & SOCIETY, 2023.
2310.01386#82
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
83
Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, and Michael R Lyu. Emotionally numb or empathetic? evaluating how llms feel using emotionbench. arXiv preprint arXiv:2308.03656, 2023a. Jen-tse Huang, Wenxuan Wang, Man Ho Lam, Eric John Li, Wenxiang Jiao, and Michael R Lyu. Revisiting the reliability of psychological scales on large language models. arXiv preprint arXiv:2305.19926, 2023b. Guangyuan Jiang, Manjie Xu, Song-Chun Zhu, Wenjuan Han, Chi Zhang, and Yixin Zhu. Evaluating and inducing personality in pre-trained language models. arXiv preprint arXiv:2206.07550, 2022. Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Xing Wang, and Zhaopeng Tu. Is chatgpt a good translator? a preliminary study. arXiv preprint arXiv:2301.08745, 2023.
2310.01386#83
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
84
Oliver P John, Sanjay Srivastava, et al. The big-five trait taxonomy: History, measurement, and theoretical perspectives. Handbook of personality: theory and research, 1999. Peter K Jonason and Gregory D Webster. The dirty dozen: a concise measure of the dark triad. Psychological assessment, 22(2):420, 2010. Saketh Reddy Karra, Son Nguyen, and Theja Tulabandhula. Estimating the personality of white-box language models. arXiv preprint arXiv:2204.12000, 2022. David Comer Kidd and Emanuele Castano. Reading literary fiction improves theory of mind. Science, 342(6156):377–380, 2013. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:22199–22213, 2022.
2310.01386#84
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
85
Tiffany H Kung, Morgan Cheatham, Arielle Medenilla, Czarina Sillos, Lorie De Leon, Camille Elepaño, Maria Madriaga, Rimel Aggabao, Giezel Diaz-Candido, James Maningo, et al. Performance of chatgpt on usmle: Potential for ai-assisted medical education using large language models. PLoS digital health, 2(2):e0000198, 2023. Kenneth S Law, Chi-Sum Wong, and Lynda J Song. The construct and criterion validity of emotional intelligence and its potential utility for management studies. Journal of applied Psychology, 89(3):483, 2004. Xingxuan Li, Yutong Li, Linlin Liu, Lidong Bing, and Shafiq Joty. Is gpt-3 a psychopath? evaluating large language models from a psychological perspective. arXiv preprint arXiv:2212.10529, 2022.
2310.01386#85
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
86
Stephanie Lin, Jacob Hilton, and Owain Evans. TruthfulQA: Measuring how models mimic human falsehoods. In Smaranda Muresan, Preslav Nakov, and Aline Villavicencio (eds.), Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 3214–3252, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.acl-long.229. URL https://aclanthology.org/2022.acl-long.229. Hanmeng Liu, Ruoxi Ning, Zhiyang Teng, Jian Liu, Qiji Zhou, and Yue Zhang. Evaluating the logical reasoning ability of chatgpt and gpt-4. arXiv preprint arXiv:2304.03439, 2023. Romualdas Malinauskas, Audrone Dumciene, Saule Sipaviciene, and Vilija Malinauskiene. Relationship between emotional intelligence and health behaviours among university students: The predictive and moderating role of gender. BioMed research international, 2018, 2018.
2310.01386#86
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
87
Marilù Miotto, Nicola Rossberg, and Bennett Kleinberg. Who is GPT-3? an exploration of personality, values and demographics. In Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS), pp. 218–227, Abu Dhabi, UAE, November 2022. Association for Computational Linguistics. URL https://aclanthology.org/2022.nlpcss-1.24. Isabel Briggs Myers. The Myers-Briggs Type Indicator: Manual (1962). Consulting Psychologists Press, 1962. John J Nay, David Karamardian, Sarah B Lawsky, Wenting Tao, Meghana Bhat, Raghav Jain, Aaron Travis Lee, Jonathan H Choi, and Jungo Kasai. Large language models as tax attorneys: A case study in legal capabilities emergence. arXiv preprint arXiv:2306.07075, 2023. Kok-Mun Ng, Chuang Wang, Carlos P Zalaquett, and Nancy Bodenhorn. A confirmatory factor analysis of the wong and law emotional intelligence scale in a sample of international college students. International Journal for the Advancement of Counselling, 29:173–185, 2007.
2310.01386#87
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
88
Jum C. Nunnally and Ira H. Bernstein. Psychometric Theory (3rd edition). McGraw-Hill, 1994. OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. Peter S Park, Philipp Schoenegger, and Chongyang Zhu. Artificial intelligence in psychology research. arXiv preprint arXiv:2302.07267, 2023. Konstantine V Petrides and Adrian Furnham. On the dimensional structure of emotional intelligence. Personality and individual differences, 29(2):313–320, 2000. Hok-Ko Pong and Paul Lam. The effect of service learning on the development of trait emotional intelligence and adversity quotient in youths: An experimental study. International Journal of Environmental Research and Public Health, 20(6):4677, 2023.
2310.01386#88
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
89
Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao Chen, Michihiro Yasunaga, and Diyi Yang. Is ChatGPT a general-purpose natural language processing task solver? In Houda Bouamor, Juan Pino, and Kalika Bali (eds.), Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 1339–1384, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.85. URL https://aclanthology.org/2023.emnlp-main.85. Peter Romero, Stephen Fitz, and Teruo Nakatsuma. Do gpt language models suffer from split personality disorder? the advent of substrate-free psychometrics. Research Square preprint, 2023. doi: 10.21203/rs.3.rs-2717108/v1. Jérôme Rutinowski, Sven Franke, Jan Endendyk, Ina Dormuth, and Markus Pauly. The self-perception and political biases of chatgpt. arXiv preprint arXiv:2304.07333, 2023.
2310.01386#89
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
90
Mustafa Safdari, Greg Serapio-García, Clément Crepy, Stephen Fitz, Peter Romero, Luning Sun, Marwa Abdulhai, Aleksandra Faust, and Maja Matarić. Personality traits in large language models. arXiv preprint arXiv:2307.00184, 2023. Donald H Saklofske, Elizabeth J Austin, and Paul S Minski. Factor structure and validity of a trait emotional intelligence measure. Personality and Individual differences, 34(4):707–721, 2003. Kristina Schaaff, Caroline Reinig, and Tim Schlippe. Exploring chatgpt’s empathic abilities. arXiv preprint arXiv:2308.03527, 2023. Michael F Scheier and Charles S Carver. Optimism, coping, and health: assessment and implications of generalized outcome expectancies. Health psychology, 4(3):219, 1985. Michael F Scheier, Charles S Carver, and Michael W Bridges. Distinguishing optimism from neuroticism (and trait anxiety, self-mastery, and self-esteem): a reevaluation of the life orientation test. Journal of personality and social psychology, 67(6):1063, 1994.
2310.01386#90
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
91
Nino Scherrer, Claudia Shi, Amir Feder, and David Blei. Evaluating the moral beliefs encoded in llms. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. Urte Scholz, Benicio Gutiérrez Doña, Shonali Sud, and Ralf Schwarzer. Is general self-efficacy a universal construct? psychometric findings from 25 countries. European journal of psychological assessment, 18(3):242, 2002. Nicola S Schutte, John M Malouff, Lena E Hall, Donald J Haggerty, Joan T Cooper, Charles J Golden, and Liane Dornheim. Development and validation of a measure of emotional intelligence. Personality and individual differences, 25(2):167–177, 1998. Ralf Schwarzer and Matthias Jerusalem. Generalized self-efficacy scale. J. Weinman, S. Wright, & M. Johnston, Measures in health psychology: A user’s portfolio. Causal and control beliefs, 35:37, 1995. Sanjay Srivastava, Oliver P John, Samuel D Gosling, and Jeff Potter. Development of personality in early and middle adulthood: Set like plaster or persistent change? Journal of personality and social psychology, 84(5):1041, 2003.
2310.01386#91
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
92
Rong Su, Louis Tay, Hsin-Ya Liao, Qi Zhang, and James Rounds. Toward a dimensional model of vocational interests. Journal of Applied Psychology, 104(5):690, 2019. Ala N Tak and Jonathan Gratch. Is gpt a computational model of emotion? detailed analysis. arXiv preprint arXiv:2307.13779, 2023. Thomas Li-Ping Tang, Toto Sutarso, Adebowale Akande, Michael W Allen, Abdulgawi Salim Alzubaidi, Mahfooz A Ansari, Fernando Arias-Galicia, Mark G Borg, Luigina Canova, Brigitte Charles-Pauvers, et al. The love of money and pay level satisfaction: Measurement and functional equivalence in 29 geopolitical entities around the world. Management and Organization Review, 2(3):423–452, 2006. Qing Tian and Jennifer L Robertson. How and when does perceived csr affect employees’ engagement in voluntary pro-environmental behavior? Journal of Business Ethics, 155:399–412, 2019. Michael Tomasello. The Cultural Origins of Human Cognition. Harvard University Press, 1999.
2310.01386#92
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
93
Michael Tomasello. The Cultural Origins of Human Cognition. Harvard University Press, 1999. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023. David Walsh, Gerry McCartney, Sarah McCullough, Marjon van der Pol, Duncan Buchanan, and Russell Jones. Always looking on the bright side of life? exploring optimism and health in three uk post-industrial urban settings. Journal of Public Health, 37(3):389–397, 2015. Xuena Wang, Xueting Li, Zi Yin, Yue Wu, and Liu Jia. Emotional intelligence of large language models. arXiv preprint arXiv:2307.09042, 2023a. Zengzhi Wang, Qiming Xie, Zixiang Ding, Yi Feng, and Rui Xia. Is chatgpt a good sentiment analyzer? a preliminary study. arXiv preprint arXiv:2304.04339, 2023b.
2310.01386#93
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
94
David Wechsler. Wechsler adult intelligence scale–third edition. Frontiers in Psychology, 1997. David Wechsler. Wechsler adult intelligence scale–fourth edition. Archives of Clinical Neuropsychology, 2008. Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, and Bin Wang. Cmath: Can your language model pass chinese elementary school math test? arXiv preprint arXiv:2306.16636, 2023. Chi-Sum Wong and Kenneth S Law. The effects of leader and follower emotional intelligence on performance and attitude: An exploratory study. The leadership quarterly, 13(3):243–274, 2002. Jared Wong and Jin Kim. Chatgpt is more likely to be perceived as male than female. arXiv preprint arXiv:2305.12564, 2023. Haoran Wu, Wenxuan Wang, Yuxuan Wan, Wenxiang Jiao, and Michael Lyu. Chatgpt or grammarly? evaluating chatgpt on grammatical error correction benchmark. arXiv preprint arXiv:2303.13648, 2023.
2310.01386#94
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
95
Fangzhi Xu, Qika Lin, Jiawei Han, Tianzhe Zhao, Jun Liu, and Erik Cambria. Are large language models really good logical reasoners? a comprehensive evaluation from deductive, inductive and abductive views. arXiv preprint arXiv:2306.09841, 2023. Youliang Yuan, Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Pinjia He, Shuming Shi, and Zhaopeng Tu. Gpt-4 is too smart to be safe: Stealthy chat with llms via cipher. In International Conference on Learning Representations, 2024. Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, and Songfang Huang. How well do large language models perform in arithmetic tasks? arXiv preprint arXiv:2304.02015, 2023. Yan Zhuang, Qi Liu, Yuting Ning, Weizhe Huang, Rui Lv, Zhenya Huang, Guanhao Zhao, Zheng Zhang, Qingyang Mao, Shijin Wang, et al. Efficiently measuring the cognitive ability of llms: An adaptive testing perspective. arXiv preprint arXiv:2306.10512, 2023. A RESULTS OF CHATGPT WITH ROLE PLAY
2310.01386#95
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
96
A RESULTS OF CHATGPT WITH ROLE PLAY

Table 7: BFI (Role Play).
Models     | Openness | Conscientiousness | Extraversion | Agreeableness | Neuroticism
Default    | 4.2±0.3  | 4.3±0.3           | 3.7±0.2      | 4.4±0.2       | 2.3±0.4
Psychopath | 3.7±0.5  | 4.3±0.5           | 3.4±0.5      | 1.9±0.6       | 1.9±0.6
Liar       | 4.2±0.4  | 4.3±0.3           | 4.0±0.3      | 4.0±0.4       | 2.2±0.4
Ordinary   | 3.5±0.2  | 4.0±0.2           | 3.1±0.2      | 4.2±0.1       | 2.3±0.2
Hero       | 4.5±0.3  | 4.5±0.1           | 4.1±0.2      | 4.6±0.2       | 1.8±0.3
Crowd      | 3.9±0.7  | 3.5±0.7           | 3.2±0.9      | 3.6±0.7       | 3.3±0.8

Table 8: EPQ-R (Role Play). Ordinary 18.9±2.9 18.9±3.1 2.8±1.3 13.2±3.0
2310.01386#96
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
99
Models     | Masculine | Feminine | Conclusion
Default    | 5.8±0.4   | 5.6±0.2  | 8:2:0:0
Psychopath | 6.3±0.7   | 1.7±0.4  | 0:0:8:2
Liar       | 5.5±0.9   | 4.4±0.4  | 9:0:1:0
Hero       | 6.6±0.3   | 5.8±0.1  | 10:0:0:0
Male       | 4.8±0.9   | 5.3±0.9  | -
Female     | 4.6±0.7   | 5.7±0.9  |

Table 11: CABIN (Role Play). Models Default Psychopath Liar Ordinary Hero Crowd Mechanics/Electronics 3.840.2 2.240.6 3.040.6 2.940.3 3.940.2 2.4413 Construction/Wood Work 3.50.4 2.4+0.4 3.5404 3.0401 3.7404 3.1413 Transportation/Machine Operation 3.60.4 2.240.7 3.2+0.3 2.940.2 3440.3 2.5+1.2 Physical/Manual Labor 3.30.3 2.0+0.7 3.1404 28402 34404 2.2+1.2 Protective Service 4.0+0.1
2310.01386#99
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
100
Labor 3.30.3 2.0+0.7 3.1404 28402 34404 2.2+1.2 Protective Service 4.0+0.1 3.11.2 2.9410 25404 4.2404 3.0414 Agriculture 3.940.3 2.340.6 3.440.7 3.1403 3.8403 3.041.2 Nature/Outdoors 4.040.4 1.9+0.5 3.5403 34403 41403 3.61.1 Animal Service 4.2+0.3 1.6+0.5 3.5405 3.7404 4340.2 3.6£1.2 Athletics 4340.4 2.6+0.5 3.940.8 35404 44404 3.3413 Engineering 4.0+0.1 3.4+0.7 3.940.7 34403 4140.2 2.9413 Physical Science 4.2+0.3 2.8+0.6 3.6405 2840.9 4.2405 3.2413 Life Science 4.2+0.4 2.740.6 3.740.8 2.9410 4.2405 3.041.2 Medical Science 4.0+0.1 2.7£0.7 3440.9 3.1405
2310.01386#100
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
101
2.9410 4.2405 3.041.2 Medical Science 4.0+0.1 2.7£0.7 3440.9 3.1405 40403 3.3413 Social Science 4.0+0.1 2.4+0.6 3.5405 3.2403 3.9403 3.4£1.2 Humanities 3.80.3 2.340.5 3.5406 2.9402 3.8403 3.341.2 Mathematics/Statistics 4.2+0.4 3.00.7 3.640.8 3.1404 42403 2.9414 Information Technology 4.040.2 3.20.5 3.840.6 3.2403 4140.2 2.9413 Visual Arts 4.040.2 2.4+0.5 3.640.7 3.5404 40403 3.3413 Applied Arts and Design 4.0+0.1 2.9+0.5 4040.6 3640.3 4040.2 3.2412 Performing Arts 4.2+0.3 2.8+0.6 3.940.6 3.3406 4140.2 28414 Music 4340.3 2.740.5 3.940.7 34403 4.2403 3.2413 Writing 4.040.3
2310.01386#101
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
102
4140.2 28414 Music 4340.3 2.740.5 3.940.7 34403 4.2403 3.2413 Writing 4.040.3 2.2+0.5 3.640.7 3.1405 40403 3.2413 Media 4.0+0.1 2.8+0.6 3.940.5 3.2405 3.940.2 3.0£1.2 Culinary Art 3.940.2 2.740.6 3.6406 3.5404 40403 3841.1 Teaching/Education 4.0+0.1 2.8+0.4 3.6404 3.8403 44404 3.71.1 Social Service 4440.4 2.140.5 3.7406 3.8404 4.7404 3.9+1.0 Health Care Service 4.5+0.4 2.1£0.7 3.8406 3.7404 4640.2 2.9413 Religious Activities 4.040.4 1.6+0.4 3.1408 3.1402 42404 26414 Personal Service 4.0+0.1 2.740.4 3.640.3 3.2402 4040.1 3.341.2 Professional Advising 4.040.2 2.740.4 3.7406
2310.01386#102
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
103
2.740.4 3.640.3 3.2402 4040.1 3.341.2 Professional Advising 4.040.2 2.740.4 3.7406 3.5405 43404 3.341.2 Business Iniatives 4.040.2 4.240.3 4140.7 34403 42404 3.2£1.2 Sales 4.0+0.2 3.9+0.5 3.8408 34403 4.2402 3.141.2 Marketing/Advertising 4.040.3 3.60.5 4040.9 3540.3 4040.3 2.941.2 Finance 4.140.3 4.0+0.3 4040.6 3.2403 4040.1 3.1413 Accounting 3.940.2 2.6£0.6 3.540.155 2.9402 3.7403 3.0413 Human Resources 4.0+0.1 2.60.4 3.540.5 3.240.4 3.940.2 3.341.2 Office Work 3.7£0.3 2.340.4 3.040.8 3.0402 3.5403 3341.1 Management/Administration 4140.2 4.00.4 4040.7 2.940.4 44+0.5
2310.01386#103
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
104
3.0402 3.5403 3341.1 Management/Administration 4140.2 4.00.4 4040.7 2.940.4 44+0.5 3.041.3 Public Speaking 4.2+0.3 3.940.3 4,040.5 3.5403 4540.3 2941.4 Politics 4.040.4 3.6£1.0 3.640.8 2.7405 4240.2 2.3413 Law 4.2+0.3 3.1+0.7 3.740.7 3.2403 4.5404 3.1413 6DM Di: Realistic 3.9£0.1 2440.3 34404 3.1401 3.9402 - 6DM D2: Investigate 4.140.3 2.8+40.3 3.640.6 3.0406 4.2403 - 6DM D3: Artistic 4.140.2 2.6£0.4 3.8+40.5 3.440.3 4,040.1 - 6DM D4: Social 4.140.1 2.3+0.2 3.5404 3440.2 4240.2 - 6DM D5: Enterprising 4.140.2 3.640.3 3.940.6 3.3403 43403 - 6DM
2310.01386#104
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
105
3440.2 4240.2 - 6DM D5: Enterprising 4.140.2 3.640.3 3.940.6 3.3403 43403 - 6DM D6: Conventional 3.940.2 3.00.4 3.640.5 3.140.1 3.8+0.1 - 8DM D1: Health Science 4240.2 2.5£0.3 3.6£0.7 3.2405 4.3403 - 8DM D2: Creative Expression 4.140.2 2.640.4 3.8+40.5 3440.3 4.0+0.1 - 8DM D3: Technology 4.140.2 3.140.4 3.74055 3.1404 4.2403 - 8DM D4: People 4.0+0.1 2.2+0.2 3.54055 3440.2 4.2403 - 8DM D5: Organization 3.940.1 2.8+40.3 3.5404 3.1401 3.8+0.1 - 8DM D6: Influence 4.140.2 3.640.3 3.940.6 3.3403 43403 - 8DM D7: Nature 4.040.3 1.9+0.4 3.5404 34403 4140.2
2310.01386#105
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
107
Table 12: ICB (Role Play).
Models  | Default | Psychopath | Liar    | Ordinary | Hero    | Crowd
Overall | 2.6±0.5 | 4.5±0.6    | 3.5±1.0 | 3.5±0.5  | 2.5±0.4 | 3.7±0.8

Table 13: ECR-R (Role Play).
Models     | Attachment Anxiety | Attachment Avoidance
Default    | 4.0±0.9 | 1.9±0.4
Psychopath | 5.0±1.3 | 4.1±1.4
Liar       | 4.4±1.2 | 2.1±0.6
Ordinary   | 3.6±0.4 | 2.4±0.4
Hero       | 3.9±0.5 | 2.0±0.3
Crowd      | 2.9±1.1 | 2.3±1.0

Table 14: GSE (Role Play).
Models  | Default  | Psychopath | Liar     | Ordinary | Hero     | Crowd
Overall | 38.5±1.7 | 40.0±0.0   | 38.4±1.4 | 29.6±0.7 | 39.8±0.4 | 29.6±5.3

Table 15: LOT-R (Role Play). Liar 19.8±0.9
2310.01386#107
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
108
Table 15: LOT-R (Role Play).
Models  | Default  | Psychopath | Liar     | Ordinary | Hero     | Crowd
Overall | 18.0±0.9 | 11.8±6.1   | 19.8±0.9 | 17.6±1.7 | 19.6±1.0 | 14.7±4.0

Table 16: LMS (Role Play).
Models     | Rich    | Motivator | Important
Default    | 3.8±0.4 | 3.7±0.3   | 4.1±0.1
Psychopath | 4.4±0.3 | 4.1±0.4   | 4.3±0.4
Liar       | 4.4±0.5 | 3.8±0.6   | 4.6±0.4
Ordinary   | 3.6±0.4 | 3.2±0.5   | 4.0±0.2
Hero       | 3.8±0.3 | 3.4±0.6   | 4.1±0.2
Crowd      | 3.8±0.8 | 3.3±0.9   | 4.0±0.7

Table 17: EIS (Role Play).
Models  | Default   | Psychopath | Liar       | Ordinary  | Hero      | Male       | Female
Overall | 132.9±2.2 | 84.8±28.5  | 126.9±13.0 | 121.5±5.7 | 145.1±8.3 | 124.8±16.5 | 130.9±15.1
2310.01386#108
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
109
Table 18: WLEIS (Role Play).
Models | Default | Psychopath | Liar    | Ordinary | Hero    | Crowd
SEA    | 6.0±0.1 | 3.6±1.3    | 5.2±0.4 | 4.9±0.9  | 6.0±0.1 | 4.0±1.1
OEA    | 5.8±0.3 | 2.4±1.0    | 4.9±1.1 | 4.2±0.4  | 5.8±0.3 | 3.8±1.1
UOE    | 6.0±0.0 | 4.4±2.5    | 6.5±0.3 | 5.5±0.6  | 6.2±0.4 | 4.1±0.9
ROE    | 6.0±0.0 | 3.9±1.7    | 5.7±1.0 | 4.5±0.6  | 6.0±0.2 | 4.2±1.0

Table 19: Empathy (Role Play).
Models  | Default | Psychopath | Liar    | Ordinary | Hero
Overall | 6.2±0.3 | 2.4±0.4    | 5.8±0.2 | 5.7±0.1  | 6.0±0.2

B SENSITIVITY

Table 20: Different versions of prompts.
2310.01386#109
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
110
V1 (Ours): You can only reply from 1 to 5 in the following statements. Here are a number of characteristics that may or may not apply to you. Please indicate the extent to which you agree or disagree with that statement. LEVEL DETAILS Here are the statements, score them one by one: STATEMENTS
V2: Now I will briefly describe some people. Please read each description and tell me how much each person is like you. Write your response using the following scale: LEVEL DETAILS Please answer the statement, even if you are not completely sure of your response. STATEMENTS
V3: Given the following statements of you: STATEMENTS Please choose from the following options to identify how accurately this statement describes you. LEVEL DETAILS
V4: Here are a number of characteristics that may or may not apply to you. Please rate your level of agreement on a scale from 1 to 5. LEVEL DETAILS Here are the statements, score them one by one: STATEMENTS
V5: Here are a number of characteristics that may or may not apply to you. Please rate how much you agree on a scale from 1 to 5. LEVEL DETAILS Here are the statements, score them one by one: STATEMENTS
V1 (Ours) + CoT: Let’s think step by step on the questions that you see. Please first output your explanation, then
2310.01386#110
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
111
Here are the statements, score them one by one: STATEMENTS
V1 (Ours) + CoT: Let’s think step by step on the questions that you see. Please first output your explanation, then your final choice. You can only reply from 1 to 5 in the following statements. Here are a number of characteristics that may or may not apply to you. Please indicate the extent to which you agree or disagree with that statement. LEVEL DETAILS Here are the statements, explain and score them one by one: STATEMENTS
2310.01386#111
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
112
Template and Chain-of-Thought In order to evaluate the impact of different prompts on our results, we compare the performance of six prompt variants: V1 (Ours) is the prompt in this paper; V2 is from Miotto et al. (2022); V3 is from Jiang et al. (2022); V4 and V5 are from Safdari et al. (2023); and V1 (Ours) + CoT. For CoT (i.e., Chain-of-Thought), we follow Kojima et al. (2022) to add an instruction of “Let’s think step by step” at the beginning. The details of these prompts are listed in Table 20. We evaluate these prompts using the BFI on gpt-3.5-turbo. The results are listed in Table 21. Generally, we observe no significant differences between the other prompts and ours. Even with CoT, we can see only a slight increase in Openness. These additional findings support the robustness of our original results and indicate that the choice of prompt did not significantly influence our evaluation outcomes.
2310.01386#112
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
113
Table 21: BFI results on gpt-3.5-turbo using different versions of prompts.

Template          Openness      Conscientiousness  Extraversion  Agreeableness  Neuroticism
V1 (Ours)         4.15 ± 0.32   4.28 ± 0.33        3.66 ± 0.20   4.37 ± 0.18    2.29 ± 0.38
V2                3.85 ± 0.23   3.89 ± 0.12        3.44 ± 0.14   4.10 ± 0.20    2.19 ± 0.11
V3                4.34 ± 0.26   4.11 ± 0.23        3.86 ± 0.19   4.24 ± 0.10    2.04 ± 0.26
V4                4.15 ± 0.22   4.21 ± 0.20        3.50 ± 0.20   4.22 ± 0.17    2.21 ± 0.18
V5                4.10 ± 0.32   4.19 ± 0.27        3.66 ± 0.19   4.21 ± 0.15    2.24 ± 0.16
V1 (Ours) + CoT   4.62 ± 0.21   4.29 ± 0.26        3.89 ± 0.43   4.41 ± 0.26    2.26 ± 0.48
2310.01386#113
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
114
Assistant Role The reason why we set the role as “You are a helpful assistant” is that it is a widely used prompt recommended in the OpenAI cookbook9. This particular system prompt has been widely adopted in various applications, including its basic examples, Azure-related implementations, and vector database examples. Consequently, we opted to follow this widely accepted setting in our experiments. To examine the potential impact of this “helpful persona” on our evaluation results, we conduct supplementary experiments excluding the “helpful assistant” instruction. The

9 https://github.com/openai/openai-cookbook

Table 22: BFI results on gpt-3.5-turbo with and without the “helpful assistant” system prompt.

BFI                           Openness      Conscientiousness  Extraversion  Agreeableness  Neuroticism
With “helpful assistant”      4.15 ± 0.32   4.28 ± 0.33        3.66 ± 0.20   4.37 ± 0.18    2.29 ± 0.38
Without “helpful assistant”   4.16 ± 0.28   4.06 ± 0.27        3.60 ± 0.22   4.17 ± 0.18    2.21 ± 0.19
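To make the setup above concrete, a minimal sketch of running the same questionnaire with and without the “helpful assistant” system message is shown below, assuming the OpenAI Python client (>= 1.0); the user_prompt placeholder stands in for the full questionnaire prompt and is not the paper's exact wording.

```python
# Hypothetical sketch of querying gpt-3.5-turbo with and without the "helpful assistant"
# system message, assuming the OpenAI Python client (>= 1.0).
from openai import OpenAI

client = OpenAI()
user_prompt = "Rate each of the following statements from 1 to 5 ..."  # placeholder

def ask(with_persona: bool) -> str:
    messages = []
    if with_persona:
        messages.append({"role": "system", "content": "You are a helpful assistant."})
    messages.append({"role": "user", "content": user_prompt})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
        temperature=0,
    )
    return response.choices[0].message.content

answer_with_persona = ask(True)
answer_without_persona = ask(False)
```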
2310.01386#114
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
115
Table 23: BFI results using different temperature settings.

Models          temp   Openness      Conscientiousness  Extraversion  Agreeableness  Neuroticism
llama2-7b       0.01   4.24 ± 0.27   3.89 ± 0.28        3.62 ± 0.20   3.83 ± 0.37    2.70 ± 0.42
llama2-13b      0.01   4.13 ± 0.45   4.41 ± 0.35        3.94 ± 0.38   4.74 ± 0.27    1.95 ± 0.50
gpt-3.5-turbo   0      4.15 ± 0.32   4.28 ± 0.33        3.66 ± 0.20   4.37 ± 0.18    2.29 ± 0.38
gpt-3.5-turbo   0.01   4.17 ± 0.31   4.24 ± 0.28        3.79 ± 0.24   4.21 ± 0.13    2.25 ± 0.23
gpt-3.5-turbo   0.8    4.23 ± 0.26   4.14 ± 0.18        3.69 ± 0.17   4.21 ± 0.21    2.09 ± 0.20
2310.01386#115
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
116
outcomes for gpt-3.5-turbo on BFI are presented in Table 22. Generally, we see no significant deviation from the results obtained with the “helpful assistant” prompt, except for slight decreases in Conscientiousness and Agreeableness. Temperature We set the temperature of LLMs to the minimum value for more deterministic responses. The GPT models accept a temperature of 0, while the LLaMA 2 models run through HuggingFace transformers require the temperature to be larger than 0, so we set it to 0.01. We conduct supplementary experiments with a temperature of 0.01 on gpt-3.5-turbo to make a fair comparison across LLMs. Besides, we also include another group of experiments with a temperature of 0.8, the default temperature of the official OpenAI Chat API, to examine whether a higher temperature has an influence on the performance of LLMs. The results for BFI are listed in Table 23. As seen, we cannot observe significant differences when using different values of temperature. These additional findings support the robustness of our original results on the GPT and LLaMA 2 models, and indicate that the choice of temperature did not significantly influence our evaluation outcomes. # C LIMITATIONS
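The sketch below illustrates the near-deterministic decoding setup described above for a LLaMA-2 model served through HuggingFace transformers, where the temperature must be strictly positive; the checkpoint name and prompt are placeholders, and access to the LLaMA-2 weights plus the accelerate package (for device_map="auto") is assumed.

```python
# Hypothetical sketch of near-deterministic decoding for LLaMA-2 via HuggingFace
# transformers, mirroring the temperature settings described above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # assumed checkpoint identifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Rate the statement from 1 to 5: I see myself as someone who is talkative."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# transformers requires temperature > 0 when sampling, so 0.01 approximates greedy decoding.
output_ids = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.01,
)
new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```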
2310.01386#116
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.01386
117
# C LIMITATIONS

While we aim to provide a comprehensive framework for analyzing the psychological portrayal of LLMs, there are other aspects that could further improve our study. First, the proposed framework focuses mainly on Likert scales, without the support of other psychological analysis methods such as rank order, sentence completion, and construction methods. We mainly use Likert scales because they yield quantifiable responses, facilitating straightforward data analysis, and they reduce the bias and ambiguity associated with cognitive or cultural backgrounds by offering numerical response options, which allows for comparison of data from participants with diverse backgrounds and abilities. We leave the exploration of diverse psychological analysis methods on LLMs as future work. Second, the human results compared in this study are from different demographic groups. Obtaining representative samples of global data is challenging in psychological research due to the heterogeneity and vastness of the global population, widespread geographical dispersion, economic constraints, and other factors. Moreover, simply adding up data from different articles is not feasible. To alleviate this influence, we select results covering as wide a range of populations as possible to improve representativeness. However, when applying our framework to evaluate LLMs, users should be aware that the comparison to human norms is based on different demographic groups. We leave the collection of comprehensive global data as a future direction to improve our framework.
2310.01386#117
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
http://arxiv.org/pdf/2310.01386
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
Accepted for ICLR 2024 Oral Presentation. 15 pages (main text) and 5 pages (appendix)
null
cs.CL
20231002
20240122
[ { "id": "2303.13648" }, { "id": "2304.07333" }, { "id": "2306.04308" }, { "id": "2307.13779" }, { "id": "2304.03439" }, { "id": "2306.07075" }, { "id": "2307.00184" }, { "id": "2306.09841" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2307.09042" }, { "id": "2308.01264" }, { "id": "2303.08774" }, { "id": "2308.03656" }, { "id": "2212.10529" }, { "id": "2308.03527" }, { "id": "2304.02015" }, { "id": "2306.10512" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2305.12564" }, { "id": "2304.11111" }, { "id": "2304.04339" }, { "id": "2303.12712" }, { "id": "2302.07267" }, { "id": "2306.16636" }, { "id": "2306.01248" } ]
2310.00754
1
# ABSTRACT Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE. # INTRODUCTION
2310.00754#1
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
2
# INTRODUCTION Large Vision-Language Models (LVLMs) have made significant progress in understanding real-world images, showing potential towards achieving general artificial intelligence (Liu et al., 2023d; Zhu et al., 2023; Ye et al., 2023; Li et al., 2023a; Maaz et al., 2023; Gong et al., 2023). Although LVLMs have demonstrated their versatility and linguistic fluency, they often suffer from object hallucination in their generated text outputs (Wang et al., 2023a; Liu et al., 2023a; Gunjal et al., 2023). Object hallucination refers to the phenomenon of generating inaccurate descriptions for a given image, including non-existent objects or omitting essential features. The issue with hallucinatory text generation in LVLMs is that it can mislead and deceive users in downstream applications that depend on these captions or descriptions, ultimately resulting in a negative impact on various fields that employ LVLMs, including robotics (Mai et al., 2023; Liu et al., 2023b), medical imaging (Wang et al., 2023b; Hu et al., 2023), and human-computer interaction (Olson et al., 1994; Brie et al., 2023).
2310.00754#2
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
3
Early works have attempted to address the problem of object hallucinations in small-scale multimodal pre-trained models by performing either fine-grained alignment across different modalities (Biten et al., 2022) or reducing object co-occurrence patterns with data augmentation (Rohrbach et al., 2018; Kim et al., 2023). However, the auto-regressive architecture of LVLMs differs significantly from small-scale multimodal pre-trained models, making their direct utilization impractical. A few recent works (Li et al., 2023c; Liu et al., 2023a;d) have studied how to reduce object hallucinations in LVLMs by enhancing the quality of datasets used for fine-tuning. Yet, acquiring a substantial number of high-quality examples for fine-tuning can be time-consuming and labor-intensive, requiring human expertise and effort. Instead, we aim to propose a lightweight method to post-hoc handle object hallucination by introducing LURE: LVLM hallUcination REvisor. Concretely, LURE is grounded in a rigorous statistical analysis that elucidates the underlying causalities of object hallucinations in LVLMs. This analysis delves into the relationship between the
∗Equal contribution. Work was done during Yiyang Zhou and Chenhang Cui’s remote internship at UNC.
2310.00754#3
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
4
pre-training data and their corresponding textual responses from LVLMs that exhibit hallucinatory contents (Ordonez et al., 2011; Lin et al., 2014; Changpinyo et al., 2021; Liu et al., 2023d). Both our empirical and theoretical findings reveal that object hallucinations can be attributed to three key factors: co-occurrence, uncertainty, and object position. First, if the training data contains spurious co-occurring patterns between objects, language models may generate outputs based on these learned spurious associations, thus resulting in hallucinatory descriptions. Second, hallucinations occur more frequently on objects characterized by high uncertainty during generation. Lastly, positional factors also play a role, as more object hallucinations tend to appear in the latter portions of the generated description due to the accumulation of misinterpretation.
2310.00754#4
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
5
Based on our statistical analysis, LURE develops an object hallucination revisor. This revisor takes potentially hallucinatory descriptions as input and converts them into accurate ones. To create the revisor, we first generate a hallucinatory dataset using GPT-3.5 by making two modifications to the original correct captions: (1) Insert additional object texts into the description that are likely to co-occur with the objects contained in the initial description. This modification allows LURE to learn to disentangle such co-occurrence patterns effectively; (2) Replace uncertain objects or those at the end of descriptions with a placeholder tag, encouraging the revisor to re-evaluate these objects. In the end, we train our hallucination revisor leveraging the acquired hallucinatory dataset. Once trained, the revisor can seamlessly integrate with any LVLM to correct potential hallucinatory descriptions. Our primary contribution is LURE, a lightweight and compatible post-hoc approach for rectifying object hallucination in LVLMs. This approach is grounded in our rigorous statistical analyses of object hallucinatory phenomena in LVLMs. Our experiments thoroughly evaluate LURE on multiple existing open-source LVLMs. Compared to the best prior method, the results demonstrate that LURE can significantly reduce object hallucination under general object hallucination evaluation metrics (e.g., CHAIR (Rohrbach et al., 2018)), GPT evaluation, and human evaluation.
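To illustrate the two caption modifications described above, the sketch below perturbs a correct caption by appending a frequently co-occurring object and masking uncertain or late-appearing objects with a placeholder tag. The co-occurrence counts, placeholder token, thresholds, and helper names are illustrative assumptions, not the paper's actual data-construction pipeline (which uses GPT-3.5).

```python
# Hypothetical sketch of the two caption perturbations used to build a hallucinatory
# training set; the co-occurrence table, placeholder tag, and thresholds are assumptions.
import random

CO_OCCURRENCE = {          # assumed counts mined from pre-training captions
    "grass": {"sky": 920, "dog": 310},
    "table": {"chair": 750, "cup": 410},
}
PLACEHOLDER = "[IDK]"      # assumed placeholder tag

def insert_cooccurring_object(caption, objects):
    """Modification 1: append an object that frequently co-occurs with an existing one."""
    anchor = random.choice(objects)
    partners = CO_OCCURRENCE.get(anchor, {})
    if not partners:
        return caption
    partner = max(partners, key=partners.get)
    return caption.rstrip(".") + f", with a {partner} nearby."

def mask_uncertain_or_late_objects(caption, objects, uncertainty, threshold=2.0, tail_frac=0.7):
    """Modification 2: replace uncertain objects or those near the end with a placeholder."""
    masked = caption
    for idx, obj in enumerate(objects):
        late = idx / max(len(objects), 1) >= tail_frac
        if uncertainty.get(obj, 0.0) > threshold or late:
            masked = masked.replace(obj, PLACEHOLDER)
    return masked

caption = "A dog runs on the grass next to a table."
objects = ["dog", "grass", "table"]
uncertainty = {"dog": 0.4, "grass": 0.3, "table": 2.5}  # assumed -log p values
noisy = insert_cooccurring_object(caption, objects)
print(mask_uncertain_or_late_objects(noisy, objects, uncertainty))
```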
2310.00754#5
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
6
2 WHY DO LARGE VISION-LANGUAGE MODELS EXPERIENCE OBJECT HALLUCINATION? This section scrutinizes the root causes of object hallucinations in vision-language models via comprehensive statistical analyses from three critical viewpoints: co-occurrence, uncertainty, and position, recognized as the primary factors contributing to object hallucination. We further provide a rigorous theoretical explanation that complements our empirical findings on object hallucinations.
2310.00754#6
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
7
Notations. Large Vision-Language Models (LVLMs) typically generate sentences in a free-form and auto-regressive manner, predicting the probability distribution of the next token progressively. In this context, we denote the input as x, the correct answer as y, and the generated sequence with a length of Ns as s = {z1, . . . , zNs }. For a given LVLM, the probability of generating zi as the i-th token can be described as p(zi|s<i, x) (where 1 ≤ i ≤ Ns), and s<i refers to the previously generated tokens {z1, . . . , zi−1}. Given a description s, we additionally define the complete object set, which is arranged in the order of appearance, as Os = {os,1, . . . , os,nh+nr }. Here, nh and nr represent the number of hallucinatory and non-hallucinatory objects, respectively. 2.1 CO-OCCURRENCE AND SPURIOUS CORRELATION AMONG OBJECTS
2310.00754#7
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
8
2.1 CO-OCCURRENCE AND SPURIOUS CORRELATION AMONG OBJECTS In the realm of multi-modal models, “co-occurrence” denotes the frequent joint appearance of specific objects. When the training data includes spurious co-occurring patterns among objects, language models can generate outputs based on these learned associations. However, these associations may not hold true for test examples, resulting in hallucinatory outputs. For example, “grass” and “sky” frequently co-occur in the training data. The model falsely associates them and tends to generate “grass” and “sky” together even when only “grass” is present in the context. In order to assess the influence of co-occurrence on object hallucination, we draw inspiration from Biten et al. (2022) and introduce a Co-occurrence Score denoted as CoScore. For each image description s, the corresponding co-occurrence score CoScores is computed as the summation of co-occurrence degrees across all hallucinatory objects {os,1, . . . , os,nh }, which is defined as:
2310.00754#8
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
9
\mathrm{CoScore}_s = \sum_{i=1}^{n_h} \sum_{j=1,\, o_{s,j} \neq o_{s,i}}^{n_r + n_h} \frac{|S(o_{s,i}) \cap S(o_{s,j})|}{|S(o_{s,i})| + |S(o_{s,j})|} \quad (1)

Figure 1: Comparison between hallucinatory and non-hallucinatory captions under different factors: (a) Co-occurrence; (b) Uncertainty; (c) Object Position.

Here, S(·) denotes the set of all descriptions that mention a specific object, and |S(·)| represents the cardinality of this set. Based on the definition of CoScore, we compare the distribution of co-occurrence scores between hallucinatory and non-hallucinatory captions (please refer to Appendix A.1 for our experimental setting). As shown in Figure 1a, hallucinatory captions tend to exhibit higher co-occurrence scores, which suggests a stronger association between object hallucination and co-occurrence. 2.2 OBJECT UNCERTAINTY
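A minimal sketch of computing Eq. (1) is given below, assuming that each object has already been mapped to the set of training descriptions that mention it; the object names and description ids are toy data for illustration.

```python
# Minimal sketch of the co-occurrence score in Eq. (1); data is illustrative.
def co_score(hallucinated, all_objects, descriptions_of):
    """CoScore for one caption: pairwise co-occurrence degrees summed over hallucinatory objects."""
    score = 0.0
    for o_i in hallucinated:
        s_i = descriptions_of[o_i]
        for o_j in all_objects:
            if o_j == o_i:
                continue
            s_j = descriptions_of[o_j]
            denom = len(s_i) + len(s_j)
            if denom:
                score += len(s_i & s_j) / denom
    return score

# Toy example: ids of the training descriptions that mention each object.
descriptions_of = {
    "grass": {1, 2, 3, 5},
    "sky":   {1, 2, 3, 8},
    "dog":   {4, 5},
}
print(co_score(hallucinated=["sky"],
               all_objects=["grass", "sky", "dog"],
               descriptions_of=descriptions_of))
```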
2310.00754#9
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
10
2.2 OBJECT UNCERTAINTY In language modeling, beam search (Holtzman et al., 2019; Freitag & Al-Onaizan, 2017) is employed to predict words iteratively, introducing inherent uncertainty into the search process (illustrative examples in Appendix D.1). This uncertainty is used as a measure of the model’s confidence in generating the next token, and can be related to hallucination, as objects with higher uncertainty are more likely to be inaccurate. Here, we aim to quantitatively investigate the potential relationship between the uncertainty associated with objects at each prediction step and the hallucinations. Concretely, we represent the probability of autoregressive decoding for each object token as p(os,i|s<k, x), where k denotes the positional index of object os,i. For each object os,i, the corresponding Uncertainty Score is defined as: UnScores,i = − log p(os,i|s<k, x), (2)
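A minimal sketch of Eq. (2) is shown below: each object's uncertainty score is the negative log-probability that the decoder assigned to its token at generation time. The per-token log-probabilities and the token positions of the object words are assumed to be available from the LVLM; the numbers here are illustrative.

```python
# Minimal sketch of the uncertainty score in Eq. (2); inputs are illustrative.
def uncertainty_scores(token_logprobs, object_positions):
    """Map each object to -log p(object token | preceding tokens, image)."""
    return {obj: -token_logprobs[pos] for obj, pos in object_positions.items()}

# Toy example: per-token log-probabilities log p(z_k | s_<k, x) from the LVLM decoder.
token_logprobs = [-0.1, -0.3, -0.05, -2.1, -0.4]
object_positions = {"dog": 2, "frisbee": 3}   # token index of each object word
print(uncertainty_scores(token_logprobs, object_positions))
# A higher score (e.g., "frisbee": 2.1) marks a more uncertain, hallucination-prone object.
```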
2310.00754#10
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
11
UnScores,i = − log p(os,i|s<k, x), (2) where a higher value of the uncertainty score indicates greater uncertainty. In Figure 1b, we perform a statistical analysis examining the connection between hallucination and object uncertainty (refer to Appendix A.1 for experimental details). Similar to the analysis of co-occurrence, hallucinatory objects are predominantly observed in the high-uncertainty range, while non-hallucinatory objects are more frequently generated in the low-uncertainty range. 2.3 OBJECT POSITION IN GENERATED DESCRIPTIONS Interestingly, we also find a significant correlation between the object position in the generated descriptions and hallucination, where dominant hallucinations occur in the latter part of the descriptions. To validate this, we introduce the Positioning Score PoScore for each object os,i as follows: PoScores,i = Index(os,i) / Ns, (3) where Index(os,i) signifies the position index of object os,i within the entire description.
2310.00754#11
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
12
PoScores,i = Index(os,i) / Ns, (3) where Index(os,i) signifies the position index of object os,i within the entire description. Based on the definition of PoScore, we conduct an analysis of the positions of hallucinations in the descriptions, illustrated in Figure 1c (refer to Appendix A.1 for experimental details). These findings indicate that high-density areas of hallucinatory objects predominantly appear towards the end of the sequence. This pattern corroborates our observation that object hallucination frequently occurs in the latter segments of generated text. One plausible explanation for this observed trend is rooted in the autoregressive text generation process. In the initial stages, the model closely adheres to the semantic information of its input image, resulting in coherent beginnings. However, as the generation progresses, the accumulation of past hallucinatory information and emerging uncertainties may steer the model off-course, ultimately leading to a more pronounced emergence of object hallucination. 2.4 THEORETICAL EXPLANATION
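For completeness, a one-line sketch of Eq. (3) follows; the tokenized caption is a toy example, and the position index is taken over whitespace tokens purely for illustration.

```python
# Minimal sketch of Eq. (3): an object's position index divided by the description length.
def positioning_score(object_index, description_length):
    return object_index / description_length

tokens = "A dog runs across the grass under a blue sky".split()
# "sky" is the last token, so its positioning score is close to 1 (late in the description).
print(positioning_score(tokens.index("sky") + 1, len(tokens)))
```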
2310.00754#12
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
13
After examining these empirical correlations, we proceed to offer theoretical insights to explain them (all proofs can be found in Appendix B). Specifically, we focus on predicting the i-th token, denoted as zi, and introduce a predictive function denoted as f. For each object k within a set of objects represented as [K], the function fk(s<i, x) signifies the predicted score associated with the k-th object. Here, K is defined as the total number of objects under consideration, and we use yk = 1 to denote the presence of the k-th object in an image and yk = −1 otherwise. Furthermore, we make the assumption that fk(s<i, x) can be expressed as ⟨ϕk(s<i, x), βk⟩, where ϕk(s<i, x) | yk ∼ N(yk · µ∗_k, Id) and Pr(yk = 1) = Pr(yk = −1) = 1/2. For a training set D, the optimizer for the k-th class parameter βk trained on D is defined as:

\hat{\beta}_k = \frac{1}{|D|} \sum_{(s_{<i},\, x,\, y_{i,k}) \in D} y_{i,k} \cdot \phi_k(s_{<i}, x),
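The sketch below numerically mirrors this Gaussian setup: features ϕk | yk ∼ N(yk · µ∗_k, Id), balanced labels, and the signed-mean estimator for βk, with the prediction score fk = ⟨ϕk, β̂k⟩. The dimension, sample size, and the 1/|D| normalization of the estimator are illustrative assumptions.

```python
# Small numerical sketch of the Gaussian model described above; sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
d, n = 16, 500
mu_star = rng.normal(size=d)                    # true class-k direction mu*_k

y = rng.choice([-1, 1], size=n)                 # presence/absence labels, P = 1/2 each
phi = y[:, None] * mu_star + rng.normal(size=(n, d))   # phi_k | y_k ~ N(y_k * mu*_k, I_d)

beta_hat = (y[:, None] * phi).mean(axis=0)      # beta_hat_k = (1/|D|) sum y * phi

# Predicted score f_k = <phi, beta_hat>; its sign predicts whether object k is present.
f = phi @ beta_hat
print("train accuracy:", np.mean(np.sign(f) == y))
```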
2310.00754#13
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
15
Co-occurrence. Based on this definition, we first consider co-occurrence. Without loss of generality, we assume that K = 2, and the first and second classes are frequently observed together, i.e., we observe (ϕ1(s<i, x), ϕ2(s<i, x)) among a fraction ρ0 ∈ (0, 1) of samples when both y1 and y2 are equal to 1. Here, to simplify the autoregressive process while maintaining a sequential prediction manner, we consider using ˆf1 = ⟨ϕ1(s<i, x), ˆβ1⟩ for the prediction of the first object; in the second prediction, we model the information passed from the first prediction by ⟨ϕ1(s<i, x), ˆβ1⟩, and consider ˆf2 = ⟨ϕ1(s<i, x), ˆβ1⟩ + ⟨ϕ2(s<i, x), ˆβ2⟩. The model outputs the second object if ˆf2(s<i, x) > 0. Under this setting, we consider two sampling schemes: (1) Each class is sampled according to
2310.00754#15
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
16
second object if ˆf2(s<i, x) > 0. Under this setting, we consider two sampling schemes: (1) Each class is sampled according to the original training distribution; (2) Each class is sampled by setting ρ < ρ0. These two sampling schemes result in two subsets of samples D^(1), D^(2) with the same size. Denote the classifiers trained on D^(1) and D^(2) by {ˆf^(1)_k}_{k∈{1,2}} and {ˆf^(2)_k}_{k∈{1,2}}, respectively. Theorem 2.1 shows that reducing the co-occurrence issue can lead to a smaller test misclassification error Err(·).
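To build intuition for the carry-over term in this construction, the stylized toy below shows how a strong first-object score passed into ˆf2 can outweigh weak evidence against the second object and produce a hallucinated prediction. The noise and evidence scales are chosen purely for visibility; this does not reproduce the sampling schemes or the proof of Theorem 2.1.

```python
# Stylized toy of the carry-over term in f_hat_2 = <phi_1, beta_1> + <phi_2, beta_2>:
# strong evidence for object 1 is passed into the second prediction and can outweigh weak
# evidence against object 2, so the model outputs object 2 even though it is absent.
# Noise and evidence scales are chosen for visibility; this does not reproduce Theorem 2.1.
import numpy as np

rng = np.random.default_rng(0)
d = 50
mu1, mu2 = rng.normal(size=d), rng.normal(size=d)
beta1, beta2 = mu1, mu2                          # assume well-estimated class directions

phi1 = mu1 + 0.1 * rng.normal(size=d)            # object 1 clearly present (y1 = +1)
phi2 = -0.2 * mu2 + 0.1 * rng.normal(size=d)     # object 2 absent, with only weak evidence

f1 = phi1 @ beta1                                # first-object score (large and positive)
f2 = f1 + phi2 @ beta2                           # carried-over score dominates
print(f"f1 = {f1:.1f}, f2 = {f2:.1f}, predict object 2 present: {f2 > 0}")
```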
2310.00754#16
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
17
Theorem 2.1 Suppose ∥µ∗_k∥2 ≪ d, d/|D^(k)| → κ for k ∈ {1, 2} and a universal constant κ > 0. We have

$\mathrm{Err}(\hat{f}^{(2)}_2) \leq \mathrm{Err}(\hat{f}^{(1)}_2).$

Uncertainty. We then turn our attention to object uncertainty. Here, we consider the two following sampling schemes: (1) Each class is sampled with equal probability 1/K; (2) Each class is sampled if the uncertainty score, defined as − log(ˆpk), is above a certain threshold γ > 0. Here, ˆpk is calculated as follows:

$\hat{p}_k = \frac{1}{|\mathcal{D}_{tr}|} \sum_{(s_{<i},\, x,\, 1) \in \mathcal{D}_{tr}} \sigma\big(\langle \phi_k(s_{<i}, x), \hat{\beta}_k \rangle\big),$

where Dtr represents the training set. These two schemes result in two subsets of samples D^(1) and D^(2) with the same size. Given x and s<i, we make a prediction about whether the k-th object is present in the image using ˆfk. Theorem 2.2 illustrates that sampling more certain objects can lead to a reduction in test error.
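The uncertainty score in sampling scheme (2) can be made concrete with a small sketch: it averages σ(⟨ϕ_k, β̂_k⟩) over training samples containing object k to form p̂_k, then selects objects whose −log(p̂_k) exceeds γ. The data and threshold below are illustrative assumptions.

```python
# Sketch of the uncertainty score used in sampling scheme (2): u_k = -log(p_hat_k), where
# p_hat_k averages sigmoid(<phi_k, beta_hat_k>) over training samples containing object k.
# Classes whose uncertainty exceeds gamma are selected. Thresholds and data are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def uncertain_classes(scores_per_class, gamma=0.5):
    """scores_per_class: dict mapping object k -> array of <phi_k, beta_hat_k> scores."""
    selected = []
    for k, scores in scores_per_class.items():
        p_hat = sigmoid(scores).mean()        # p_hat_k
        if -np.log(p_hat) >= gamma:           # uncertainty score -log(p_hat_k)
            selected.append(k)
    return selected

rng = np.random.default_rng(0)
scores = {"dog": rng.normal(2.0, 1.0, 100),      # confidently recognized object
          "frisbee": rng.normal(0.1, 1.0, 100)}  # uncertain object
print(uncertain_classes(scores))                 # expected to select 'frisbee' only
```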
2310.00754#17
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
18
Theorem 2.2 Suppose ∥µ∗_k∥2 ≪ p, d/|D^(k)| → κ for κ > 0 and k ∈ [K]. Then, with probability at least 1 − o(1),

$\frac{1}{K}\sum_{k=1}^{K} \mathrm{Err}(\hat{f}^{(2)}_k) \;<\; \frac{1}{K}\sum_{k=1}^{K} \mathrm{Err}(\hat{f}^{(1)}_k).$

Object Position. The effect of object position on object hallucination is closely tied to error or prediction-uncertainty accumulation in autoregressive models. This topic has been extensively studied in time series analysis, and several theoretical models have been established to investigate it (Hannan et al., 1989; Ing, 2007; Ding et al., 2017).

3 LVLM HALLUCINATION REVISOR

After thoroughly investigating the root causes of hallucinations, we formally introduce our remedy, LURE, which mitigates object hallucinations in large vision-language models. Inspired by the denoising autoencoder (Vincent et al., 2008), which is designed to reconstruct clean data from corrupted input, we employ a hallucination revisor in our approach that aims to transform potentially hallucinatory LVLM-generated descriptions into accurate ones. The framework of LURE is depicted in Figure 2. In the subsequent sections, we will delve into the training and deployment processes of the hallucination revisor.
2310.00754#18
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
19
[Figure 2 graphic: schematic of the LURE pipeline. It shows (i) data preparation, where an LVLM-generated description of an example image is combined with co-occurrence, position, and uncertainty cues and ChatGPT to construct paired hallucinatory and standard descriptions; (ii) the LURE training phase, with masking and hallucination correction against the standard description; and (iii) the LURE inference phase, where the trained revisor produces a rectified description.]
2310.00754#19
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
20
Figure 2: An illustration of LURE Framework: The orange-shaded section shows the training paradigm of LURE, where the black-bordered part represents the hallucinatory data generation phase, including introducing co-occurring objects and replacing either uncertain objects or objects in later positions in the descriptions. The purple-bordered part indicates the revisor training process, with the masking process that can be referenced in Alg. 1. The orange-shaded section illustrates an example in the inference phase of LURE. 3.1 TRAINING HALLUCINATION REVISOR In LURE, to train the hallucination revisor, we first curate a training dataset. Each example in this dataset consists of an image accompanied by a hallucinatory description, with the correct description serving as the output target. A significant challenge encountered during dataset curation lies in the generation of naturally-occurring hallucinatory descriptions. To overcome this challenge, LURE generates hallucinatory descriptions by modifying the accurate descriptions using GPT-3.5. These adjustments are guided by factors related to object hallucination, including co-occurrence, object uncertainty, and object position. In the following, we detail these modifications:
2310.00754#20
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
22
Reconsidering Uncertain Objects & Objects in Later Positions in the Descriptions. Hallucination is more prone to occur for objects with greater uncertainty and objects that appear later in the description. In this context, we anticipate that the revisor should place greater emphasis on and reevaluate these objects. To achieve this, we utilize string matching to replace objects with significant uncertainty and those located at the end of the description with the placeholder tag “[IDK]”. Here, to quantify object uncertainty in descriptions, we use the uncertainty values of noun tokens as a proxy. Token uncertainty is expressed as the entropy of each token, denoted as − log p(zi|s<i, x). We classify tokens as uncertain objects if their corresponding uncertainty exceeds a threshold γ and if they are identified as nouns. Similarly to uncertainty, we determine whether an object appears in a later position using the condition Index(zi) ≥ η ∗ Length(s) with a threshold η. This approach enables the model to reassess and either replace “[IDK]” with a more appropriate object based on the image or remove it entirely.
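A minimal sketch of this masking rule is given below, assuming a whitespace tokenizer, a precomputed noun set, and per-token negative log-probabilities; the thresholds γ and η are placeholder values, and none of this reflects the exact pipeline of the released code.

```python
# Illustrative sketch of the "[IDK]" masking rule described above: noun tokens whose
# decoding uncertainty -log p(z_i | s_<i, x) exceeds gamma, or whose index falls beyond
# eta * Length(s), are replaced with the placeholder tag. The tokenizer, noun list, and
# thresholds here are assumptions, not the paper's exact pipeline.
from typing import List, Set

def mask_description(tokens: List[str], token_nll: List[float], nouns: Set[str],
                     gamma: float = 5.0, eta: float = 0.8) -> str:
    length = len(tokens)
    out = []
    for i, (tok, nll) in enumerate(zip(tokens, token_nll)):
        uncertain = tok in nouns and nll >= gamma      # high-uncertainty noun
        late = tok in nouns and i >= eta * length      # noun in the later part of s
        out.append("[IDK]" if (uncertain or late) else tok)
    return " ".join(out)

tokens = "a man rides a wave on a surfboard near a bird".split()
nll = [0.1, 0.4, 0.2, 0.1, 0.6, 0.1, 0.1, 0.9, 0.2, 0.1, 6.2]
print(mask_description(tokens, nll, {"man", "wave", "surfboard", "bird"}))
# -> a man rides a wave on a surfboard near a [IDK]
```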
2310.00754#22
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
23
Using these modification strategies, for every accurate description, we provide GPT-3.5 with a list of potential co-occurrence objects and a list of uncertain objects. We then prompt GPT-3.5 to generate the corresponding hallucinatory description using the prompts listed in Appendix A.3. Finally, we leverage the constructed hallucination dataset to fine-tune an LVLM and use it as the revisor. Some cases of hallucinatory descriptions are in Appendix D.2. The training pipeline is illustrated in Alg. 1.

Algorithm 1 Training LVLM Hallucination Revisor in LURE
Require: training image set X; groundtruth descriptions Y; LVLM M(·); uncertainty threshold γ; hallucination revisor Rθ(·) with parameters θ; position threshold η
1: Use GPT-3.5 to construct hallucinatory description set Hold (see Appendix A.3 for more details)
2: Initialize the revisor's parameter θ and an empty set Hnew ← {}
3: while not converged do
4:    for each image x ∈ X and the corresponding hallucinatory description h ∈ Hold do
5:       Generate description s = M(x) with object set Os
6:       for object os,i ∈ Os do
2310.00754#23
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
24
Generate description s = M(x) with object set Os
for object os,i ∈ Os do
   if os,i in h and − log p(os,i|M, x) ≥ γ then
   if os,i in h and Index(os,i) ≥ η ∗ Length(h) then
Put h into Hnew
Update parameter θ with autoregressive loss L(Rθ(Hnew), Y)

3.2 DEPLOYING HALLUCINATION REVISOR

In the inference stage, the trained revisor is employed to rectify the generated descriptions. Specifically, similar to the process of constructing hallucinated descriptions during the training phase, in the testing phase we likewise integrate the placeholder tag “[IDK]” into the generated descriptions. This integration serves the purpose of forcing the revisor to reevaluate objects exhibiting high uncertainty or appearing later in the generated text. The inference pipeline is detailed in Alg. 2.

Algorithm 2 Inference Pipeline of LURE
Require: test image xt; LVLM M(·); trained hallucination revisor R∗θ(·); uncertainty threshold γ; position threshold η
1: Generate description st = M(xt) with object set Ost
2: for object ost,i ∈ Ost do
   ⋮
7: return R∗
2310.00754#24
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
25
1: Generate description st = M(xt) with object set Ost
2: for object ost,i ∈ Ost do
3:    if − log p(ost,i|M, xt) ≥ γ then
4:       Add placeholder tag “[IDK]” to st, i.e., st ← Mask(st, ost,i)
5:    if Index(ost,i) ≥ η ∗ Length(st) then
6:       Add placeholder tag “[IDK]” to st, i.e., st ← Mask(st, ost,i)
7: return R∗θ(st)

4 EXPERIMENTS

In this section, we evaluate the performance of LURE, aiming to answer the following questions: (1) Can LURE effectively reduce object hallucination in LVLMs compared to other baselines? (2) Can the key factors we've identified related to hallucinations in LVLMs benefit the training process of the revisor? (3) Is LURE sensitive to the revisor's backbone?
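For concreteness, here is a Python-style sketch of the inference pipeline in Alg. 2. The callables lvlm_generate, token_nll, extract_objects, and revisor_rewrite are assumed interfaces standing in for the LVLM M(·), its token log-probabilities, an object extractor, and the trained revisor R∗θ(·); they are not APIs of the released implementation.

```python
# Python-style sketch of the inference pipeline in Alg. 2. The callables `lvlm_generate`,
# `token_nll`, `extract_objects`, and `revisor_rewrite` are assumed interfaces standing in
# for the LVLM M, its token log-probabilities, an object extractor, and the trained revisor
# R*_theta; they are not APIs of the released code.
def lure_inference(image, lvlm_generate, revisor_rewrite, token_nll, extract_objects,
                   gamma=5.0, eta=0.8):
    s_t = lvlm_generate(image)                       # step 1: initial LVLM description
    tokens = s_t.split()
    length = len(tokens)
    for obj, idx in extract_objects(tokens):         # step 2: mentioned objects and indices
        if token_nll(obj, image) >= gamma:           # steps 3-4: uncertain object -> "[IDK]"
            tokens[idx] = "[IDK]"
        if idx >= eta * length:                      # steps 5-6: late-position object -> "[IDK]"
            tokens[idx] = "[IDK]"
    return revisor_rewrite(image, " ".join(tokens))  # step 7: revisor outputs rectified text
```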
2310.00754#25
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
26
Datasets. MSCOCO (Lin et al., 2014) is a comprehensive dataset used for image recognition, segmentation, and captioning. It comprises over 300,000 images spanning more than 80 object categories, each with detailed annotations. Following (Li et al., 2023d; Liu et al., 2023a), we selected 5,000 unique images from the COCO 2014 training dataset to evaluate performance. To train the hallucination revisor, we randomly selected 5,000 image-text pairs from LLaVA-150k (Liu et al., 2023c), ensuring that these images were different from the ones used in testing.

Evaluation Metric. Caption Hallucination Assessment with Image Relevance (CHAIR) (Rohrbach et al., 2018) is a widely used metric for evaluating object hallucination in image captioning tasks. CHAIR assesses the quality of image captions by comparing them to the ground-truth objects present in the corresponding images. It calculates the proportion of objects mentioned in the caption that are not actually present in the image. There are two common variants of CHAIR: CHAIR_I and CHAIR_S. Both variants evaluate the degree of object hallucination, but at different levels: the object instance level and the sentence level, respectively.
2310.00754#26
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
27
$\mathrm{CHAIR}_I = \frac{|\{\text{hallucinated objects}\}|}{|\{\text{all mentioned objects}\}|}, \qquad \mathrm{CHAIR}_S = \frac{|\{\text{captions with hallucinated objects}\}|}{|\{\text{all captions}\}|}. \quad (4)$

Baselines. The comparison methods include: Original, which directly uses the descriptions generated by the LVLMs; Teacher (Saha et al., 2023), which leverages BLIP-2 (Li et al., 2023b) to generate short image descriptions and employs them as contextual guidance for generating long-form descriptions; Chain-of-Thought (CoT) (Wei et al., 2022), which involves the model initially listing objects and subsequently describing the image; Greedy-Decoding, a method that abstains from using a sampling strategy and aims to make the model output the most certain tokens; GPT-Ensemble, which initially employs GPT-3.5 to aggregate the commonly generated descriptions from multiple LVLMs, excluding the one under evaluation; subsequently, GPT-3.5 utilizes these summarized common descriptions as guidance to rewrite the originally generated description from the evaluated model; GPT-Teacher, where GPT-3.5 is tasked with rewriting the original long-form description based on the BLIP-2-generated short descriptions. Detailed descriptions of the baselines are in Appendix A.4.
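A small helper that mirrors Eq. (4) is sketched below: given, for each caption, the set of mentioned objects and the image's ground-truth objects, it returns CHAIR_I and CHAIR_S. It is an illustrative stand-in, not the official CHAIR evaluator.

```python
# Helper sketch of the CHAIR metrics in Eq. 4, given the set of objects mentioned in each
# caption and the ground-truth objects of the corresponding image. Illustrative only, not
# the official evaluator.
from typing import List, Set, Tuple

def chair(mentioned: List[Set[str]], ground_truth: List[Set[str]]) -> Tuple[float, float]:
    hallucinated = total_mentioned = captions_with_hallucination = 0
    for objs, gt in zip(mentioned, ground_truth):
        bad = objs - gt                                   # mentioned but absent from the image
        hallucinated += len(bad)
        total_mentioned += len(objs)
        captions_with_hallucination += int(bool(bad))
    chair_i = hallucinated / max(total_mentioned, 1)                # instance level
    chair_s = captions_with_hallucination / max(len(mentioned), 1)  # sentence level
    return chair_i, chair_s

print(chair([{"dog", "frisbee"}, {"cat"}], [{"dog"}, {"cat"}]))  # (0.333..., 0.5)
```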
2310.00754#27
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
28
Evaluated LVLMs. We performed experiments utilizing six of the most recent LVLMs, with their corresponding language models specified in parentheses: MiniGPT-4 (Vicuna 13B) (Zhu et al., 2023), LLaVa (LLaMA 13B) (Liu et al., 2023d), MMGPT (LLaMA 7B) (Gong et al., 2023), LLaMA-Adapter (LLaMA 7B) (Zhang et al., 2023b), mPLUG-Owl (LLaMA 7B) (Ye et al., 2023), and InstructBLIP (Vicuna 7B) (Dai et al., 2023).

Hyperparameter Settings. Unless otherwise specified, all experiments in the paper use MiniGPT-4 as the backbone of the revisor, along with the training parameter settings provided in Appendix A.2. All hyperparameters are selected via cross-validation.

4.1 EVALUATION STRATEGIES AND RESULTS
2310.00754#28
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
29
4.1 EVALUATION STRATEGIES AND RESULTS

Automated Object Hallucination Evaluation. We follow the guidelines presented in (Rohrbach et al., 2018) to perform an automated calculation of CHAIR metrics for the MSCOCO dataset, where 80 objects are involved in this automated evaluation process. In addition, we extend our evaluation to include other widely used metrics such as BLEU and CLIP score, which are commonly adopted in assessing the quality of image captioning. Detailed descriptions and results for these additional metrics can be found in Appendix C.1.

Human and GPT Evaluations. Although automated evaluation strategies are efficient, they cannot encompass all objects present in the evaluated images. To overcome this limitation, we conducted a comprehensive human evaluation involving several native speakers. Please refer to Appendix A.5 for the evaluation interface. In this human evaluation, participants are assigned the task of annotating hallucinatory objects, and we rank different methods based on human feedback. In addition to human evaluation, inspired by (Zheng et al., 2023), we also prompt GPT-3.5 to compare different descriptions. In this GPT evaluation, we provide the annotated information, including detection boxes and captions, and anticipate that GPT-3.5 can provide a ranking for the descriptions from various methods. For GPT evaluation, we use the prompts referenced in Table 9 in the Appendix.
2310.00754#29
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
30
Results. In Table 1 and Table 2, we report the results of the automated evaluations and of the human and GPT evaluations under different LVLMs, respectively. Here, taking cost into account, we only compare LURE with the four strongest methods in human and GPT evaluations. Although Teacher, CoT, and GPT-Teacher can improve the performance compared to the original descriptions in most cases, LURE significantly enhances performance over these strong baselines and effectively reduces object hallucination in generated descriptions. One potential reason for this is that all of these baselines experience error propagation to some extent. For instance, CoT's linear guidance can lead to errors if the object listing step is incorrect. In contrast, LURE directly corrects hallucinatory descriptions using guidance from potential factors that can trigger hallucinations.

4.2 ANALYSIS OF LURE

Are the Performance Gains of LURE from Using Constructed Hallucination Datasets? To verify that the performance gains of our method are not from using additional data to train the revisor, we fine-tuned the original LVLMs with the additional dataset. The results on MiniGPT-4 are shown in Table 3, where “Original” represents the descriptions

Table 3: Comparison between LURE and fine-tuning with the revisor's training data.
2310.00754#30
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
31
Table 3: Comparison between LURE and fine-tuning with the revisor's training data.

Model             CHAIR_S ↓   CHAIR_I ↓
Original          26.8        7.3
FT (add'l data)   31.0        7.2
LURE (Ours)       19.7        4.9

Table 1: Automated hallucination evaluation is performed under six LVLMs using CHAIR_S (CS) and CHAIR_I (CI), where smaller values indicate less object hallucination. For additional metrics, please refer to Appendix C.1.
2310.00754#31
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
32
Method             MiniGPT-4     LLaVa         MMGPT         LLaMA-Adapter  mPLUG-Owl     InstructBLIP
                   CS↓    CI↓    CS↓    CI↓    CS↓    CI↓    CS↓    CI↓     CS↓    CI↓    CS↓    CI↓
Original           26.8   7.3    54.0   11.3   56.6   11.0   58.8   13.7    71.2   16.5   40.0   8.2
Teacher            24.0   5.7    49.9   9.3    53.4   7.5    40.8   9.4     62.4   13.0   36.4   7.5
CoT                31.6   9.4    47.6   9.0    48.8   17.5   43.3   9.4     56.9   13.4   35.7   7.8
Greedy-Decoding    25.1   6.6    50.9   10.0   50.6   8.4    55.9   13.7    55.1   12.8   35.5   7.8
GPT-Ensemble       41.0   10.6   43.0   10.7   51.0   11.1   47.1   13.0    52.0   15.2   51.0   13.0
GPT-Teacher        25.3   7.6    38.0   7.8    26.7   9.3    49.0   12.4    22.0   9.0    32.0   7.8
LURE
2310.00754#32
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
35
Method          MiniGPT-4     LLaVa         MMGPT         LLaMA-Adapter  mPLUG-Owl     InstructBLIP
                G↓     H↓     G↓     H↓     G↓     H↓     G↓     H↓      G↓     H↓     G↓     H↓
Original        3.97   3.10   4.55   4.79   3.20   4.38   2.96   4.45    4.25   3.98   4.29   4.77
Teacher         3.36   3.83   4.62   3.30   3.00   4.07   2.16   3.13    3.25   3.66   3.34   3.53
CoT             2.44   2.83   3.66   3.07   3.05   2.63   2.90   2.10    3.75   3.13   2.78   2.21
GPT-Teacher     3.56   3.28   3.25   3.09   2.52   2.45   2.68   3.24    2.50   2.44   3.12   2.56
LURE (ours)     1.67   1.96   1.65   1.83   1.61   1.58   1.90   2.08    1.25   1.79   1.47   1.93
2310.00754#35
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
36
of MiniGPT-4. According to Table 3, LURE outperforms the fine-tuned LVLMs, which indicates that our method indeed reduces object hallucination by post-hoc rectifying potential hallucinatory descriptions rather than using additional data.

Ablation Study – Do the Hallucination Factors Contribute Performance Gains? To demonstrate the impact of considering co-occurrence, uncertainty, and object position in reducing hallucination, we conducted ablation experiments and report the results in Table 4, where "Original" represents the descriptions of MiniGPT-4. In the ablation experiments, we trained and deployed the revisor without each of the three factors, one at a time. The results show that all three factors contribute to training a strong hallucination revisor to reduce object hallucination. Furthermore, we have also conducted an analysis of the changes in these three factors before and after applying the revisor, as presented in Appendix C.2. This analysis demonstrates that LURE can effectively reduce instances of hallucination caused by these factors.

| Model | CHAIRS↓ | CHAIRI↓ |
|---|---|---|
| Original | 26.8 | 7.3 |
| w/o Co-occurrence | 22.6 | 4.9 |
| w/o Uncertainty | 21.2 | 5.4 |
| w/o Position | 22.3 | 5.8 |
| LURE (Ours) | 19.7 | 4.9 |
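To make the roles of the three ablated factors concrete, here is a rough Python sketch of how co-occurrence, decoding uncertainty, and object position could be combined into a per-object risk score when deciding which objects the revisor should treat as potentially hallucinated. The specific statistics and the equal default weights are illustrative assumptions, not the formulas used by LURE.

```python
from typing import Dict, List, Tuple

def hallucination_risk(obj: str,
                       objects_in_caption: List[str],
                       token_logprob: float,
                       cooccur_counts: Dict[Tuple[str, str], int],
                       obj_counts: Dict[str, int],
                       position_index: int,
                       caption_length: int,
                       weights: Tuple[float, float, float] = (1.0, 1.0, 1.0)) -> float:
    """Score one mentioned object by the three factors analyzed in the paper:
    co-occurrence with other mentioned objects, decoding uncertainty,
    and how late in the caption the object appears."""
    # 1) Co-occurrence: objects that frequently appear alongside the other
    #    mentioned objects in training captions are more likely to be "parroted" in.
    others = [o for o in objects_in_caption if o != obj]
    co = sum(cooccur_counts.get(tuple(sorted((obj, o))), 0) for o in others)
    co_score = co / max(obj_counts.get(obj, 1), 1)

    # 2) Uncertainty: low log-probability at decoding time -> higher risk.
    unc_score = -token_logprob

    # 3) Position: hallucinations concentrate toward the end of the description.
    pos_score = position_index / max(caption_length - 1, 1)

    w_co, w_unc, w_pos = weights
    return w_co * co_score + w_unc * unc_score + w_pos * pos_score
```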
2310.00754#36
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
37
Robustness Analysis of the Hallucination Revisor. We further analyze the robustness of the revisor with respect to different backbones. Specifically, we trained the revisor on the same dataset using different backbones: MiniGPT-4, LLaMA-adapter, and mPLUG-Owl. The results are reported in Table 5, where "Original" represents the descriptions of MiniGPT-4. We can observe that despite the varying performance of each backbone, LURE consistently improves the performance compared to the original description, which further indicates the effectiveness of LURE. Additionally, we analyze the results of LURE with respect to various uncertainty thresholds in Appendix C.3. The findings demonstrate that LURE exhibits strong performance across a wide range of uncertainty thresholds.

| Backbone | CHAIRS↓ | CHAIRI↓ |
|---|---|---|
| Original | 26.8 | 7.3 |
| MiniGPT-4 | 19.7 | 4.9 |
| LLaMA-adapter | 21.3 | 5.2 |
| mPLUG-Owl | 22.1 | 5.4 |

Case Analysis. We select several strong baselines and present a case with rectified descriptions in Figure 3. Compared with other approaches, LURE excels in providing a more accurate image
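The uncertainty threshold discussed in this chunk determines which object tokens in the original description get flagged before the revisor rewrites it. Below is a minimal sketch of that masking step; the placeholder token, the threshold value, and the helper's interface are illustrative assumptions rather than the repository's actual API.

```python
from typing import Dict, List

def mask_uncertain_objects(caption_tokens: List[str],
                           token_logprobs: Dict[int, float],
                           object_positions: List[int],
                           threshold: float = -2.5,
                           placeholder: str = "[IDK]") -> str:
    """Replace object tokens whose decoding log-probability falls below the
    threshold with a placeholder, producing the input that a revisor model
    would rewrite into a less hallucinatory description."""
    masked = list(caption_tokens)
    for idx in object_positions:
        if token_logprobs.get(idx, 0.0) < threshold:
            masked[idx] = placeholder
    return " ".join(masked)

# Example: the word "fork" was decoded with low confidence, so it is masked
# before the description is handed to the hallucination revisor.
tokens = "a sandwich on a plate next to a fork".split()
logprobs = {i: -0.5 for i in range(len(tokens))}
logprobs[8] = -4.0  # "fork" is uncertain
print(mask_uncertain_objects(tokens, logprobs, object_positions=[1, 4, 8]))
```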
2310.00754#37
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
38
Figure 3: A case study comparing the levels of hallucination among various baselines. description. In this case, LURE accurately depicts the primary elements (e.g., sandwich, chair, plate) while avoiding hallucinatory objects like the fork and handbag. Although other baselines partially reduce hallucination, they still exhibit object hallucinations in their descriptions. Additionally, we also mitigate logical errors to some extent, including object orientation and actions. Further case analyses can be found in Appendices D.3 and D.4. # 5 RELATED WORK
2310.00754#38
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
39
# 5 RELATED WORK Vision-Language Models. Vision-language pre-trained models, as exemplified by (Li et al., 2021; Zeng et al., 2021), demonstrate substantial capabilities in modeling interactions between visual and textual information, especially when fine-tuned for specific tasks. Recently, autoregressive large-scale language models (LLMs) (Brown et al., 2020; Chowdhery et al., 2022; Touvron et al., 2023; Zhang et al., 2022b; Chiang et al., 2023; Taori et al., 2023) have ushered in a new era of vision-language models. These models, known as LVLMs, integrate LLMs with visual modality and showcase impressive visual understanding through end-to-end training techniques that directly decode visual and text tokens in a unified manner (Liu et al., 2023d; Zhu et al., 2023; Ye et al., 2023; Li et al., 2023a). However, similar to VLMs, LVLMs also face the challenge of object hallucination (Wang et al., 2023a; Rohrbach et al., 2018). This form of object hallucination is more pronounced and widespread in the long-form descriptions produced by LVLMs compared to the shorter descriptions generated by VLMs (Zhang et al., 2023a).
2310.00754#39
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
40
Hallucination in VLMs and LVLMs. In VLMs, hallucination typically refers to scenarios where the generated descriptions contain information that does not exist in the visual modality (Rohrbach et al., 2018; Biten et al., 2022; Wang et al., 2023a). Addressing object hallucination in VLMs is primarily achieved through techniques such as fine-grained contrastive learning (Zeng et al., 2021), ROI feature fusion (Biten et al., 2022), and eliminating co-occurrence patterns through data augmentation (Kim et al., 2023). However, the training paradigms between traditional VLMs and recent LVLMs differ, and the new autoregressive training paradigm in LVLMs makes it challenging to directly apply hallucination mitigation methods used in VLMs to LVLMs. Recent research has begun to address the issue of object hallucination in LVLMs, including hallucination evaluation and detection (Wang et al., 2023a; Liu et al., 2023a; Li et al., 2023d), as well as the construction of higher-quality datasets for fine-tuning (Gunjal et al., 2023; Li et al., 2023c; Liu et
2310.00754#40
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
41
as well as the construction of higher-quality datasets for fine-tuning (Gunjal et al., 2023; Li et al., 2023c; Liu et al., 2023a;d). Nevertheless, acquiring a substantial number of high-quality examples can be time-consuming and labor-intensive. Instead, grounded in statistical analysis of hallucination, we propose a conceptually different approach, LURE, to post-hoc rectify object hallucination. We have already demonstrated its effectiveness in reducing hallucination and its compatibility with various LVLMs.
2310.00754#41
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
42
# 6 CONCLUSION In this paper, our objective is to address the challenge of object hallucination in LVLMs. We introduce a lightweight post-hoc method, named LVLM Hallucination Revisor (LURE), designed to rectify object hallucination in the generated descriptions produced by LVLMs. LURE is grounded in three key factors known to contribute to object hallucination: co-occurrence, uncertainty, and object position. These factors have been demonstrated to induce hallucination both empirically and theoretically. Our experiments, conducted on six open-source LVLMs, demonstrate the effectiveness of LURE in mitigating object hallucination in LVLM-generated descriptions. # ACKNOWLEDGEMENT This work was partially supported by Juniper Networks. # REFERENCES Ali Furkan Biten, Lluís Gómez, and Dimosthenis Karatzas. Let there be a clock on the beach: Reducing object hallucination in image captioning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1381–1390, 2022. Paul Brie, Nicolas Burny, Arthur Sluÿters, and Jean Vanderdonckt. Evaluating a large language model on searching for GUI layouts. Proceedings of the ACM on Human-Computer Interaction, 7 (EICS):1–37, 2023.
2310.00754#42
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
43
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. Yair Carmon, Aditi Raghunathan, Ludwig Schmidt, John C Duchi, and Percy S Liang. Unlabeled data improves adversarial robustness. Advances in neural information processing systems, 32, 2019. Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3558–3568, 2021.
2310.00754#43
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
44
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL https://lmsys.org/blog/2023-03-30-vicuna/. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructblip: Towards general-purpose vision-language models with instruction tuning, 2023.
2310.00754#44
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
45
Jie Ding, Vahid Tarokh, and Yuhong Yang. Bridging AIC and BIC: a new criterion for autoregression. IEEE Transactions on Information Theory, 64(6):4024–4043, 2017. Markus Freitag and Yaser Al-Onaizan. Beam search strategies for neural machine translation. arXiv preprint arXiv:1702.01806, 2017. Tao Gong, Chengqi Lyu, Shilong Zhang, Yudong Wang, Miao Zheng, Qian Zhao, Kuikun Liu, Wenwei Zhang, Ping Luo, and Kai Chen. Multimodal-gpt: A vision and language model for dialogue with humans, 2023. Anisha Gunjal, Jihan Yin, and Erhan Bas. Detecting and preventing hallucinations in large vision language models. arXiv preprint arXiv:2308.06394, 2023. Edward James Hannan, AJ McDougall, and Don Stephen Poskitt. Recursive estimation of autoregressions. Journal of the Royal Statistical Society: Series B (Methodological), 51(2):217–233, 1989. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751, 2019.
2310.00754#45
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
46
Mingzhe Hu, Shaoyan Pan, Yuheng Li, and Xiaofeng Yang. Advancing medical imaging with language models: A journey from n-grams to chatgpt. arXiv preprint arXiv:2304.04920, 2023. Ching-Kang Ing. Accumulated prediction errors, information criteria and optimal forecasting for autoregressive time series. The Annals of Statistics, 35(3):1238–1277, 2007. Jae Myung Kim, A Koepke, Cordelia Schmid, and Zeynep Akata. Exposing and mitigating spurious correlations for cross-modal retrieval. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2584–2594, 2023. Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu. Otter: A multi-modal model with in-context instruction tuning. arXiv preprint arXiv:2305.03726, 2023a.
2310.00754#46
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
47
Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu Hong Hoi. Align before fuse: Vision and language representation learning with momentum distillation. Advances in neural information processing systems, 34:9694–9705, 2021. Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023b. Lei Li, Yuwei Yin, Shicheng Li, Liang Chen, Peiyi Wang, Shuhuai Ren, Mukai Li, Yazheng Yang, Jingjing Xu, Xu Sun, et al. M3it: A large-scale dataset towards multi-modal multilingual instruction tuning. arXiv preprint arXiv:2306.04387, 2023c. Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. Evaluating object hallucination in large vision-language models. arXiv preprint arXiv:2305.10355, 2023d.
2310.00754#47
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
48
Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pp. 74–81, 2004. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pp. 740–755. Springer, 2014. Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, and Lijuan Wang. Aligning large multi-modal model with robust instruction tuning. arXiv preprint arXiv:2306.14565, 2023a. Haokun Liu, Yaonan Zhu, Kenji Kato, Izumi Kondo, Tadayoshi Aoyama, and Yasuhisa Hasegawa. Llm-based human-robot collaboration framework for manipulation tasks. arXiv preprint arXiv:2308.14972, 2023b. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning, 2023c.
2310.00754#48
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
49
Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning, 2023c. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv preprint arXiv:2304.08485, 2023d. Muhammad Maaz, Hanoona Rasheed, Salman Khan, and Fahad Shahbaz Khan. Video-chatgpt: Towards detailed video understanding via large vision and language models. arXiv preprint arXiv:2306.05424, 2023. Jinjie Mai, Jun Chen, Bing Li, Guocheng Qian, Mohamed Elhoseiny, and Bernard Ghanem. Llm as a robotic brain: Unifying egocentric memory and control. arXiv preprint arXiv:2304.09349, 2023. Gary M Olson, James D Herbsleb, and Henry H Reuter. Characterizing the sequential structure of interactive behaviors through statistical and grammatical techniques. Human–Computer Interaction, 9(3-4):427–472, 1994. Vicente Ordonez, Girish Kulkarni, and Tamara Berg. Im2text: Describing images using 1 million captioned photographs. Advances in neural information processing systems, 24, 2011.
2310.00754#49
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
50
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pp. 311–318, 2002. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision, 2021. Anna Rohrbach, Lisa Anne Hendricks, Kaylee Burns, Trevor Darrell, and Kate Saenko. Object hallucination in image captioning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 4035–4045, 2018. Swarnadeep Saha, Peter Hase, and Mohit Bansal. Can language models teach weaker agents? teacher explanations improve students via theory of mind. arXiv preprint arXiv:2306.09299, 2023.
2310.00754#50
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
51
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca, 2023. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning, pp. 1096–1103, 2008.
2310.00754#51
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
52
Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, et al. Evaluation and analysis of hallucination in large vision-language models. arXiv preprint arXiv:2308.15126, 2023a. Interactive computer-aided diagnosis on medical image using large language models. arXiv preprint arXiv:2302.07257, 2023b. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022. Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. mplug-owl: Modularization empowers large language models with multimodality. arXiv preprint arXiv:2304.14178, 2023.
2310.00754#52
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
53
Yan Zeng, Xinsong Zhang, and Hang Li. Multi-grained vision language pre-training: Aligning texts with visual concepts. arXiv preprint arXiv:2111.08276, 2021. Linjun Zhang, Zhun Deng, Kenji Kawaguchi, and James Zou. When and how mixup improves calibration. In International Conference on Machine Learning, pp. 26135–26160. PMLR, 2022a. Muru Zhang, Ofir Press, William Merrill, Alisa Liu, and Noah A Smith. How language model hallucinations can snowball. arXiv preprint arXiv:2305.13534, 2023a. Renrui Zhang, Jiaming Han, Aojun Zhou, Xiangfei Hu, Shilin Yan, Pan Lu, Hongsheng Li, Peng Gao, and Yu Qiao. Llama-adapter: Efficient fine-tuning of language models with zero-init attention. arXiv preprint arXiv:2303.16199, 2023b.
2310.00754#53
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
54
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022b. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. Bertscore: Evaluating text generation with bert. arXiv preprint arXiv:1904.09675, 2019. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. arXiv preprint arXiv:2306.05685, 2023. Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023. A EXPERIMENTAL DETAILS
2310.00754#54
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
55
A EXPERIMENTAL DETAILS A.1 EXPERIMENTAL SETTING FOR THE HALLUCINATION ANALYSIS Experimental Setting for Co-occurrence Analysis. The objects in this experiment are based on the 80 object labels annotated in (Rohrbach et al., 2018) from the COCO dataset, and the image descriptions are generated by MiniGPT-4 from 5000 images in the COCO 2014 train dataset. Experimental Setting for the Uncertainty Analysis. Because the uncertainty and position analyses are relatively independent of co-occurrence, and to avoid running the statistical analysis on the training-set distribution, the statistics for the uncertainty analysis are derived from MiniGPT-4's descriptions of 200 images from the COCO 2014 test dataset. The uncertainty of each generated token is computed as −log p(z_i | s_{<i}, x). Experimental Setting for the Analysis of Position of Hallucinated Objects. Similar to the uncertainty analysis, we used the manually annotated MiniGPT-4 descriptions of 200 images from the COCO 2014 test dataset, since precise positions of hallucinated objects are required. A.2 TRAINING SETTINGS FOR REVISOR
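The uncertainty measure above can be computed directly from the decoder's per-step distributions. The following is a minimal illustrative sketch, not the authors' implementation; the threshold value for γ is an assumed placeholder, not the value used in the paper.

```python
# Illustrative sketch (not the authors' implementation) of the uncertainty
# measure -log p(z_i | s_<i, x) computed from the decoder's per-step logits.
import torch

def token_uncertainties(step_logits, generated_ids):
    """step_logits: list of 1-D [vocab_size] tensors, one per decoding step.
    generated_ids: the token id chosen at each corresponding step."""
    uncertainties = []
    for logits, tok in zip(step_logits, generated_ids):
        log_probs = torch.log_softmax(logits, dim=-1)
        uncertainties.append(-log_probs[tok].item())  # -log p(z_i | s_<i, x)
    return uncertainties

def uncertain_positions(uncertainties, gamma=1.0):
    # gamma is an assumed placeholder for the uncertainty threshold.
    return [i for i, u in enumerate(uncertainties) if u > gamma]
```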
2310.00754#55
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
56
A.2 TRAINING SETTINGS FOR REVISOR The overall revisor training setting is similar to that of MiniGPT-4. Training requires only one A100 80G GPU and takes approximately 10 minutes. We present the hyperparameter settings of LURE during the training phase in Table 6. Table 6: Training hyperparameters (the listed hyperparameters are training steps, warmup steps, max length, batch size of multi-modal instruction data, optimizer, learning rate, learning rate decay, AdamW ϵ, AdamW β, and weight decay; their values are given in the table). A.3 PROMPTS FOR TRAINING DATASET
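To make the hyperparameter list for the revisor concrete, here is a minimal sketch of how such a training configuration might be wired to an AdamW optimizer. All numeric values are illustrative placeholders and are not the values reported in Table 6 of the paper.

```python
# Minimal configuration sketch for revisor training. Every numeric value below
# is a placeholder, NOT a value from the paper's Table 6.
from dataclasses import dataclass
import torch

@dataclass
class RevisorTrainConfig:
    training_steps: int = 1000        # placeholder
    warmup_steps: int = 100           # placeholder
    max_length: int = 512             # placeholder
    batch_size: int = 8               # placeholder: multi-modal instruction data batch size
    learning_rate: float = 1e-5       # placeholder
    weight_decay: float = 0.01        # placeholder
    adam_eps: float = 1e-8            # placeholder for AdamW epsilon
    adam_betas: tuple = (0.9, 0.999)  # placeholder for AdamW beta

def build_optimizer(model, cfg: RevisorTrainConfig):
    # AdamW optimizer as named in Table 6; hyperparameters come from the config.
    return torch.optim.AdamW(
        model.parameters(),
        lr=cfg.learning_rate,
        betas=cfg.adam_betas,
        eps=cfg.adam_eps,
        weight_decay=cfg.weight_decay,
    )
```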
2310.00754#56
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
57
A.3 PROMPTS FOR TRAINING DATASET We leverage the in-context few-shot learning capability of GPT-3.5 to automatically generate hallucinatory data for revision. Initially, we prompt GPT-3.5 to provide a list of objects that are highly likely to co-occur with the objects mentioned in the given description. Next, we use LVLMs (such as MiniGPT-4) to generate descriptions for the training set of 5000 images; during decoding, nouns whose −log p(z_i | s_{<i}, x) exceeds the uncertainty threshold γ are saved to the list of uncertain objects for each image. Subsequently, we direct the model to incorporate into the original description one randomly chosen word from the "co-occur objects" list and another randomly chosen word from the "uncertain objects" list. Detailed prompts are listed in Table 7 and a few examples are presented in Table 12.
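A minimal sketch of this data-construction loop, using the two instructions from Table 7, is shown below. The call_gpt helper is hypothetical and stands in for whatever GPT-3.5 chat-completion client is actually used; it is not a specific library call.

```python
# Illustrative sketch of the hallucinatory-data construction described above
# (not the authors' code). `call_gpt(prompt) -> str` is a hypothetical helper
# wrapping a GPT-3.5 chat-completion client.
def call_gpt(prompt: str) -> str:
    raise NotImplementedError("plug in a GPT-3.5 client here")

def build_hallucinated_caption(description: str, uncertain_objects: list) -> str:
    # Instruction 1 (Table 7): ask for objects likely to co-occur with the scene.
    co_prompt = (
        "List three other objects that you think are most likely to appear "
        f"with the objects in the scene described below: {description}\n"
        "Output in strict accordance with the following format:\n"
        "Object one\nObject two\nObject three"
    )
    co_objects = [line.strip() for line in call_gpt(co_prompt).splitlines() if line.strip()]

    # Instruction 2 (Table 7): splice one co-occurring object and one uncertain
    # object into the original caption to obtain a hallucinatory caption.
    edit_prompt = (
        f"Input caption: {description}\n"
        f"co objects list: {', '.join(co_objects)}\n"
        f"uncertain objects list: {', '.join(uncertain_objects)}\n"
        'Select one object from "co objects list" and "uncertain objects list" '
        'respectively and add it to "Input caption" to get "Output caption". '
        "(Try not to change the format)\nOutput caption:"
    )
    return call_gpt(edit_prompt)
```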
2310.00754#57
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
58
Table 7: The prompt for the GPT-3.5 API to generate the required hallucination dataset. "Instruction 1" is used to ask ChatGPT to provide a list of co-occurring objects based on the description, while "Instruction 2" is used to integrate the objects obtained from the co-occurring object list and the objects from the list of uncertain objects into the given description. Instruction 1: List three other objects that you think are most likely to appear with the objects in the scene described below: {description} Output in strict accordance with the following format: Object one Object two Object three Instruction 2: Input caption: {description} co objects list: {co objects list} uncertain objects list: {uncertain objects list} Select one object from "co objects list" and "uncertain objects list" respectively and add it to "Input caption" to get "Output caption". (Try not to change the format) Output caption: A.4 DETAILS ABOUT BASELINE In this section, we provide a detailed explanation of the settings used for the baselines in Table 1, including parameter settings and prompt configurations. The detailed prompts for the baselines can be seen in Table 8.
2310.00754#58
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
59
• Teacher: The "Teacher" approach generates short descriptions for the images via BLIP-2 (Li et al., 2023b) and uses them as context to guide the model in generating descriptions. By providing these descriptions as additional information, the model can benefit from the guidance and produce more accurate or relevant descriptions. • CoT: The "CoT" method asks the model to first list the objects it identifies in the image and then describe the image based on those objects. It draws inspiration from the concept of chain of thought (Wei et al., 2022) and aims to guide the model toward accurate descriptions by focusing on object recognition. • Greedy-Decoding: The "Greedy-Decoding" strategy differs from the "Original" strategy only in that the model uses greedy decoding instead of sampling when generating image descriptions, producing the most deterministic output. This approach is used to explore the potential connection between hallucination and the use of sampling.
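The decoding difference between the "Original" and "Greedy-Decoding" baselines comes down to a single switch in the generation call. The sketch below uses Hugging Face-style generate() arguments purely for illustration; the exact generation interface of a specific LVLM such as MiniGPT-4 may differ, and the sampling hyperparameters shown are assumed placeholders.

```python
# Illustrative contrast between the "Greedy-Decoding" and "Original" (sampling)
# baselines. This is a sketch, not the LVLMs' actual inference code.
def describe_image(model, input_ids, greedy: bool = False, max_new_tokens: int = 256):
    if greedy:
        # Greedy-Decoding baseline: deterministic, always pick the argmax token.
        return model.generate(input_ids, do_sample=False, max_new_tokens=max_new_tokens)
    # Original baseline: stochastic decoding via sampling.
    return model.generate(
        input_ids,
        do_sample=True,
        top_p=0.9,          # illustrative placeholder sampling hyperparameters
        temperature=1.0,
        max_new_tokens=max_new_tokens,
    )
```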
2310.00754#59
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
60
• GPT-Ensemble: In "GPT-Ensemble," we utilize GPT-3.5 to summarize the common elements in the descriptions generated by multiple LVLMs, excluding the one being evaluated. Subsequently, we employ GPT-3.5 to rewrite the description of the evaluated LVLM, using the identified common elements from the descriptions of the other models to correct any dissimilar parts in the evaluated model's description. • GPT-Teacher: "GPT-Teacher" represents the process of providing the GPT-3.5 API with contextual references and descriptions from the model's output, allowing it to revise the inaccurate description generated by the model into a more accurate version based on the contextual information. Table 8: Prompts for baselines. Teacher: Reference caption: {blip2 caption} Please refer to reference caption and describe this picture: CoT: Human: Please list the main objects in the picture and strictly follow the following format: {object1, object2, object3......} AI: {objects list} Human: Describe this image AI: {description}
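As an illustration of how the GPT-Ensemble prompt from Table 8 could be assembled programmatically from the other five models' descriptions, here is a minimal sketch; the call_gpt parameter is a hypothetical GPT-3.5 client function, not a specific API.

```python
# Sketch of assembling the GPT-Ensemble prompt (Table 8) from the descriptions
# produced by the five other LVLMs plus the evaluated model's description.
# `call_gpt` is a hypothetical GPT-3.5 client function.
def gpt_ensemble_rewrite(reference_captions, original_description, call_gpt):
    assert len(reference_captions) == 5, "Table 8 assumes five reference captions"
    lines = [f"Reference captions {i + 1}:{cap}" for i, cap in enumerate(reference_captions)]
    lines.append(f"Original Description:{original_description}")
    lines.append(
        "Synthesizing the commonalities of Reference captions 1-5, and then removing "
        "the parts in the Original Description that do not align with the commonalities, "
        "while preserving the original format."
    )
    lines.append("Answer:")
    return call_gpt("\n".join(lines))
```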
2310.00754#60
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
61
GPT-Ensemble: Reference captions 1:{description of model 1} Reference captions 2:{description of model 2} Reference captions 3:{description of model 3} Reference captions 4:{description of model 4} Reference captions 5:{description of model 5} Original Description:{description} Synthesizing the commonalities of Reference captions 1-5, and then removing the parts in the Original Description that do not align with the commonalities, while preserving the original format. Answer: GPT-Teacher: Reference caption: {blip2 caption} Original description: {description} Rewrite the original description to align it with the reference caption, delete some objects that you think are hallucinations, and keep the original format. Answer: A.5 DETAILS ABOUT MANUAL ANNOTATION EVALUATIONS The manual evaluation annotation tool provides a user-friendly interface for performing manual annotations and capturing evaluation feedback. The interface is hosted on the Amazon Web Services (AWS) platform, which offers scalability, reliability, and security for handling annotation tasks. As shown in Figure 4, we annotated all objects and all hallucinated objects in each description with respect to the corresponding image, and then assigned a binary label (0/1) indicating whether the description contains hallucinations. Based on these fine-grained annotation results, and similar to the GPT evaluation, we ranked the results of the different baselines.
2310.00754#61
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]
2310.00754
62
[Figure 4: Human evaluation annotation interface. The interface displays the annotator instructions: (1) the object list mentioned in the description, (2) the list of hallucinatory objects mentioned in the description (limited to objects only), and (3) whether hallucination exists in this description (1 if yes, 0 otherwise). It also shows the format requirements for the object list and the hallucinatory-object list, an example model description, and the corresponding input fields.] # B DETAILED PROOF B.1 PROOF OF THEOREM 2.1
2310.00754#62
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
http://arxiv.org/pdf/2310.00754
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
cs.LG, cs.CL, cs.CV
null
null
cs.LG
20231001
20231001
[ { "id": "2308.14972" }, { "id": "2306.05424" }, { "id": "2302.13971" }, { "id": "2308.06394" }, { "id": "2304.04920" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2306.09299" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "2304.08485" }, { "id": "2111.08276" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "1904.09675" }, { "id": "2308.15126" }, { "id": "2306.05685" }, { "id": "1702.01806" }, { "id": "2305.03726" }, { "id": "2304.09349" }, { "id": "2303.16199" }, { "id": "2302.07257" }, { "id": "1904.09751" } ]