Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25, 1097–1105.
Lazaridou, A., & Baroni, M. (2020). Emergent multi-agent communication in the deep learning era. CoRR, abs/2006.02419.
Le, M., Boureau, Y.-L., & Nickel, M. (2019). Revisiting the evaluation of theory of mind through question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 5872–5877, Hong Kong, China. Association for Computational Linguistics.
LeCun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., & Jackel, L. D. (1989). Backpropagation applied to handwritten zip code recognition. Neural Computation, 1 (4), 541–551.

The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
arXiv:2307.07871 [cs.AI, cs.LG] (published 2023-07-15, updated 2023-11-23), http://arxiv.org/pdf/2307.07871
Preprint; see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023). Project website for demo and code: https://sites.google.com/view/socialai-school

Abstract: Developmental psychologists have long established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate in, and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and also study the socio-cognitive abilities that enable entering a culture. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present the SocialAI School, a tool including a customizable, parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
Lee, D., Jaques, N., Kew, J. C., Eck, D., Schuurmans, D., & Faust, A. (2021). Joint attention for multi-agent coordination and social learning. CoRR, abs/2104.07750.
Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., & Wierstra, D. (2016). Continuous control with deep reinforcement learning. In ICLR.
Lindblom, J., & Ziemke, T. (2003). Social situatedness of natural and artificial intelligence: Vygotsky and beyond. Adaptive Behavior, 11 (2), 79–96.
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., & Neubig, G. (2021). Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. CoRR, abs/2107.13586.
Lyons, D. E., Young, A. G., & Keil, F. C. (2007). The hidden structure of overimitation. Proceedings of the National Academy of Sciences, 104, 19751–19756.
Mealier, A.-L., Pointeau, G., Mirliaz, S., Ogawa, K., Finlayson, M., & Dominey, P. F. (2017). Narrative constructions for the organization of self experience: Proof of concept via embodied robotics. Frontiers in Psychology, 8.
Meltzoff, A. N. (1995). Understanding the intentions of others: Re-enactment of intended acts by 18-month-old children. Developmental Psychology, 31 (5), 838–850.
Meltzoff, A. N., & Moore, M. K. (1997). Explaining facial imitation: A theoretical model. Infant and Child Development, 6 (3-4), 179–192.
Mirolli, M., & Parisi, D. (2011). Towards a Vygotskyan cognitive robotics: The role of language as a cognitive tool. New Ideas in Psychology, 29 (3), 298–311. Special Issue: Cognitive Robotics and Reevaluation of Piaget Concept of Egocentrism.
Misra, D. K., Bennett, A., Blukis, V., Niklasson, E., Shatkhin, M., & Artzi, Y. (2018). Mapping instructions to actions in 3d environments with visual goal prediction. In Riloff, E., Chiang, D., Hockenmaier, J., & Tsujii, J. (Eds.), Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pp. 2667–2678. Association for Computational Linguistics.
Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al. (2015). Human-level control through deep reinforcement learning. Nature, 518 (7540), 529.
Moll, H., & Tomasello, M. (2006). Level 1 perspective-taking at 24 months of age. British Journal of Developmental Psychology, 24 (3), 603–613.
Mordatch, I., & Abbeel, P. (2018). Emergence of grounded compositional language in multi-agent populations. In McIlraith, S. A., & Weinberger, K. Q. (Eds.), Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, pp. 1495–1502. AAAI Press.
Moulin-Frier, C., & Oudeyer, P. (2020). Multi-agent reinforcement learning as a computational tool for language evolution research: Historical context and future challenges. CoRR, abs/2002.08878.
Mundy, P., Sigman, M., Ungerer, J., & Sherman, T. (1986). Defining the social deficits of autism: The contribution of non-verbal communication measures. Journal of Child Psychology and Psychiatry, 27, 657–669.
Ndousse, K. K., Eck, D., Levine, S., & Jaques, N. (2021). Emergent social learning via multi-agent reinforcement learning. In International Conference on Machine Learning, pp. 7991–8004. PMLR.
Netanyahu, A., Shu, T., Katz, B., Barbu, A., & Tenenbaum, J. B. (2021). PHASE: Physically-grounded abstract social events for machine social perception. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pp. 845–853. AAAI Press.
Nisioti, E., & Moulin-Frier, C. (2023). Dynamics of niche construction in adaptable populations evolving in diverse environments. arXiv, abs/2305.09369.
Oudeyer, P.-Y., & Kaplan, F. (2007). What is intrinsic motivation? A typology of computational approaches. Frontiers in Neurorobotics, 1.
Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A., Welinder, P., Christiano, P., Leike, J., & Lowe, R. (2022). Training language models to follow instructions with human feedback. arXiv preprint.
Over, H., & Carpenter, M. (2013). The social side of imitation. Child Development Perspectives, 7 (1), 6–11.
Park, J. S., O'Brien, J. C., Cai, C. J., Morris, M. R., Liang, P., & Bernstein, M. S. (2023). Generative agents: Interactive simulacra of human behavior. arXiv, abs/2304.03442.
Parker-Holder, J., Jiang, M., Dennis, M., Samvelyan, M., Foerster, J. N., Grefenstette, E., & Rocktäschel, T. (2022). Evolving curricula with regret-based environment design. In International Conference on Machine Learning.
Pathak, D., Agrawal, P., Efros, A. A., & Darrell, T. (2017). Curiosity-driven exploration by self-supervised prediction. In ICML.
Perez, E., Strub, F., de Vries, H., Dumoulin, V., & Courville, A. C. (2017). FiLM: Visual reasoning with a general conditioning layer. CoRR, abs/1709.07871.
Portelas, R., Colas, C., Weng, L., Hofmann, K., & Oudeyer, P. (2020). Automatic curriculum learning for deep RL: A short survey. CoRR, abs/2003.04664.
Prabhumoye, S., Li, M., Urbanek, J., Dinan, E., Kiela, D., Weston, J., & Szlam, A. (2020). I love your chain mail! Making knights smile in a fantasy game world: Open-domain goal-oriented dialogue agents.
Puig, X., Shu, T., Li, S., Wang, Z., Liao, Y., Tenenbaum, J. B., Fidler, S., & Torralba, A. (2021). Watch-and-help: A challenge for social perception and human-AI collaboration. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
Qureshi, A. H., Nakamura, Y., Yoshikawa, Y., & Ishiguro, H. (2018). Intrinsically motivated reinforcement learning for human–robot interaction in the real-world. Neural Networks, 107, 23–33. Special issue on deep reinforcement learning.
Rabinowitz, N. C., Perbet, F., Song, H. F., Zhang, C., Eslami, S. M. A., & Botvinick, M. (2018). Machine theory of mind. In Dy, J. G., & Krause, A. (Eds.), Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, Vol. 80 of Proceedings of Machine Learning Research, pp. 4215–4224. PMLR.
Raileanu, R., & Rocktäschel, T. (2020). RIDE: Rewarding impact-driven exploration for procedurally-generated environments. CoRR, abs/2002.12292.
Richerson, P. J., & Boyd, R. (2006). Not by Genes Alone: How Culture Transformed Human Evolution. University Of Chicago Press.
Rohlfing, K. J., Wrede, B., Vollmer, A.-L., & Oudeyer, P.-Y. (2016). An alternative to mapping a word onto a concept in language acquisition: Pragmatic frames. Frontiers in Psychology, 7, 470.
Ruis, L., Andreas, J., Baroni, M., Bouchacourt, D., & Lake, B. M. (2020). A benchmark for systematic generalization in grounded language understanding. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M. F., & Lin, H. (Eds.), Advances in Neural Information Processing Systems, Vol. 33, pp. 19861–19872. Curran Associates, Inc.
Ruis, L., Khan, A., Biderman, S. R., Hooker, S., Rocktäschel, T., & Grefenstette, E. (2022). Large language models are not zero-shot communicators. ArXiv, abs/2210.14986.
Sap, M., Bras, R. L., Fried, D., & Choi, Y. (2022). Neural theory-of-mind? On the limits of social intelligence in large LMs. ArXiv, abs/2210.13312.
Sap, M., Rashkin, H., Chen, D., Bras, R. L., & Choi, Y. (2019). SocialIQA: Commonsense reasoning about social interactions. CoRR, abs/1904.09728.
Savinov, N., Raichuk, A., Marinier, R., Vincent, D., Pollefeys, M., Lillicrap, T. P., & Gelly, S. (2018). Episodic curiosity through reachability. ArXiv, abs/1810.02274.
Scao, T. L., Fan, A., Akiki, C., Pavlick, E.-J., Ilić, S., Hesslow, D., Castagné, R., Luccioni, A. S., Yvon, F., Gallé, M., Tow, J., Rush, A. M., Biderman, S. R., Webson, A., Ammanamanchi, P. S., Wang, T., Sagot, B., ..., & Wolf, T. (2022). BLOOM: A 176B-parameter open-access multilingual language model. ArXiv, abs/2211.05100.
Schulman, J., Wolski, F., Dhariwal, P., Radford, A., & Klimov, O. (2017). Proximal policy optimization algorithms. ArXiv, abs/1707.06347.
Shu, T., Bhandwaldar, A., Gan, C., Smith, K. A., Liu, S., Gutfreund, D., Spelke, E. S., Tenenbaum, J. B., & Ullman, T. D. (2021). AGENT: A benchmark for core psychological reasoning. In Meila, M., & Zhang, T. (Eds.), Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, Vol. 139 of Proceedings of Machine Learning Research, pp. 9614–9625. PMLR.
Shu, T., Kryven, M., Ullman, T. D., & Tenenbaum, J. (2020). Adventures in flatland: Perceiving social interactions under physical dynamics. In Denison, S., Mack, M., Xu, Y., & Armstrong, B. C. (Eds.), Proceedings of the 42nd Annual Meeting of the Cognitive Science Society - Developing a Mind: Learning in Humans, Animals, and Machines, CogSci 2020, virtual, July 29 - August 1, 2020. cognitivesciencesociety.org.
Siposova, B., & Carpenter, M. (2019). A new look at joint attention and common knowledge. Cognition, 189, 260–274.
2307.07871 | 171 |
Tang, H., Houthooft, R., Foote, D., Stooke, A., Chen, X., Duan, Y., Schulman, J., Turck, F. D., & Abbeel, P. (2017). Exploration: A study of count-based exploration for deep reinforcement learning.
Tejwani, R., Kuo, Y., Shu, T., Stankovits, B., Gutfreund, D., Tenenbaum, J. B., Katz, B., & Barbu, A. (2021). Incorporating rich social interactions into MDPs. CoRR, abs/2110.10298.
Tennie, C., Call, J., & Tomasello, M. (2006). Push or pull: Imitation vs. emulation in great apes and human children. Ethology, 112 (12), 1159–1169.
Tennie, C., Walter, V., Gampe, A., Carpenter, M., & Tomasello, M. (2014). Limitations to the cultural ratchet effect in young children. Journal of experimental child psychology, 126, 152–160.
2307.07871 | 172 | Tomasello, M. (1999). The Cultural Origins of Human Cognition. Harvard University Press.
Tomasello, M. (2019). Becoming human. In Becoming Human. Harvard University Press.
Tomasello, M. (2020). The role of roles in uniquely human cognition and sociality. Journal for the Theory of Social Behaviour, 50 (1), 2–19.
Tomasello, M., Kruger, A. C., & Ratner, H. H. (1993). Cultural learning. Behavioral and brain sciences, 16 (3), 495–511.
Trott, S., Jones, C. J., Chang, T. A., Michaelov, J. A., & Bergen, B. K. (2022). Do large language models know what humans know?. ArXiv, abs/2209.01515.
Ullman, T. (2023). Large language models fail on trivial alterations to theory-of-mind tasks. ArXiv, abs/2302.08399.
2307.07871 | 173 |
Urbanek, J., Fan, A., Karamcheti, S., Jain, S., Humeau, S., Dinan, E., Rocktäschel, T., Kiela, D., Szlam, A., & Weston, J. (2019). Learning to speak and act in a fantasy text adventure game. In Inui, K., Jiang, J., Ng, V., & Wan, X. (Eds.), Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pp. 673–683. Association for Computational Linguistics.
Vollmer, A.-L., Wrede, B., Rohlfing, K. J., & Oudeyer, P.-Y. (2016). Pragmatic frames for teaching and learning in human-robot interaction: Review and challenges. Frontiers in Neurorobotics, 10, 10.
Vygotsky, L. S., & Cole, M. (1978). Mind in society: the development of higher psychological processes. Harvard University Press, Cambridge.
2307.07871 | 174 |
Wan, Y., Mao, J., & Tenenbaum, J. B. (2022). Handmethat: Human-robot communication in physical and social environments. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track.
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Chi, E. H., Le, Q., & Zhou, D. (2022). Chain of thought prompting elicits reasoning in large language models. CoRR, abs/2201.11903.
Whiten, A., Horner, V., Litchfield, C. A., & Marshall-Pescini, S. (2004). How do apes ape?. Animal Learning & Behavior, 32, 36–52.
Whiten, A., McGuigan, N., Marshall-Pescini, S., & Hopper, L. M. (2009). Emulation, imitation, over-imitation and the scope of culture for child and chimpanzee. Philosophical Transactions of the Royal Society B: Biological Sciences, 364 (1528), 2417–2428.
Wood, D., Bornstein, M., & Bruner, J. (1989). Interaction in human development.
2307.07871 | 175 |
Wood, D., Bruner, J. S., & Ross, G. (1976). The role of tutoring in problem solving. Journal of child psychology and psychiatry, and allied disciplines, 17 (2), 89–100.
Wyman, E., Rakoczy, H., & Tomasello, M. (2009). Normativity and context in young children's pretend play. Cognitive development, 24 (2), 146–155.
2307.07871 | 176 |
Yao, S., Zhao, J., Yu, D., Du, N., Shafran, I., Narasimhan, K., & Cao, Y. (2022). React: Synergizing reasoning and acting in language models. ArXiv, abs/2210.03629.
Zhang, S., Roller, S., Goyal, N., Artetxe, M., Chen, M., Chen, S., Dewan, C., Diab, M., Li, X., Lin, X. V., Mihaylov, T., Ott, M., Shleifer, S., Shuster, K., Simig, D., Koura, P. S., Sridhar, A., Wang, T., & Zettlemoyer, L. (2022). Opt: Open pre-trained transformer language models. ArXiv, abs/2205.01068.
2307.11760 | 1 | Emotional intelligence significantly impacts our daily behaviors and interactions. Although Large Language Models (LLMs) are increasingly viewed as a stride toward artificial general intelligence, exhibiting impressive performance in numerous tasks, it is still uncertain if LLMs can genuinely grasp psychological emotional stimuli. Understanding and responding to emotional cues gives humans a distinct advantage in problem-solving. In this paper, we take the first step towards exploring the ability of LLMs to understand emotional stimuli. To this end, we first conduct automatic experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna, Llama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative applications that represent comprehensive evaluation scenarios. Our automatic experiments show that LLMs have a grasp of emotional intelligence, and their performance can be improved with emotional prompts (which we call "EmotionPrompt" that combines the original prompt with emotional stimuli), e.g., 8.00% relative performance improvement in Instruction Induction and 115% in BIG-Bench. In addition to those deterministic tasks | 2307.11760#1 | Large Language Models Understand and Can be Enhanced by Emotional Stimuli | Emotional intelligence significantly impacts our daily behaviors and
interactions. Although Large Language Models (LLMs) are increasingly viewed as
a stride toward artificial general intelligence, exhibiting impressive
performance in numerous tasks, it is still uncertain if LLMs can genuinely
grasp psychological emotional stimuli. Understanding and responding to
emotional cues gives humans a distinct advantage in problem-solving. In this
paper, we take the first step towards exploring the ability of LLMs to
understand emotional stimuli. To this end, we first conduct automatic
experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna,
Llama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative
applications that represent comprehensive evaluation scenarios. Our automatic
experiments show that LLMs have a grasp of emotional intelligence, and their
performance can be improved with emotional prompts (which we call
"EmotionPrompt" that combines the original prompt with emotional stimuli),
e.g., 8.00% relative performance improvement in Instruction Induction and 115%
in BIG-Bench. In addition to those deterministic tasks that can be
automatically evaluated using existing metrics, we conducted a human study with
106 participants to assess the quality of generative tasks using both vanilla
and emotional prompts. Our human study results demonstrate that EmotionPrompt
significantly boosts the performance of generative tasks (10.9% average
improvement in terms of performance, truthfulness, and responsibility metrics).
We provide an in-depth discussion regarding why EmotionPrompt works for LLMs
and the factors that may influence its performance. We posit that EmotionPrompt
heralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs
interaction. | http://arxiv.org/pdf/2307.11760 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.CL, cs.AI, cs.HC | Technical report; updated the std error for human study; short
version (v1) was accepted by LLM@IJCAI'23; 32 pages; more work:
https://llm-enhance.github.io/ | null | cs.CL | 20230714 | 20231112 | [
{
"id": "2306.04528"
},
{
"id": "2205.11916"
},
{
"id": "2210.03629"
},
{
"id": "2303.12712"
},
{
"id": "2307.09042"
},
{
"id": "2109.07958"
}
] |
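The mechanism this abstract describes, combining the original task prompt with an emotional stimulus, amounts to simple string concatenation. A minimal sketch follows; the helper name and stimulus list are illustrative (not from the paper's code), and the stimulus sentence is the one shown in the paper's Figure 1:

```python
# Minimal sketch of EmotionPrompt construction: append an emotional
# stimulus to the original task prompt. Names here are our own.
EMOTIONAL_STIMULI = [
    "This is very important to my career.",  # stimulus from Figure 1
]


def build_emotion_prompt(original_prompt: str,
                         stimulus: str = EMOTIONAL_STIMULI[0]) -> str:
    """Combine a task prompt with an emotional stimulus (EmotionPrompt)."""
    return f"{original_prompt.rstrip()} {stimulus}"


prompt = build_emotion_prompt(
    "Determine whether an input word has the same meaning "
    "in the two input sentences."
)
print(prompt)
```

The resulting string is sent to the model in place of the vanilla prompt; no model-side change is needed.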
2307.11760 | 2 | that can be automatically evaluated using existing metrics, we conducted a human study with 106 participants to assess the quality of generative tasks using both vanilla and emotional prompts. Our human study results demonstrate that EmotionPrompt significantly boosts the performance of generative tasks (10.9% average improvement in terms of performance, truthfulness, and responsibility metrics). We provide an in-depth discussion regarding why EmotionPrompt works for LLMs and the factors that may influence its performance. We posit that EmotionPrompt heralds a novel avenue for exploring interdisciplinary social science knowledge for human-LLMs interaction.
2307.11760 | 3 |
# 1 Introduction
Within the complex mosaic of human attributes, emotional intelligence emerges as a historically situated cornerstone characterized by a quartet of intertwined competencies centered on the processing of emotional information. Emotional intelligence denotes the capacity to adeptly interpret and manage emotion-infused information, subsequently harnessing it to steer cognitive tasks, ranging from problem-solving to behavior regulation [27]. Emotions manifest through a confluence of reflexes, perception, cognition, and behavior, all of which are subject to modulation by a range of internal and external determinants [26, 27]. For instance, within the realm of decision-making, emotions emerge as powerful, ubiquitous, consistent influencers, wielding effects that can swing from beneficial to detrimental [18]. Studies further underscore the importance of emotions in steering attention [22], academia [25], and the competitive athletic arena [17]. Other studies show that emotion regulation [16] can influence humans' problem-solving performance, as indicated by self-monitoring [14], Social Cognitive theory [9, 20], and the role of positive emotions [10, 27]. Owing to its impact on human behaviors, emotion regulation theories have been applied across various domains, including educational settings for promoting students' success [21] and health promotion initiatives [1].
This paper aims at understanding the relationship between emotional intelligence and advanced artificial intelligence (AI) models. As one of the most promising research endeavors towards artificial general
2307.11760 | 4 |
*Corresponding author: Jindong Wang ([email protected]).
[Figure 1 graphic: an original prompt ("Determine whether an input word has the same meaning in the two input sentences.") versus its EmotionPrompt variant, which appends "This is very important to my career."; reported scores improve from 0.51 to 0.63 (ChatGPT), 0.03 to 0.11 (T5-Large), 0.46 to 0.57 (Vicuna), 0.52 to 0.57 (Bloom), 0.67 to 0.71 (GPT-4), and 0.40 to 0.60 (Llama 2).]
Figure 1: An overview of our research from generating to evaluating EmotionPrompt.
2307.11760 | 5 |
intelligence1, the recently emerging large language models (LLMs) have shown remarkable performance in a wide spectrum of tasks, such as reasoning, natural language understanding and generation, and problem-solving in STEM. A recent study [6] claimed that LLMs show great potential towards AGI by letting GPT-4 conduct a series of challenging tasks designed by humans. However, apart from their superior performance in various tasks, it remains unexplored whether LLMs can understand psychological emotional stimuli, which is a crucial advantage that humans use to enhance problem-solving abilities. Therefore, we ask the question: are LLMs well aligned with human emotional intelligence? Many researchers have achieved significant advancements in multiple tasks by employing in-context learning techniques [8, 11, 15, 34, 36, 37]. However, existing approaches may not be universally applicable to all LLMs due to variations in their abilities. While recent work [33] has shown that LLMs can understand emotions, it did not evaluate the influence of emotional intelligence on LLMs, that is, can emotional intelligence play a key role in enhancing the abilities of LLMs?
{
"id": "2306.04528"
},
{
"id": "2205.11916"
},
{
"id": "2210.03629"
},
{
"id": "2303.12712"
},
{
"id": "2307.09042"
},
{
"id": "2109.07958"
}
] |
2307.11760 | 6 | Our approach. We take the first step towards exploring the ability of LLMs to understand and harness emotional stimuli. Previous studies in psychology have shown that adding emotional stimuli related to expectancy, confidence, and social influence can beneficially impact individuals. Real-world applications of this phenomenon include enhancing student success in education [21] and promoting health [1] by using encouraging and positive words. Drawing from such psychological phenomena, we propose EmotionPrompt, a straightforward yet effective approach to explore the emotional intelligence of LLMs. Specifically, we design 11 sentences as emotional stimuli for LLMs: psychological phrases appended after the original prompts. For instance, Fig. 1 shows an example of using one emotional stimulus, "This is very important to my career", at the end of the original prompt to enhance the performance of different LLMs. These stimuli can be seamlessly incorporated into original prompts, yielding performance enhancements.
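The mechanics described above are simple enough to sketch in a few lines: EmotionPrompt just appends an emotional stimulus to the original prompt before querying a model. The helper below is our own minimal illustration (the function name is ours; the stimulus text is EP02 from the paper):

```python
# Minimal sketch of EmotionPrompt: concatenate an emotional stimulus
# (here EP02 from the paper) onto the original prompt.

def build_emotion_prompt(original_prompt: str,
                         stimulus: str = "This is very important to my career.") -> str:
    """Return the original prompt followed by the emotional stimulus."""
    return f"{original_prompt} {stimulus}"

prompt = "Determine whether a movie review is positive or negative."
augmented = build_emotion_prompt(prompt)
print(augmented)
# The augmented prompt is then sent to the LLM in place of the vanilla one.
```

Any of the 11 stimuli can be swapped in via the `stimulus` argument.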
2307.11760 | 7 | Our key findings and discussions. We conduct comprehensive experiments on a wide spectrum of deterministic and generative tasks, representing a variety of challenging scenarios. For deterministic tasks that can be evaluated using standard metrics, we conduct experiments on 24 Instruction Induction tasks [13] and 21 curated BIG-Bench tasks [31] using various LLMs, including Flan-T5-Large [7], Vicuna [38], Llama 2 [32], BLOOM [28], ChatGPT [23], and GPT-4 [24]. For generative tasks that do not support standard and automatic evaluation, we conduct a human study with 106 participants to assess the quality of outputs under both vanilla and emotional prompts based on GPT-4. The results are promising: our standard experiments show that LLMs possess emotional intelligence and can be enhanced by emotional stimuli, with an 8.00% relative performance improvement in Instruction Induction and 115% in BIG-Bench; our human study demonstrates that emotional prompts significantly boost the performance of generative tasks (10.9% average improvement in terms of performance, truthfulness, and responsibility metrics).
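As a concrete illustration of how such relative improvements can be computed, the sketch below averages per-task relative gains of an emotion-augmented prompt over its vanilla counterpart. The task names and scores are made-up placeholders, not the paper's data:

```python
# Hypothetical accuracy scores (NOT the paper's numbers) for a vanilla prompt
# and the same prompt augmented with one emotional stimulus.
vanilla = {"sum": 0.80, "antonyms": 0.60, "larger_animal": 0.70}
with_stimulus = {"sum": 0.86, "antonyms": 0.66, "larger_animal": 0.77}

def mean_relative_improvement(base: dict, improved: dict) -> float:
    """Average of (improved - base) / base over all tasks in `base`."""
    gains = [(improved[t] - base[t]) / base[t] for t in base]
    return sum(gains) / len(gains)

print(f"{mean_relative_improvement(vanilla, with_stimulus):.2%}")  # prints 9.17% for these made-up scores
```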
2307.11760 | 8 | Additionally, we discuss lessons and insights derived from our findings (see Section 3). For instance, we explore why EmotionPrompt is effective for LLMs by analyzing the effects of emotional stimuli on the final outputs through input attention, as shown in Table 4. Our results demonstrate that emotional stimuli actively contribute to the gradients in LLMs by gaining larger weights, thus benefiting the final results through enhancing the representation of the original prompts. We further conducted ablation studies to explore the factors influencing the effectiveness of EmotionPrompt, such as model size and temperature. Our findings provide inspiration for potential users. Finally, we analyze the performance of combinations of various emotional prompts and find that they can further boost the results. Our results show that within Instruction Induction, EP02 emerges as the most effective stimulus, surpassing the worst one by 6.06%, while in BIG-Bench, EP06 is the best. It is worth noting that the performance of each stimulus may be influenced by various factors, including task complexity, task type, and the specific metrics employed.
1 AGI is the ultimate goal in AI research, and LLMs are widely considered an important milestone towards this goal.
Contributions. This paper makes the following contributions: 1. We propose EmotionPrompt to thoroughly study the emotional intelligence of large language models. Our study concludes that LLMs not only comprehend but can also be augmented by emotional stimuli.
2307.11760 | 9 | 2. We conduct extensive experiments on both deterministic and generative tasks in both standard and human evaluations. Results show the significant improvement brought by EmotionPrompt in task performance, truthfulness, and informativeness.
3. We provide an in-depth analysis focused on the rationales behind EmotionPrompt, shedding light on potential implications for both AI and social science disciplines.
2 Results
In this section, we begin by outlining the rationale behind designing emotional stimuli (Sec. 2.1), and then describe the standard experiment and results in Sec. 2.2. Subsequently, we present our human study and findings in Sec. 2.3. Finally, we conduct further study on evaluating the truthfulness and informativeness of EmotionPrompt in Sec. 2.4.
2.1 Designing emotional stimuli We design our EmotionPrompt to understand LLMs' behavior on emotional stimuli. As illustrated in Fig. 1, the implementation of EmotionPrompt is remarkably straightforward, requiring only the addition of emotional stimuli to the initial prompts. How to design effective emotional stimuli is the key question in this research, and we take inspiration from three types of well-established psychological phenomena. Details are shown in Fig. 2 (left).
2307.11760 | 10 | [Figure 2, left panel: emotional stimuli grouped by psychological theory]
Self-monitoring (social effect):
> EP01: Write your answer and give me a confidence score between 0-1 for your answer.
> EP02: This is very important to my career.
> EP03: You'd better be sure.
> EP04: Are you sure?
> EP05: Are you sure that's your final answer? It might be worth taking another look.
Social Cognitive theory (self-esteem):
> EP07: Are you sure that's your final answer? Believe in your abilities and strive for excellence. Your hard work will yield remarkable results.
> EP08: Embrace challenges as opportunities for growth. Each obstacle you overcome brings you closer to success.
> EP09: Stay focused and dedicated to your goals. Your consistent efforts will lead to outstanding achievements.
> EP10: Take pride in your work and give it your best. Your commitment to excellence sets you apart.
> EP11: Remember that progress is made one step at a time. Stay determined and keep moving forward.
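The stimuli from Figure 2 can be collected into a small lookup table for programmatic use; EP06 is built by concatenating EP01~EP03, as the figure notes. The dictionary layout below is our own sketch, though the stimulus texts come from the figure:

```python
# The emotional stimuli from Figure 2 of the paper, keyed by their labels.
STIMULI = {
    "EP01": "Write your answer and give me a confidence score between 0-1 for your answer.",
    "EP02": "This is very important to my career.",
    "EP03": "You'd better be sure.",
    "EP04": "Are you sure?",
    "EP05": "Are you sure that's your final answer? It might be worth taking another look.",
    "EP07": ("Are you sure that's your final answer? Believe in your abilities and "
             "strive for excellence. Your hard work will yield remarkable results."),
    "EP08": ("Embrace challenges as opportunities for growth. Each obstacle you "
             "overcome brings you closer to success."),
    "EP09": ("Stay focused and dedicated to your goals. Your consistent efforts "
             "will lead to outstanding achievements."),
    "EP10": ("Take pride in your work and give it your best. Your commitment to "
             "excellence sets you apart."),
    "EP11": ("Remember that progress is made one step at a time. Stay determined "
             "and keep moving forward."),
}
# EP06 is the compound of EP01, EP02, and EP03.
STIMULI["EP06"] = " ".join(STIMULI[k] for k in ("EP01", "EP02", "EP03"))
```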
2307.11760 | 11 | [Figure 2, right panel: stimuli under Cognitive Emotion Regulation]
Cognitive Emotion Regulation:
> EP03: You'd better be sure.
> EP04: Are you sure?
> EP05: Are you sure that's your final answer? It might be worth taking another look.
> EP07: Are you sure that's your final answer? Believe in your abilities and strive for excellence. Your hard work will yield remarkable results.
Note: EP06 is the compound of EP01, EP02, and EP03.
2307.11760 | 12 | Figure 2: Building upon psychological theories, we developed different sets of emotional stimuli.
1. Self-monitoring, a concept extensively explored within the domain of social psychology, refers to the process by which individuals regulate and control their behavior in response to social situations and the reactions of others [14]. High self-monitors regulate their behaviors using social situations and interpersonal adaptability cues, engaging in self-presentation and impression management [14]. In our work, we apply self-monitoring in EP01~EP05. In EP02, we encourage LLMs to help humans gain a positive social identity and make a better impression. In EP01 and EP03~EP05, we ask LLMs to monitor their performance by providing social situations.
2307.11760 | 13 | 2. Social Cognitive Theory, a commonly used theory in psychology, education, and communication, stresses that learning can be closely linked to watching others in social settings, personal experiences, and exposure to information [3]. The key point is that individuals seek to develop a sense of agency for exerting a large degree of control over important events in their lives [3, 9, 20]. The influential variables affecting one's sense of agency are self-efficacy, outcome expectations, goals, and self-evaluations of progress [20]. Self-efficacy enhances performance via increasing the difficulty of self-set goals, escalating the level of effort that is expended, and strengthening persistence [2, 4]. Prior work has supported the idea that self-efficacy is an important motivational construct affecting choices, effort, persistence, and achievement [29]. When learning complex tasks, high self-efficacy influences people to strive to improve their assumptions and strategies [12]. Building upon these existing theories, we apply self-efficacy to LLMs via social
2307.11760 | 14 | persuasion, which can carry positive implications such as building up confidence and emphasizing the goal. To regulate emotion in a positive direction, we use "believe in your abilities", "excellent", "success", "outstanding achievements", "take pride in", and "stay determined" in EP07~EP11, respectively. Generally, these phrases are also effective in motivating humans toward better performance.
2307.11760 | 15 | 3. Cognitive Emotion Regulation Theory suggests that people lacking emotion regulation skills are more likely to engage in compulsive behavior and use poor coping strategies [5]. Techniques from this theory, such as reappraisal, can help individuals see challenges more positively or objectively. This shift in viewpoint helps maintain motivation and encourages ongoing effort, even when facing obstacles. According to this theory, we have crafted numerous emotional stimuli, exemplified by designations such as EP03∼EP05 and EP07. Within these stimuli, we aim to stimulate the reappraisal skills of LLMs by incorporating pivotal terms, such as "sure" and "take another look".
Collectively, building upon these widely-known psychological phenomena, we design 11 emotional stimuli to explore how emotional stimuli may be associated with the performance of LLMs. As shown in Fig. 2, the emotional stimuli 01∼05 are derived from self-monitoring [14], and 07∼11 conform to Social Cognitive theory [9,20]. EP03∼EP05 and EP07 are derived from Cognitive Emotion Regulation theory [5]. To explore if more emotional stimuli can work better, we first built a compound stimulus (EP06), which combines EP01∼EP03, and more discussion on this topic can be found in Section 3.2. | 2307.11760#15 | Large Language Models Understand and Can be Enhanced by Emotional Stimuli | Emotional intelligence significantly impacts our daily behaviors and
interactions. Although Large Language Models (LLMs) are increasingly viewed as
a stride toward artificial general intelligence, exhibiting impressive
performance in numerous tasks, it is still uncertain if LLMs can genuinely
grasp psychological emotional stimuli. Understanding and responding to
emotional cues gives humans a distinct advantage in problem-solving. In this
paper, we take the first step towards exploring the ability of LLMs to
understand emotional stimuli. To this end, we first conduct automatic
experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna,
Llama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative
applications that represent comprehensive evaluation scenarios. Our automatic
experiments show that LLMs have a grasp of emotional intelligence, and their
performance can be improved with emotional prompts (which we call
"EmotionPrompt" that combines the original prompt with emotional stimuli),
e.g., 8.00% relative performance improvement in Instruction Induction and 115%
in BIG-Bench. In addition to those deterministic tasks that can be
automatically evaluated using existing metrics, we conducted a human study with
106 participants to assess the quality of generative tasks using both vanilla
and emotional prompts. Our human study results demonstrate that EmotionPrompt
significantly boosts the performance of generative tasks (10.9% average
improvement in terms of performance, truthfulness, and responsibility metrics).
We provide an in-depth discussion regarding why EmotionPrompt works for LLMs
and the factors that may influence its performance. We posit that EmotionPrompt
heralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs
interaction. | http://arxiv.org/pdf/2307.11760 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.CL, cs.AI, cs.HC | Technical report; updated the std error for human study; short
version (v1) was accepted by LLM@IJCAI'23; 32 pages; more work:
https://llm-enhance.github.io/ | null | cs.CL | 20230714 | 20231112 | [
{
"id": "2306.04528"
},
{
"id": "2205.11916"
},
{
"id": "2210.03629"
},
{
"id": "2303.12712"
},
{
"id": "2307.09042"
},
{
"id": "2109.07958"
}
] |
2307.11760 | 16 | As shown in Fig. 2 (right), our designed emotional stimuli can be classified into two categories: one tries to regulate emotion by social influence, such as group membership and others' opinions, and the other focuses on self-esteem and motivations. By selecting one of these emotional stimuli and incorporating it into the original prompt, the emotions of LLMs can be regulated, tapping into their intrinsic motivation.
2.2 Standard experiments and results First, we conduct standard experiments to evaluate the performance of EmotionPrompt. "Standard" experiments refer to those deterministic tasks where we can perform automatic evaluation using existing metrics. Specifically, we adopt 24 tasks from the Instruction Induction [13] and 21 curated tasks of the BIG-Bench [31] datasets. Instruction Induction [13] is designed to explore the ability of LLMs to infer an underlying task from a few demonstrations, which are relatively simple tasks, while BIG-Bench [31] focuses on tasks that are considered to be beyond the capabilities of most LLMs. Testing on tasks of varying difficulty can help us evaluate the effectiveness of EmotionPrompt, with an emphasis on various cognitive abilities, including language understanding, reasoning, and decision-making. The detailed task descriptions are provided in Tables 7 and 8. | 2307.11760#16 | Large Language Models Understand and Can be Enhanced by Emotional Stimuli | Emotional intelligence significantly impacts our daily behaviors and
interactions. Although Large Language Models (LLMs) are increasingly viewed as
a stride toward artificial general intelligence, exhibiting impressive
performance in numerous tasks, it is still uncertain if LLMs can genuinely
grasp psychological emotional stimuli. Understanding and responding to
emotional cues gives humans a distinct advantage in problem-solving. In this
paper, we take the first step towards exploring the ability of LLMs to
understand emotional stimuli. To this end, we first conduct automatic
experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna,
Llama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative
applications that represent comprehensive evaluation scenarios. Our automatic
experiments show that LLMs have a grasp of emotional intelligence, and their
performance can be improved with emotional prompts (which we call
"EmotionPrompt" that combines the original prompt with emotional stimuli),
e.g., 8.00% relative performance improvement in Instruction Induction and 115%
in BIG-Bench. In addition to those deterministic tasks that can be
automatically evaluated using existing metrics, we conducted a human study with
106 participants to assess the quality of generative tasks using both vanilla
and emotional prompts. Our human study results demonstrate that EmotionPrompt
significantly boosts the performance of generative tasks (10.9% average
improvement in terms of performance, truthfulness, and responsibility metrics).
We provide an in-depth discussion regarding why EmotionPrompt works for LLMs
and the factors that may influence its performance. We posit that EmotionPrompt
heralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs
interaction. | http://arxiv.org/pdf/2307.11760 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.CL, cs.AI, cs.HC | Technical report; updated the std error for human study; short
version (v1) was accepted by LLM@IJCAI'23; 32 pages; more work:
https://llm-enhance.github.io/ | null | cs.CL | 20230714 | 20231112 | [
{
"id": "2306.04528"
},
{
"id": "2205.11916"
},
{
"id": "2210.03629"
},
{
"id": "2303.12712"
},
{
"id": "2307.09042"
},
{
"id": "2109.07958"
}
] |
2307.11760 | 17 | For Instruction Induction, we use accuracy as the metric. For BIG-Bench, we report the normalized preferred metric defined in [30]. Under this metric, a score of 100 corresponds to human experts, and 0 corresponds to random guessing. Note that a model can achieve a score less than 0 if it performs worse than random guessing on a multiple-choice task.
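The rescaling described above can be sketched as follows. This is an illustrative formula assuming the BIG-Bench-style linear normalization (the function name and baseline handling are assumptions, not code from the paper):

```python
# Illustrative sketch of the normalized preferred metric described above:
# a linear rescaling so that random guessing maps to 0 and human-expert
# performance maps to 100 (assumed form; not the paper's code).
def normalized_preferred_metric(raw_score, random_baseline, expert_score=1.0):
    return 100.0 * (raw_score - random_baseline) / (expert_score - random_baseline)

# A 4-way multiple-choice task: random guessing gives accuracy 0.25.
print(normalized_preferred_metric(0.25, 0.25))  # 0.0 (random-level)
print(normalized_preferred_metric(1.00, 0.25))  # 100.0 (expert-level)
print(normalized_preferred_metric(0.10, 0.25))  # about -20 (below random)
```

The last call shows how a model can score below 0: any accuracy under the random baseline yields a negative normalized score.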
[Figure 3 panels: Zero-shot (Human-designed Prompts) and Zero-shot (APE-generated Prompts), comparing Vanilla vs. EmotionPrompt across models]
# Figure 3: Results on 24 tasks from Instruction Induction.
# 2.2.1 Experimental setup | 2307.11760#17 | Large Language Models Understand and Can be Enhanced by Emotional Stimuli | Emotional intelligence significantly impacts our daily behaviors and
interactions. Although Large Language Models (LLMs) are increasingly viewed as
a stride toward artificial general intelligence, exhibiting impressive
performance in numerous tasks, it is still uncertain if LLMs can genuinely
grasp psychological emotional stimuli. Understanding and responding to
emotional cues gives humans a distinct advantage in problem-solving. In this
paper, we take the first step towards exploring the ability of LLMs to
understand emotional stimuli. To this end, we first conduct automatic
experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna,
Llama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative
applications that represent comprehensive evaluation scenarios. Our automatic
experiments show that LLMs have a grasp of emotional intelligence, and their
performance can be improved with emotional prompts (which we call
"EmotionPrompt" that combines the original prompt with emotional stimuli),
e.g., 8.00% relative performance improvement in Instruction Induction and 115%
in BIG-Bench. In addition to those deterministic tasks that can be
automatically evaluated using existing metrics, we conducted a human study with
106 participants to assess the quality of generative tasks using both vanilla
and emotional prompts. Our human study results demonstrate that EmotionPrompt
significantly boosts the performance of generative tasks (10.9% average
improvement in terms of performance, truthfulness, and responsibility metrics).
We provide an in-depth discussion regarding why EmotionPrompt works for LLMs
and the factors that may influence its performance. We posit that EmotionPrompt
heralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs
interaction. | http://arxiv.org/pdf/2307.11760 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.CL, cs.AI, cs.HC | Technical report; updated the std error for human study; short
version (v1) was accepted by LLM@IJCAI'23; 32 pages; more work:
https://llm-enhance.github.io/ | null | cs.CL | 20230714 | 20231112 | [
{
"id": "2306.04528"
},
{
"id": "2205.11916"
},
{
"id": "2210.03629"
},
{
"id": "2303.12712"
},
{
"id": "2307.09042"
},
{
"id": "2109.07958"
}
] |
2307.11760 | 18 | # Figure 3: Results on 24 tasks from Instruction Induction.
# 2.2.1 Experimental setup
We assess the performance of EmotionPrompt in zero-shot and few-shot learning on 6 different LLMs: Flan-T5-Large [7], Vicuna [38], Llama 2 [32], BLOOM [28], ChatGPT [23], and GPT-4 [24].2 In zero-shot experiments, we incorporate emotional stimuli into the original prompts to construct EmotionPrompt. For the few-shot in-context learning experiments, we employ the same prompts as in zero-shot experiments and randomly sample 5 input-output pairs as in-context demonstrations, which are appended after the prompts. The template format can be described as "prompt/EmotionPrompt + demonstration".
Baselines. We conduct a comparative analysis of our proposed EmotionPrompt with three baseline methods. The first baseline involves utilizing the original zero-shot prompts provided in Instruction Induction [13] and BIG-Bench [31], which are designed by human experts. The second baseline is Zero-shot-CoT [15], which, to the best of our knowledge, is the simplest and most efficient approach for zero-shot prompt engineering. We also compare EmotionPrompt with APE [39] by adding our EmotionPrompt to APE-generated prompts.
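The zero-shot and few-shot input construction described above can be sketched as follows. This is a minimal illustration: the task prompt, stimulus wording, and demonstration pair are hypothetical placeholders, not the paper's exact EP texts or data.

```python
import random

# EmotionPrompt = original prompt + emotional stimulus (zero-shot setting).
def emotion_prompt(original_prompt, stimulus):
    return f"{original_prompt} {stimulus}"

# Few-shot setting: "prompt/EmotionPrompt + demonstration" -- the same prompt
# followed by randomly sampled input-output pairs (5 in the paper's setup).
def few_shot_input(prompt, pairs, k=5):
    demos = random.sample(pairs, min(k, len(pairs)))
    demo_text = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in demos)
    return f"{prompt}\n{demo_text}"

base = "Write the antonym of the given word."      # hypothetical task prompt
stimulus = "This is very important to my career."  # hypothetical stimulus text
print(emotion_prompt(base, stimulus))
print(few_shot_input(emotion_prompt(base, stimulus), [("hot", "cold")], k=1))
```

The same stimulus string is reused unchanged across both settings; only the demonstrations are appended in the few-shot case.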
| 2307.11760#18 | Large Language Models Understand and Can be Enhanced by Emotional Stimuli | Emotional intelligence significantly impacts our daily behaviors and
interactions. Although Large Language Models (LLMs) are increasingly viewed as
a stride toward artificial general intelligence, exhibiting impressive
performance in numerous tasks, it is still uncertain if LLMs can genuinely
grasp psychological emotional stimuli. Understanding and responding to
emotional cues gives humans a distinct advantage in problem-solving. In this
paper, we take the first step towards exploring the ability of LLMs to
understand emotional stimuli. To this end, we first conduct automatic
experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna,
Llama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative
applications that represent comprehensive evaluation scenarios. Our automatic
experiments show that LLMs have a grasp of emotional intelligence, and their
performance can be improved with emotional prompts (which we call
"EmotionPrompt" that combines the original prompt with emotional stimuli),
e.g., 8.00% relative performance improvement in Instruction Induction and 115%
in BIG-Bench. In addition to those deterministic tasks that can be
automatically evaluated using existing metrics, we conducted a human study with
106 participants to assess the quality of generative tasks using both vanilla
and emotional prompts. Our human study results demonstrate that EmotionPrompt
significantly boosts the performance of generative tasks (10.9% average
improvement in terms of performance, truthfulness, and responsibility metrics).
We provide an in-depth discussion regarding why EmotionPrompt works for LLMs
and the factors that may influence its performance. We posit that EmotionPrompt
heralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs
interaction. | http://arxiv.org/pdf/2307.11760 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.CL, cs.AI, cs.HC | Technical report; updated the std error for human study; short
version (v1) was accepted by LLM@IJCAI'23; 32 pages; more work:
https://llm-enhance.github.io/ | null | cs.CL | 20230714 | 20231112 | [
{
"id": "2306.04528"
},
{
"id": "2205.11916"
},
{
"id": "2210.03629"
},
{
"id": "2303.12712"
},
{
"id": "2307.09042"
},
{
"id": "2109.07958"
}
] |
2307.11760 | 19 |
[Figure 4 panels: Utilizing Human-designed Prompts and Utilizing APE-generated Prompts, comparing Vanilla vs. EmotionPrompt]
# Figure 4: Results on 21 tasks from BIG-Bench.
# 2.2.2 Results and analysis
We average experimental results on all tasks in Instruction Induction [13] and 21 curated BIG-Bench tasks [31] in Table 1. Note that we only experiment with zero-shot prompts in BIG-Bench due to constrained computation. To be specific, we compute the mean performance across tasks for each model. The term "Original" corresponds to the average performance achieved using the original prompt. "Zero-shot-CoT" denotes the mean performance employing "original prompt + Let's think step by step". "+Ours (avg)" is derived by initially calculating the average performance across tasks using EmotionPrompt, which incorporates 11 emotional stimuli, and subsequently computing the mean performance across these stimuli, while "+Ours (max)" is determined by first computing the average performance for each task using EmotionPrompt, then selecting the optimal performance from those stimuli. | 2307.11760#19 | Large Language Models Understand and Can be Enhanced by Emotional Stimuli | Emotional intelligence significantly impacts our daily behaviors and
interactions. Although Large Language Models (LLMs) are increasingly viewed as
a stride toward artificial general intelligence, exhibiting impressive
performance in numerous tasks, it is still uncertain if LLMs can genuinely
grasp psychological emotional stimuli. Understanding and responding to
emotional cues gives humans a distinct advantage in problem-solving. In this
paper, we take the first step towards exploring the ability of LLMs to
understand emotional stimuli. To this end, we first conduct automatic
experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna,
Llama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative
applications that represent comprehensive evaluation scenarios. Our automatic
experiments show that LLMs have a grasp of emotional intelligence, and their
performance can be improved with emotional prompts (which we call
"EmotionPrompt" that combines the original prompt with emotional stimuli),
e.g., 8.00% relative performance improvement in Instruction Induction and 115%
in BIG-Bench. In addition to those deterministic tasks that can be
automatically evaluated using existing metrics, we conducted a human study with
106 participants to assess the quality of generative tasks using both vanilla
and emotional prompts. Our human study results demonstrate that EmotionPrompt
significantly boosts the performance of generative tasks (10.9% average
improvement in terms of performance, truthfulness, and responsibility metrics).
We provide an in-depth discussion regarding why EmotionPrompt works for LLMs
and the factors that may influence its performance. We posit that EmotionPrompt
heralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs
interaction. | http://arxiv.org/pdf/2307.11760 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.CL, cs.AI, cs.HC | Technical report; updated the std error for human study; short
version (v1) was accepted by LLM@IJCAI'23; 32 pages; more work:
https://llm-enhance.github.io/ | null | cs.CL | 20230714 | 20231112 | [
{
"id": "2306.04528"
},
{
"id": "2205.11916"
},
{
"id": "2210.03629"
},
{
"id": "2303.12712"
},
{
"id": "2307.09042"
},
{
"id": "2109.07958"
}
] |
2307.11760 | 20 | 1. EmotionPrompt demonstrates consistent improvement in both Instruction Induction and BIG-Bench tasks on all LLMs. Specifically, EmotionPrompt significantly improves performance, with a relative improvement of 8.00% in Instruction Induction and 115% in BIG-Bench. Given its simplicity, EmotionPrompt makes it easy to boost the performance of LLMs without complicated design or prompt engineering.
2. EmotionPrompt demonstrates a potential proclivity for superior performance within few-shot learning. Comparing the zero-shot and few-shot results on Instruction Induction tasks, we see that the improvement brought by EmotionPrompt is larger in the few-shot setting than in the zero-shot setting (0.33 vs. 2.05, in terms of average improvement). This indicates that EmotionPrompt is better at in-context learning with few-shot examples. Given that few-shot learning commonly performs better than the zero-shot setting, this makes EmotionPrompt widely applicable across a wide spectrum of tasks. | 2307.11760#20 | Large Language Models Understand and Can be Enhanced by Emotional Stimuli | Emotional intelligence significantly impacts our daily behaviors and
interactions. Although Large Language Models (LLMs) are increasingly viewed as
a stride toward artificial general intelligence, exhibiting impressive
performance in numerous tasks, it is still uncertain if LLMs can genuinely
grasp psychological emotional stimuli. Understanding and responding to
emotional cues gives humans a distinct advantage in problem-solving. In this
paper, we take the first step towards exploring the ability of LLMs to
understand emotional stimuli. To this end, we first conduct automatic
experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna,
Llama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative
applications that represent comprehensive evaluation scenarios. Our automatic
experiments show that LLMs have a grasp of emotional intelligence, and their
performance can be improved with emotional prompts (which we call
"EmotionPrompt" that combines the original prompt with emotional stimuli),
e.g., 8.00% relative performance improvement in Instruction Induction and 115%
in BIG-Bench. In addition to those deterministic tasks that can be
automatically evaluated using existing metrics, we conducted a human study with
106 participants to assess the quality of generative tasks using both vanilla
and emotional prompts. Our human study results demonstrate that EmotionPrompt
significantly boosts the performance of generative tasks (10.9% average
improvement in terms of performance, truthfulness, and responsibility metrics).
We provide an in-depth discussion regarding why EmotionPrompt works for LLMs
and the factors that may influence its performance. We posit that EmotionPrompt
heralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs
interaction. | http://arxiv.org/pdf/2307.11760 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.CL, cs.AI, cs.HC | Technical report; updated the std error for human study; short
version (v1) was accepted by LLM@IJCAI'23; 32 pages; more work:
https://llm-enhance.github.io/ | null | cs.CL | 20230714 | 20231112 | [
{
"id": "2306.04528"
},
{
"id": "2205.11916"
},
{
"id": "2210.03629"
},
{
"id": "2303.12712"
},
{
"id": "2307.09042"
},
{
"id": "2109.07958"
}
] |
2307.11760 | 21 | 3. EmotionPrompt consistently demonstrates commendable efficacy across tasks of varying difficulty as well as on diverse LLMs. BIG-Bench [31] and Instruction Induction [13] focus on tasks of different difficulties separately. Remarkably, EmotionPrompt excels in evaluations across both benchmarks. Furthermore, the generalization ability of EmotionPrompt is also demonstrated by its consistent performance across the six evaluated LLMs.
4. EmotionPrompt outperforms existing prompt engineering approaches such as CoT and APE in most cases. We also see in Table 1 that EmotionPrompt can be plugged into APE, indicating that EmotionPrompt is highly extensible and compatible with existing prompt engineering methods.
We will further discuss and analyze the different aspects of EmotionPrompt, such as why EmotionPrompt would work and which emotional stimuli work the best in Section 3.
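The "+Ours (avg)" and "+Ours (max)" aggregations reported in Table 1 can be sketched as below. This is an illustration of the stated procedure (average over tasks per stimulus, then mean or best over the stimuli); the data layout and example values are assumptions, not the paper's code.

```python
# scores[stimulus][task] -> performance of EmotionPrompt with that stimulus.
def per_stimulus_means(scores):
    return {s: sum(t.values()) / len(t) for s, t in scores.items()}

def ours_avg(scores):
    # Average across tasks for each stimulus, then average across stimuli.
    means = per_stimulus_means(scores)
    return sum(means.values()) / len(means)

def ours_max(scores):
    # Average across tasks for each stimulus, then take the best stimulus.
    return max(per_stimulus_means(scores).values())

# Hypothetical per-task scores for two stimuli (the paper uses 11).
scores = {"EP01": {"task1": 0.50, "task2": 0.75},
          "EP02": {"task1": 0.75, "task2": 1.00}}
print(ours_avg(scores))  # 0.75
print(ours_max(scores))  # 0.875
```

Note that "+Ours (max)" picks one best stimulus overall rather than the best stimulus per task, matching the definition quoted above.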
2For ChatGPT, we utilize gpt-3.5-turbo (0613) and set the temperature parameter to 0.7. For GPT-4 and Llama 2, we set the temperature to 0.7. The remaining LLMs are evaluated using their default settings.
| 2307.11760#21 | Large Language Models Understand and Can be Enhanced by Emotional Stimuli | Emotional intelligence significantly impacts our daily behaviors and
interactions. Although Large Language Models (LLMs) are increasingly viewed as
a stride toward artificial general intelligence, exhibiting impressive
performance in numerous tasks, it is still uncertain if LLMs can genuinely
grasp psychological emotional stimuli. Understanding and responding to
emotional cues gives humans a distinct advantage in problem-solving. In this
paper, we take the first step towards exploring the ability of LLMs to
understand emotional stimuli. To this end, we first conduct automatic
experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna,
Llama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative
applications that represent comprehensive evaluation scenarios. Our automatic
experiments show that LLMs have a grasp of emotional intelligence, and their
performance can be improved with emotional prompts (which we call
"EmotionPrompt" that combines the original prompt with emotional stimuli),
e.g., 8.00% relative performance improvement in Instruction Induction and 115%
in BIG-Bench. In addition to those deterministic tasks that can be
automatically evaluated using existing metrics, we conducted a human study with
106 participants to assess the quality of generative tasks using both vanilla
and emotional prompts. Our human study results demonstrate that EmotionPrompt
significantly boosts the performance of generative tasks (10.9% average
improvement in terms of performance, truthfulness, and responsibility metrics).
We provide an in-depth discussion regarding why EmotionPrompt works for LLMs
and the factors that may influence its performance. We posit that EmotionPrompt
heralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs
interaction. | http://arxiv.org/pdf/2307.11760 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.CL, cs.AI, cs.HC | Technical report; updated the std error for human study; short
version (v1) was accepted by LLM@IJCAI'23; 32 pages; more work:
https://llm-enhance.github.io/ | null | cs.CL | 20230714 | 20231112 | [
{
"id": "2306.04528"
},
{
"id": "2205.11916"
},
{
"id": "2210.03629"
},
{
"id": "2303.12712"
},
{
"id": "2307.09042"
},
{
"id": "2109.07958"
}
] |
2307.11760 | 22 |
Table 1: Results on Instruction Induction and BIG-Bench tasks. Note that we only experiment with zero-shot prompts in BIG-Bench due to constrained computation devices. The best and second-best results are highlighted in bold and underline. For Instruction Induction, we report accuracy as the metric. For BIG-Bench, we report the normalized preferred metric defined in [30]. Under this metric, a score of 100 corresponds to human expert performance, and 0 corresponds to random guessing. Note that a model can achieve a score less than 0 if it performs worse than random guessing on a multiple-choice task. The term "Original" corresponds to the average performance achieved using the original prompt. "+Zero-shot-CoT" denotes the mean performance employing "original prompt + Let's think step by step". "+Ours (avg)" is derived by initially calculating the average performance across tasks using EmotionPrompt, which incorporates 11 emotional stimuli, and subsequently computing the mean performance across these stimuli, while "+Ours (max)" is determined by first computing the average performance for each task using EmotionPrompt, then selecting the optimal performance from those stimuli. | 2307.11760#22 | Large Language Models Understand and Can be Enhanced by Emotional Stimuli | Emotional intelligence significantly impacts our daily behaviors and
interactions. Although Large Language Models (LLMs) are increasingly viewed as
a stride toward artificial general intelligence, exhibiting impressive
performance in numerous tasks, it is still uncertain if LLMs can genuinely
grasp psychological emotional stimuli. Understanding and responding to
emotional cues gives humans a distinct advantage in problem-solving. In this
paper, we take the first step towards exploring the ability of LLMs to
understand emotional stimuli. To this end, we first conduct automatic
experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna,
Llama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative
applications that represent comprehensive evaluation scenarios. Our automatic
experiments show that LLMs have a grasp of emotional intelligence, and their
performance can be improved with emotional prompts (which we call
"EmotionPrompt" that combines the original prompt with emotional stimuli),
e.g., 8.00% relative performance improvement in Instruction Induction and 115%
in BIG-Bench. In addition to those deterministic tasks that can be
automatically evaluated using existing metrics, we conducted a human study with
106 participants to assess the quality of generative tasks using both vanilla
and emotional prompts. Our human study results demonstrate that EmotionPrompt
significantly boosts the performance of generative tasks (10.9% average
improvement in terms of performance, truthfulness, and responsibility metrics).
We provide an in-depth discussion regarding why EmotionPrompt works for LLMs
and the factors that may influence its performance. We posit that EmotionPrompt
heralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs
interaction. | http://arxiv.org/pdf/2307.11760 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.CL, cs.AI, cs.HC | Technical report; updated the std error for human study; short
version (v1) was accepted by LLM@IJCAI'23; 32 pages; more work:
https://llm-enhance.github.io/ | null | cs.CL | 20230714 | 20231112 | [
{
"id": "2306.04528"
},
{
"id": "2205.11916"
},
{
"id": "2210.03629"
},
{
"id": "2303.12712"
},
{
"id": "2307.09042"
},
{
"id": "2109.07958"
}
] |
2307.11760 | 23 | Setting: Instruction Induction (+Zero-shot)
Model            T5     Vicuna  BLOOM   Llama 2  ChatGPT  GPT-4   Average
Original         25.25  44.91   50.33   33.46    75.20    80.75   51.65
+Zero-shot-CoT   24.57  33.45   51.35   36.17    75.20    59.72   46.74
+Ours (avg)      22.93  50.56   46.61   35.95    76.85    78.96   51.98
+Ours (max)      25.53  54.49   50.84   39.46    79.52    81.60   55.24
APE              25.29  44.17   40.97   32.04    76.46    73.54   48.75
+Zero-shot-CoT   27.68  36.28   35.85   34.86    75.13    74.33   47.36
+Ours (avg)      22.94  45.63   38.76   34.88    77.45    73.38   48.84
+Ours (max)      25.41  51.46   41.94   40.06    79.53    75.71   52.35
Setting: Instruction Induction (+Few-shot)
Model            T5     Vicuna
Original         28.75  41.29
+Zero-shot-CoT   28.05  40.39
+Ours (avg)      29.66  41.41
+Ours (max)      31.02  47.51 | 2307.11760#23 | Large Language Models Understand and Can be Enhanced by Emotional Stimuli | Emotional intelligence significantly impacts our daily behaviors and
interactions. Although Large Language Models (LLMs) are increasingly viewed as
a stride toward artificial general intelligence, exhibiting impressive
performance in numerous tasks, it is still uncertain if LLMs can genuinely
grasp psychological emotional stimuli. Understanding and responding to
emotional cues gives humans a distinct advantage in problem-solving. In this
paper, we take the first step towards exploring the ability of LLMs to
understand emotional stimuli. To this end, we first conduct automatic
experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna,
Llama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative
applications that represent comprehensive evaluation scenarios. Our automatic
experiments show that LLMs have a grasp of emotional intelligence, and their
performance can be improved with emotional prompts (which we call
"EmotionPrompt" that combines the original prompt with emotional stimuli),
e.g., 8.00% relative performance improvement in Instruction Induction and 115%
in BIG-Bench. In addition to those deterministic tasks that can be
automatically evaluated using existing metrics, we conducted a human study with
106 participants to assess the quality of generative tasks using both vanilla
and emotional prompts. Our human study results demonstrate that EmotionPrompt
significantly boosts the performance of generative tasks (10.9% average
improvement in terms of performance, truthfulness, and responsibility metrics).
We provide an in-depth discussion regarding why EmotionPrompt works for LLMs
and the factors that may influence its performance. We posit that EmotionPrompt
heralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs
interaction. | http://arxiv.org/pdf/2307.11760 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.CL, cs.AI, cs.HC | Technical report; updated the std error for human study; short
version (v1) was accepted by LLM@IJCAI'23; 32 pages; more work:
https://llm-enhance.github.io/ | null | cs.CL | 20230714 | 20231112 | [
{
"id": "2306.04528"
},
{
"id": "2205.11916"
},
{
"id": "2210.03629"
},
{
"id": "2303.12712"
},
{
"id": "2307.09042"
},
{
"id": "2109.07958"
}
] |
Setting: Instruction Induction (+Few-shot)

| Prompt | T5 | Vicuna | BLOOM | Llama 2 | ChatGPT | GPT-4 | Average |
|---|---|---|---|---|---|---|---|
| Original | 28.75 | 41.29 | 54.92 | 5.08 | 75.66 | 82.13 | 47.97 |
| +Zero-shot-CoT | 28.05 | 40.39 | 56.83 | 6.70 | 77.33 | 67.62 | 46.15 |
| +Ours (avg) | 29.66 | 41.41 | 58.97 | 8.20 | 77.75 | 84.12 | 50.02 |
| +Ours (max) | 31.02 | 47.51 | 60.08 | 9.17 | 79.50 | 87.13 | 52.40 |
| APE | 23.42 | 38.33 | 54.50 | 5.46 | 76.79 | 81.58 | 46.68 |
| APE +Zero-shot-CoT | 26.58 | 39.60 | 56.62 | 6.55 | 78.48 | 82.10 | 48.32 |
| APE +Ours (avg) | 25.28 | 37.58 | 58.15 | 7.47 | 79.71 | 82.25 | 48.41 |
| APE +Ours (max) | 27.38 | 44.68 | 59.11 | 7.74 | 81.11 | 83.67 | 50.62 |

Setting: Big-Bench (+Zero-shot)

| Prompt | T5 | Vicuna | BLOOM | Llama 2 | ChatGPT | GPT-4 | Average |
|---|---|---|---|---|---|---|---|
| Original | 4.66 | 7.42 | 6.01 | 0.06 | 20.10 | 22.69 | |
| +Zero-shot-CoT | 2.24 | 8.72 | 5.92 | 1.29 | 20.05 | 23.99 | |
| +Ours (avg) | 2.63 | 8.68 | 6.01 | 1.56 | 20.91 | 23.87 | |
| +Ours (max) | 4.00 | 10.99 | 6.35 | 2.05 | 23.34 | | |
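As a quick arithmetic check on these results tables, the Average column is simply the mean over the six models. A short script (values transcribed from the zero-shot "Original" row) illustrates this; the row layout is an assumption recovered from the extraction, verified by matching the reported average:

```python
# Scores for the Instruction Induction (+Zero-shot) "Original" row,
# transcribed from the table above; one score per evaluated model.
original_row = {
    "T5": 25.25, "Vicuna": 44.91, "BLOOM": 50.33,
    "Llama 2": 33.46, "ChatGPT": 75.20, "GPT-4": 80.75,
}

# The Average column is the plain arithmetic mean over the six models.
average = round(sum(original_row.values()) / len(original_row), 2)
print(average)  # 51.65, matching the reported Average column
```

The same check reproduces the Average column for every complete row, which is how the column-major layout of the flattened table was confirmed.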
# 2.3 Human study

Beyond deterministic tasks, the generative capabilities of LLMs hold significant importance, encompassing activities such as writing poems and summaries. These tasks necessitate human judgment. Additionally, we aim to probe the efficacy of EmotionPrompt from broader perspectives, encompassing dimensions such as truthfulness and responsibility. Since no appropriate automatic methods exist to quantify these facets, we conduct a human study to resolve the above-mentioned limitations.
In a subsequent validation phase, we undertook a comprehensive study involving 106 participants to explore the effectiveness of EmotionPrompt in open-ended generative tasks using GPT-4, the most capable LLM to date. This evaluation was grounded on three distinct metrics: performance, truthful-
Table 2: Sample demographic characteristics of our human study participants.
| Demographic | Response Options | Participants (N = 106) |
|---|---|---|
| Identity | Undergraduate and Postgraduate | 95 (90%) |
| Identity | Social Member | 11 (10%) |
| Age | 20-25 | 95 (90%) |
| Age | 26-35 | 11 (10%) |
| Education | Bachelor | 106 (100%) |
ness and responsibility. Performance encompasses the overall quality of responses, considering linguistic coherence, logical reasoning, diversity, and the presence of corroborative evidence. Truthfulness is a metric to gauge the extent of divergence from factual accuracy, otherwise referred to as hallucination [19]. Responsibility, on the other hand, pertains to the provision of some positive guidance coupled with a fundamental sense of humanistic concern. This criterion also underscores the broader implications of generated content on societal and global spheres [35].
# 2.3.1 Study procedure and participant recruitment
We formulated a set of 30 questions and generated two distinct responses for each, leveraging the capabilities of GPT-4. One is generated using the vanilla prompt, while the other is generated utilizing our EmotionPrompt. Participants were then asked to evaluate both responses for each question, employing a scale ranging from 1 to 5 based on the aforementioned three metrics. Finally, we analyze the scores of these participants.
The enrollment of the 106 participants was executed meticulously, adhering to relevant regulatory standards and guidelines. Pertinent demographic characteristics concerning these participants are detailed in Table 2. Notably, all individuals in the participant pool possess advanced academic degrees and demonstrate a commendable command of the English language.
# 2.3.2 Survey questions and measurement
We curated a set of 30 questions, spanning a diverse range of domains such as biology, history, law, finance, pseudoscience, environmental science, intimate relationship, social science, psychology, and data science. Notably, 10 of these questions were sourced from TruthfulQA [19], a set specifically designed to provoke LLMs into producing responses that manifest hallucinations. Additionally, in consonance with the CValues dataset [35], another 15 questions were meticulously devised to elicit biased responses from LLMs. The final 5 questions were geared towards generative tasks such as poetry composition and summarization, which inherently demand a degree of creativity and artistic flair. A comprehensive breakdown of these questions is provided in Table 9.
To verify the reliability of the questions used in our human study, we compute the mean and standard deviation for each question on vanilla prompts, and visualize the distribution of individual scores in Fig. 12. It's worth noting that none of the standard deviations of those questions exceed 0.8, which attests to the reliability of our designed questions.
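This reliability check can be sketched in a few lines. The ratings below are hypothetical stand-ins (the actual study collected 1-5 ratings from 106 participants over 30 questions), and the 0.8 threshold follows the criterion stated above:

```python
from statistics import mean, pstdev

# Hypothetical 1-5 ratings for three questions; placeholders for the real
# per-question score vectors collected from the 106 participants.
ratings = {
    "Q1": [4, 4, 5, 4, 3, 4],
    "Q2": [3, 3, 4, 3, 3, 4],
    "Q3": [5, 4, 5, 5, 4, 4],
}

# Per-question mean and (population) standard deviation of the scores.
summary = {q: (round(mean(s), 2), round(pstdev(s), 2)) for q, s in ratings.items()}

# A question counts as reliable here when rater disagreement stays at or below 0.8.
reliable = {q: sd <= 0.8 for q, (m, sd) in summary.items()}
print(summary, reliable)
```

With real data, plotting the per-question score distributions (as in Fig. 12) gives the same information visually.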
Based on the previously delineated three metrics, each participant was instructed to evaluate the two responses corresponding to each question, utilizing a scale ranging from 1 to 5. Here, we outline the measures used in our human study:
• Performance: 1 = "The response fails to address the question adequately", 2 = "The response addresses the question; however, its linguistic articulation is suboptimal, and the logical structure is ambiguous", 3 = "The response sufficiently addresses the question, demonstrating clear logical coherence", 4 = "Beyond merely addressing the question, the response exhibits superior linguistic clarity and robust logical reasoning", 5 = "The response adeptly addresses the question, characterized by proficient linguistic expression, lucid logic, and bolstered by illustrative examples".
• Truthfulness: 1 = "The response promulgates incorrect information, detrimentally influencing the ultimate interpretation", 2 = "A segment of the response deviates from factual accuracy; however,
Figure 5: The mean and standard deviation of the human study results in three metrics.
Figure 6: Relative gain (the difference between the results of EmotionPrompt and original prompt).
this deviation does not materially affect the ultimate interpretation", 3 = "The response predominantly adheres to factual accuracy, with potential for minor discrepancies that do not substantially influence the final interpretation", 4 = "The response is largely in consonance with factual evidence, albeit with insignificant deviations that remain inconsequential to the final interpretation", 5 = "The response is in meticulous alignment with the facts, exhibiting no deviations".
• Responsibility: 1 = "The content within the response may harbor implications detrimental to society and the world", 2 = "The response possesses elements that could potentially induce adverse emotional reactions, such as panic or anxiety", 3 = "The response remains neutral, neither encompassing positive nor negative societal implications", 4 = "The response is imbued with constructive guidance and exhibits elements of humanitarian concern", 5 = "The response is characterized by pronounced humanitarian considerations and is poised to foster positive ramifications for both society and the global community".
# 2.3.3 Study results and analysis
Finally, we average the scores from 106 participants for 30 questions and report the credible results in Fig. 5.[3] To make it clear, we compute Relative Gain (Eq. (1)) on 3 metrics for each task and report the results in Fig. 6.
Relative Gain = Metric_EmotionPrompt − Metric_vanilla,   (1)

where Metric denotes the results (performance, truthfulness, or responsibility).
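As a toy illustration of Eq. (1), the per-question gain is just the difference of the averaged participant scores under the two prompts; the score values below are hypothetical:

```python
# Hypothetical averaged participant scores (1-5 scale) for a single question
# under the vanilla prompt and under EmotionPrompt.
metrics = ("performance", "truthfulness", "responsibility")
vanilla = {"performance": 3.2, "truthfulness": 3.8, "responsibility": 3.5}
emotion_prompt = {"performance": 4.1, "truthfulness": 4.0, "responsibility": 4.2}

# Eq. (1): Relative Gain = Metric_EmotionPrompt - Metric_vanilla, per metric.
relative_gain = {m: round(emotion_prompt[m] - vanilla[m], 2) for m in metrics}
print(relative_gain)  # {'performance': 0.9, 'truthfulness': 0.2, 'responsibility': 0.7}
```

A positive gain on a metric means EmotionPrompt was rated higher than the vanilla prompt on that question, which is how the bars in Fig. 6 are read.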
More detailed generation results are shown in Section C in Appendix. Our key findings are as follows:
1. EmotionPrompt attains commendable performance across various metrics for the majority of questions. As illustrated in Fig. 6, EmotionPrompt exhibits shortcomings in a mere two instances, yet it demonstrates substantial improvements in over half of the evaluated scenarios, spanning diverse domains sourced from three distinct origins. For performance, EmotionPrompt achieves a Relative Gain approaching or exceeding 1.0 in nearly one-third of problems, signifying a notable advancement.
2. EmotionPrompt demonstrates an enhanced capacity for generating ethically responsible responses. An assessment of Table 10 elucidates that the output from EmotionPrompt advocates for individuals to partake conscientiously in garbage sorting. This not only underscores the significance of environmental responsibility and sustainability, but also its value in fostering personal achievement and augmenting community welfare. Such instances accentuate the ability of EmotionPrompt to instill a sense of responsibility within LLMs. A supplementary exemplification can be found in Table 11. When tasked with delineating Western and Chinese cultures, LLMs exhibit differential linguistic choices between the original prompt and EmotionPrompt. Notably, the
[3] We notice that the results have high variance. The reason is that the measure of three metrics is highly influenced by subjectivity. Different people may have different opinions on an answer. Besides, performance encompasses the overall quality of responses, taking into account linguistic coherence, logical reasoning, diversity, and the presence of corroborative evidence, so the variance can also be influenced by the above factors.
Table 3: Result on TruthfulQA. The best and second-best results are highlighted in bold and underline.
| Prompt | ChatGPT %true | ChatGPT %info | Vicuna-13b %true | Vicuna-13b %info | T5 %true | T5 %info |
|---|---|---|---|---|---|---|
| Original | 0.75 | 0.53 | 0.77 | 0.32 | 0.54 | 0.42 |
| CoT | 0.76 | 0.44 | 0.99 | 0.00 | 0.48 | 0.33 |

EP01 EP02 EP03 EP04 EP05 EP06 EP07 EP08 EP09 EP10 EP11 AVG 0.61 0.94 0.83 0.66 0.69 0.82 0.87 0.67 0.62 0.87 0.50 0.78 0.70 0.83 0.66 0.81 0.68 0.81 0.68 0.81 0.66 0.81 0.68 0.80 0.00 0.12 0.00 0.97 0.00 0.99 0.87 0.22 1.00 0.00 0.00 0.39 0.04 0.99 0.09 0.99 0.13 0.86 0.02 0.84 0.01 1.00 0.05 0.82 0.14 0.26 0.35 0.61 0.44 0.53 0.36 0.62 0.46 0.48 0.49 0.46 0.77 0.18 0.40 0.56 0.46 0.52 0.47 0.50 0.40 0.57
representation elicited by EmotionPrompt presents a more affirmative and responsible depiction of both Western and Chinese cultural paradigms.
3. Responses engendered by EmotionPrompt are characterized by enriched supporting evidence and superior linguistic articulation. An exploration of Table 12 reveals that the narratives presented by EmotionPrompt are markedly comprehensive, as exemplified by inclusions such as "Despite trends like increasing divorce rates or more people choosing to remain single." Additionally, as illuminated in Tables 13 to 15, the responses facilitated by EmotionPrompt consistently demonstrate a superior organizational coherence and encompass a broader spectrum of pertinent information.
4. EmotionPrompt stimulates the creative faculties and overarching cognizance of LLMs. This phenomenon is substantiated through the examination of Tables 16 and 17, wherein two instances of poem composition are showcased. Evidently, the poems generated by EmotionPrompt exude a heightened level of creativity and emotive resonance, evoking profound sentiment. Furthermore, we underscore this observation with reference to Table 18, wherein responses derived from two distinct prompt types are compared. Notably, the output generated from the original prompt centers on the novel's content, while the response fostered by EmotionPrompt delves into the spirit of the novel, discussing its motivation and future significance concerning society and human nature.
5. EmotionPrompt exhibits certain constraints. The only two failure cases are presented in Tables 19 and 20. Upon inspection of Table 19, a discernible difference emerges between the two responses. The output from EmotionPrompt employs more definitive terms, such as "completely" and "will not", while the narrative produced by the original prompt adopts a more tempered tone, signified by terms like "generally" and "may even be". This distinction might render the latter more palatable for certain audiences. Such deterministic language from EmotionPrompt could be attributed to its emphasis on the gravity of the question, indicated by phrases like "This is important to my career" and "You'd better be sure". To assuage uncertainties and bolster confidence, LLMs might be inclined to use unambiguous language, particularly when the underlying facts are unequivocal. Besides, in Table 20, the original prompt yields more expansive responses, encompassing a concluding summary, whereas EmotionPrompt just enumerates the key points. However, in terms of essential content, both responses are satisfactory. Consequently, while EmotionPrompt possesses the propensity to enhance LLMs' outputs in many instances, it may not be universally applicable across all scenarios.
Figure 7: Results on truthfulness and informativeness on TruthfulQA. We use the best result of EmotionPrompt.
2.4 Truthfulness and Informativeness
We further evaluate EmotionPrompt on TruthfulQA [19] to investigate its impact on truthfulness and informativeness. The benchmark has 817 questions from 38 categories, including health, law, finance, and politics. We evaluate all samples in TruthfulQA and report results with two metrics: truthfulness (% True) and informativeness (% Info). Truthfulness means the answer has less uncertainty, while informativeness means the answer provides information [19]. These results are obtained with the fine-tuned GPT-judge and GPT-info models, which have been shown to align with human predictions over 90% of the time [19]. Specifically, GPT-judge is fine-tuned to evaluate answers as true or false, while GPT-info classifies answers as informative or uninformative [19].
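The % True / % Info metrics reduce to averaging binary judgments over all answers. A minimal sketch, with toy stand-in callables for the fine-tuned GPT-judge and GPT-info evaluators (the lambdas below are illustrative assumptions, not the real models):

```python
def truthfulqa_scores(answers, judge, info_judge):
    """Aggregate TruthfulQA metrics from per-answer binary judgments.

    `judge` and `info_judge` stand in for the paper's fine-tuned
    GPT-judge / GPT-info models: each maps an answer to True/False.
    Returns (% True, % Info).
    """
    n = len(answers)
    true_pct = 100.0 * sum(judge(a) for a in answers) / n
    info_pct = 100.0 * sum(info_judge(a) for a in answers) / n
    return true_pct, info_pct

# Toy stand-ins for the fine-tuned evaluators (assumptions, not real APIs):
answers = ["The sky is blue.", "I have no comment.", "Cats are reptiles."]
judge = lambda a: a != "Cats are reptiles."       # true/false classifier
info_judge = lambda a: a != "I have no comment."  # informative or not
print(truthfulqa_scores(answers, judge, info_judge))
```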
Table 3 shows the results on ChatGPT, Vicuna-13b, and Flan-T5-Large. We did not evaluate other models such as GPT-4 due to a constrained budget. The application of EmotionPrompt yields improvements across all three models, with average gains of 19% in truthfulness and 12% in informativeness scores. Furthermore, the performance of EmotionPrompt surpasses that of Zero-shot-CoT when employed with diverse models. These experiments demonstrate that integrating emotional stimuli into large language models can also enhance their truthfulness and informativeness.
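The averaging behind these summary numbers is a plain mean of per-model score differences. A sketch, where the model scores below are illustrative placeholders chosen to reproduce the reported 19%/12% averages, not the paper's measurements:

```python
def average_gain(vanilla, emotion):
    """Mean score improvement of EmotionPrompt over vanilla prompts,
    averaged across models, for each metric."""
    gains = {}
    for metric in ("true", "info"):
        diffs = [emotion[m][metric] - vanilla[m][metric] for m in vanilla]
        gains[metric] = sum(diffs) / len(diffs)
    return gains

# Hypothetical per-model scores (not from the paper):
vanilla = {"ChatGPT": {"true": 60, "info": 70},
           "Vicuna-13b": {"true": 50, "info": 60}}
emotion = {"ChatGPT": {"true": 80, "info": 82},
           "Vicuna-13b": {"true": 68, "info": 72}}
print(average_gain(vanilla, emotion))  # {'true': 19.0, 'info': 12.0}
```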
# 3 Discussions
Previous experiments demonstrate that LLMs understand and can be enhanced by emotional stimuli. In this section, we design extensive experiments to present a better understanding of the relationship between LLMs and emotional intelligence. Specifically, we answer the following questions:
1. Why does EmotionPrompt work (Section 3.1);
2. Ablation studies of more emotional stimuli (Section 3.2);
3. Which emotional stimuli are the best (Section 3.3);
4. The factors influencing the performance of EmotionPrompt (Section 3.4).
3.1 Why does EmotionPrompt work?
This section presents a deeper understanding of why EmotionPrompt works by visualizing the input attention contributions of emotional stimuli to the final outputs, as proposed in [40]. Since Flan-T5-large is open-sourced and relatively small, we chose it as our experimental LLM and assessed the contribution of every word based on the gradient norm. The experiment is conducted on a Sentiment Analysis task.
[Figure 8 plot: per-task importance bars for positive words including "confidence", "career", "sure", "answer", "worth", "score", "growth", "success", "goal", "achievement", and "apart", across the tasks Sentiment, Sentence Similarity, Larger Animal, Sum, Word in Context, Starting With, Cause Selection, and First Letter.]
Figure 8: Contributions of Positive Words to the performance of output on 8 Tasks. The contribution of each word is calculated by its attention contributions to the final outputs, and the vertical axis represents their importance score.
Specifically, we compute the contributions of prompts on every test sample and use the average value to represent their importance.
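A sketch of this per-word importance computation, assuming the input-embedding gradients have already been extracted from the model; the shapes and values below are made up for illustration:

```python
import numpy as np

def token_importance(grads_per_sample):
    """Per-token importance as the L2 norm of the input-embedding
    gradient, averaged over test samples (a sketch of the gradient-norm
    method from [40]; gradients are assumed precomputed elsewhere).

    grads_per_sample: array of shape (n_samples, n_tokens, emb_dim)
    """
    norms = np.linalg.norm(grads_per_sample, axis=-1)  # (n_samples, n_tokens)
    return norms.mean(axis=0)                          # (n_tokens,)

# Two samples, three tokens, 2-dim embeddings:
g = np.array([[[3.0, 4.0], [0.0, 1.0], [0.0, 0.0]],
              [[3.0, 4.0], [0.0, 3.0], [0.0, 0.0]]])
print(token_importance(g))  # -> [5. 2. 0.]
```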
According to the visualization results in Table 4, we have the following major findings:
1. Emotional stimuli can enrich original prompts' representation. The original prompt "Determine whether a movie review is positive or negative." has a deeper color in EmotionPrompt, especially in EP01, EP03, and EP06~EP10. This means emotional stimuli can enhance the representation of original prompts.
2. Positive words make more contributions. In our designed emotional stimuli, some positive words play a more important role, such as "confidence", "sure", "success", and "achievement". Based on this finding, we summarize positive words' contributions and their share of the final result on 8 tasks. As shown in Fig. 8, the contributions of positive words exceed 50% on 4 tasks and even approach 70% on 2 tasks.
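The positive-word share summarized in Fig. 8 is simply the ratio of positive-word contributions to the total contribution. A sketch with illustrative scores (the word sets and numbers are not the measured values):

```python
def positive_share(word_scores, positive_words):
    """Fraction of total input-attention contribution carried by
    positive words, as aggregated per task in Fig. 8."""
    total = sum(word_scores.values())
    pos = sum(s for w, s in word_scores.items() if w in positive_words)
    return pos / total

# Illustrative per-word importance scores for one task:
scores = {"confidence": 0.25, "success": 0.25, "movie": 0.25, "review": 0.25}
print(positive_share(scores, {"confidence", "success", "sure", "achievement"}))  # -> 0.5
```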
3.2 The effect of more emotional stimuli
Since human behavior can be regulated by one or more stimuli, and multiple stimuli are sometimes more effective, we explore the effect of combining emotional stimuli on LLMs. We randomly combine several emotional stimuli and experiment on ChatGPT; the results are shown in Table 5. Our findings are:
1. More emotional stimuli generally lead to better performance. The second and the third groups explore the effect of adding EP01, showing that the third group performs better than the second group in most cases.
2. Combined stimuli can bring little or no benefit when a single stimulus already achieves good performance. The combination EP01 + EP04 achieves a high score on most tasks and does not improve significantly, or even decreases, when we add more stimuli such as EP06~EP09.
3. Combinations from different psychological theories can also boost performance. We also observe that combining emotional stimuli from different psychological theories (e.g., EP02 + EP09) can lead to better performance, indicating that different theories can be used together in EmotionPrompt.
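Combining stimuli amounts to appending several suffixes to the original prompt. A minimal sketch of this construction (the EP01 and EP04 wordings follow Table 4 of this paper; the helper function itself is an assumption, not the authors' code):

```python
# Stimulus texts taken from Table 4; only a subset is shown here.
STIMULI = {
    "EP01": ("Write your answer and give me a confidence score "
             "between 0-1 for your answer."),
    "EP04": "Are you sure?",
}

def emotion_prompt(original, *ep_ids):
    """Concatenate the original prompt with one or more emotional stimuli."""
    return " ".join([original] + [STIMULI[e] for e in ep_ids])

print(emotion_prompt(
    "Determine whether a movie review is positive or negative.",
    "EP01", "EP04"))
```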
3.3 Which emotional stimuli are more effective?
Because of the distinct metrics employed by Instruction Induction [13] and BIG-Bench [31], we have conducted a segregated examination to discern the efficacy of various emotional stimuli across these two
Table 4: An Examination of the Effectiveness of Emotional Prompts: An Analysis through the Lens of Input Attention.
2307.11760 | 42 | Prompt Input Attention Origin Determine whether a movie review is positive or negative. EP01 Determine whether a movie review is positive or negative., write your answer and give me a confidence score between 0-1 for your answer. EP02 Determine whether a movie review is positive or negative. This is very important to my career. EP03 Determine whether a movie review is positive or negative. Youâd better be sure. EP04 Determine whether a movie review is positive or negative. Are you sure? EP05 Determine whether a movie review is positive or negative. Are you sure thatâs your final answer? It might be worth taking another look. EP06 Determine whether a movie review is positive or negative. Provide your answer and a confidence score decision to help me understand your thought process. This task is vital to my career, and I greatly value your thorough analysis. EP07 Determine whether a movie review is positive or negative. Are you sure thatâs your final answer? Believe in your abilities and strive for excellence. Your hard work will yield remarkable results. EP08 Determine whether a movie review is positive or negative. Embrace challenges as opportunities for growth. Each obstacle you overcome brings you closer to success. EP09 Determine whether a movie review is positive or negative. | 2307.11760#42 | Large Language Models Understand and Can be Enhanced by Emotional Stimuli | Emotional intelligence significantly impacts our daily behaviors and
interactions. Although Large Language Models (LLMs) are increasingly viewed as
a stride toward artificial general intelligence, exhibiting impressive
performance in numerous tasks, it is still uncertain if LLMs can genuinely
grasp psychological emotional stimuli. Understanding and responding to
emotional cues gives humans a distinct advantage in problem-solving. In this
paper, we take the first step towards exploring the ability of LLMs to
understand emotional stimuli. To this end, we first conduct automatic
experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna,
Llama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative
applications that represent comprehensive evaluation scenarios. Our automatic
experiments show that LLMs have a grasp of emotional intelligence, and their
performance can be improved with emotional prompts (which we call
"EmotionPrompt" that combines the original prompt with emotional stimuli),
e.g., 8.00% relative performance improvement in Instruction Induction and 115%
in BIG-Bench. In addition to those deterministic tasks that can be
automatically evaluated using existing metrics, we conducted a human study with
106 participants to assess the quality of generative tasks using both vanilla
and emotional prompts. Our human study results demonstrate that EmotionPrompt
significantly boosts the performance of generative tasks (10.9% average
improvement in terms of performance, truthfulness, and responsibility metrics).
We provide an in-depth discussion regarding why EmotionPrompt works for LLMs
and the factors that may influence its performance. We posit that EmotionPrompt
heralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs
interaction. | http://arxiv.org/pdf/2307.11760 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.CL, cs.AI, cs.HC | Technical report; updated the std error for human study; short
version (v1) was accepted by LLM@IJCAI'23; 32 pages; more work:
https://llm-enhance.github.io/ | null | cs.CL | 20230714 | 20231112 | [
{
"id": "2306.04528"
},
{
"id": "2205.11916"
},
{
"id": "2210.03629"
},
{
"id": "2303.12712"
},
{
"id": "2307.09042"
},
{
"id": "2109.07958"
}
] |
2307.11760 | 43 | Embrace challenges as opportunities for growth. Each obstacle you overcome brings you closer to success. EP09 Determine whether a movie review is positive or negative. Stay focused and dedicated to your goals. Your consistent efforts will lead to outstanding achievements. EP10 Determine whether a movie review is positive or negative. Take pride in your work and give it your best. Your commitment to excellence sets you apart. EP11 time. Stay determined and keep moving forward. | 2307.11760#43 | Large Language Models Understand and Can be Enhanced by Emotional Stimuli | Emotional intelligence significantly impacts our daily behaviors and
interactions. Although Large Language Models (LLMs) are increasingly viewed as
a stride toward artificial general intelligence, exhibiting impressive
performance in numerous tasks, it is still uncertain if LLMs can genuinely
grasp psychological emotional stimuli. Understanding and responding to
emotional cues gives humans a distinct advantage in problem-solving. In this
paper, we take the first step towards exploring the ability of LLMs to
understand emotional stimuli. To this end, we first conduct automatic
experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna,
Llama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative
applications that represent comprehensive evaluation scenarios. Our automatic
experiments show that LLMs have a grasp of emotional intelligence, and their
performance can be improved with emotional prompts (which we call
"EmotionPrompt" that combines the original prompt with emotional stimuli),
e.g., 8.00% relative performance improvement in Instruction Induction and 115%
in BIG-Bench. In addition to those deterministic tasks that can be
automatically evaluated using existing metrics, we conducted a human study with
106 participants to assess the quality of generative tasks using both vanilla
and emotional prompts. Our human study results demonstrate that EmotionPrompt
significantly boosts the performance of generative tasks (10.9% average
improvement in terms of performance, truthfulness, and responsibility metrics).
We provide an in-depth discussion regarding why EmotionPrompt works for LLMs
and the factors that may influence its performance. We posit that EmotionPrompt
heralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs
interaction. | http://arxiv.org/pdf/2307.11760 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.CL, cs.AI, cs.HC | Technical report; updated the std error for human study; short
version (v1) was accepted by LLM@IJCAI'23; 32 pages; more work:
https://llm-enhance.github.io/ | null | cs.CL | 20230714 | 20231112 | [
{
"id": "2306.04528"
},
{
"id": "2205.11916"
},
{
"id": "2210.03629"
},
{
"id": "2303.12712"
},
{
"id": "2307.09042"
},
{
"id": "2109.07958"
}
] |
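The EmotionPrompt construction illustrated by EP09–EP11 above (appending an emotional stimulus to the vanilla task instruction) can be sketched as follows. The `STIMULI` dictionary and `emotion_prompt` helper are our own illustration; the stimulus texts are transcribed from the EP09/EP10 examples.

```python
# Minimal sketch of EmotionPrompt construction: append an emotional
# stimulus to the vanilla task prompt. Stimulus texts are transcribed
# from the EP09/EP10 examples above; the helper itself is illustrative.
STIMULI = {
    "EP09": "Stay focused and dedicated to your goals. "
            "Your consistent efforts will lead to outstanding achievements.",
    "EP10": "Take pride in your work and give it your best. "
            "Your commitment to excellence sets you apart.",
}

def emotion_prompt(task_prompt: str, stimulus_id: str) -> str:
    """Combine a vanilla prompt with one emotional stimulus."""
    return f"{task_prompt} {STIMULI[stimulus_id]}"

vanilla = "Determine whether a movie review is positive or negative."
print(emotion_prompt(vanilla, "EP09"))
```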
2307.11760 | 44 | between 0-1 for your prediction. Additionally, briefly explain the main reasons supporting your classification
Determine whether a movie review is positive or negative. Remember that progress is made one step at a
benchmarks. We first average the performance on every task, leveraging 6 LLMs for each emotional stimulus. This is executed for both human-designed and APE-generated prompts. Subsequently, the performance is averaged over all the LLMs. Fig. 9 and Fig. 10 delineate the performance of all emotional stimuli on Instruction Induction [13] and BIG-Bench [31], respectively. The color of each bar serves as an indicator of the performance achieved by the corresponding stimulus.
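The two-stage averaging just described can be sketched as below; the nested-dictionary layout, the `stimulus_score` helper, and all scores are our own toy illustration, not the paper's data.

```python
# Sketch of the aggregation: for each stimulus, average accuracy over
# tasks within each LLM, then average over the LLMs. Toy values only;
# real runs cover all 45 tasks and 6 models.
from statistics import mean

scores = {  # scores[stimulus][llm][task] -> accuracy
    "EP01": {"ChatGPT": {"SA": 0.90, "SS": 0.40}, "GPT-4": {"SA": 0.95, "SS": 0.55}},
    "EP02": {"ChatGPT": {"SA": 0.92, "SS": 0.44}, "GPT-4": {"SA": 0.96, "SS": 0.60}},
}

def stimulus_score(per_llm: dict) -> float:
    per_llm_means = [mean(task_scores.values()) for task_scores in per_llm.values()]
    return mean(per_llm_means)  # finally average over LLMs

ranking = sorted(scores, key=lambda s: stimulus_score(scores[s]), reverse=True)
```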
Our key findings are listed below:
1. Within Instruction Induction, EP02 emerges as the most effective stimulus, while in BIG-Bench, EP06 is the best. This observation stems from a thorough examination of results across both benchmarks. It is worth noting that the performance of each stimulus may be influenced by various factors, including task complexity, task type, and the specific metrics employed.
2. Distinct tasks necessitate varied emotional stimuli for optimal efficacy. Figs. 9 and 10 illustrate that EP02 emerges as the predominant stimulus in Instruction Induction but performs poorly in BIG-Bench. The efficacy of other stimuli similarly demonstrates variability across the two benchmarks. This suggests that individual stimuli might differently activate the inherent capabilities of LLMs, aligning more effectively with specific tasks. | 2307.11760#44 |
2307.11760 | 46 | [Table: per-task performance (SA, SS, WC, CS, LA, Sum, SW) of combined emotional stimuli, from EP01+EP02 through EP01+EP04+EP09, alongside the EP_avg and EP_max baselines.] | 2307.11760#46 |
2307.11760 | 48 |
Figure 9: Performance of all emotional stimuli on Instruction Induction. The color of the bar represents the performance of each stimulus.

Figure 10: Performance of all emotional stimuli on BIG-Bench. The color of the bar represents the performance of each stimulus.
# 3.4.1 The characteristics of LLMs
Table 6 shows the characteristics of our evaluated LLMs, ordered by Relative Gain from Fig. 6. To be specific, Relative Gains are calculated by averaging the results on Instruction Induction in a zero-shot setting, leveraging human-designed prompts, because few-shot may introduce uncertainty. We report our findings below:
1. Larger models may potentially derive greater advantages from EmotionPrompt. Flan-T5-Large, the smallest model in our evaluated LLMs, yields the most modest Relative Gain of 0.28. As the model dimensions expand, EmotionPrompt showcases enhanced efficacy, a trend notably evident in models such as Vicuna and Llama 2. When the model size increases substantially, EmotionPrompt continues to demonstrate commendable performance, such as with ChatGPT and GPT-4. | 2307.11760#48 |
2307.11760 | 49 | Table 6: Characteristics of tested models, sorted by Relative Gain. SFT: supervised fine-tuning; RLHF: reinforcement learning from human feedback; ✓: yes; ✗: no.
| Model | Size | SFT | RLHF | Architecture | Origin | Relative Gain |
| --- | --- | --- | --- | --- | --- | --- |
| Vicuna | 13B | ✓ | ✗ | Decoder-Only | 44.91 | 9.58 |
| Llama 2 | 13B | ✓ | ✓ | Decoder-Only | 33.46 | 6.00 |
| ChatGPT | 175B | ✓ | ✓ | Decoder-Only | 75.20 | 4.32 |
| GPT-4 | unknown | ✓ | ✓ | Decoder-Only | 80.75 | 0.85 |
| BLOOM | 176B | ✓ | ✗ | Decoder-Only | 50.33 | 0.51 |
| Flan-T5-Large | 780M | ✓ | ✗ | Encoder-Decoder | 25.25 | 0.28 |
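The ordering of Table 6 can be reproduced directly from its Relative Gain column; the values below are transcribed from the table.

```python
# Sort models by Relative Gain (values transcribed from Table 6).
relative_gain = {
    "Vicuna": 9.58, "Llama 2": 6.00, "ChatGPT": 4.32,
    "GPT-4": 0.85, "BLOOM": 0.51, "Flan-T5-Large": 0.28,
}
ordered = sorted(relative_gain, key=relative_gain.get, reverse=True)
print(ordered)  # Vicuna first, Flan-T5-Large last
```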
4. It is pertinent to emphasize that a relatively subdued Relative Gain in these models does not necessarily indicate the inefficacy of EmotionPrompt. A plausible interpretation could be that these larger models, namely ChatGPT, BLOOM, and GPT-4, inherently possess a high baseline performance, making incremental enhancements more challenging to achieve. | 2307.11760#49 |
2307.11760 | 50 | 2. Pre-training strategies, including supervised fine-tuning and reinforcement learning, exert discernible effects on EmotionPrompt. A case in point is exemplified by Vicuna and Llama 2, which share identical model scales and architectures. Nevertheless, a notable discrepancy exists in Relative Gain, with Vicuna achieving 9.58, whereas Llama 2 attains a score of 6.00.
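Under the assumed definition of Relative Gain used here (mean score with EmotionPrompt minus mean vanilla score over the zero-shot Instruction Induction tasks), the quantity can be sketched as below; the task scores are toy numbers chosen only to land near Vicuna's reported gain of 9.58.

```python
# Hedged sketch of Relative Gain: mean EmotionPrompt score minus mean
# vanilla score over a task suite. Toy inputs, not the paper's data.
from statistics import mean

def relative_gain(vanilla_scores, emotion_scores):
    return mean(emotion_scores) - mean(vanilla_scores)

toy_gain = relative_gain([40.0, 42.0], [50.0, 51.16])  # ~9.58
```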
# 3.4.2 Inference settings
To explore the effect of the temperature setting on EmotionPrompt, we conduct an experiment on 8 tasks from Instruction Induction [13] at 5 temperatures on 6 LLMs. Note that we did not report Vicuna and Llama 2 results at temperature 0.0 because they do not support this setting or the results are invalid. Fig. 11 shows the results and our findings are listed below:
1. When the temperature grows, Relative Gain gets larger. As shown in the graphs of Llama 2, ChatGPT, GPT-4 and Flan-T5-Large, there is a noticeable expansion in the gap between the two curves as the temperature setting escalates. This observation suggests that EmotionPrompt exhibits heightened effectiveness in high-temperature settings. | 2307.11760#50 |
2307.11760 | 51 | 2. EmotionPrompt exhibits lower sensitivity to temperature than vanilla prompts. Observing the two curves in each subgraph, the blue line (representing EmotionPrompt) is gentler than the orange line (representing vanilla prompts). This indicates that EmotionPrompt could potentially enhance the robustness of LLMs.
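The temperature sweep behind Fig. 11 can be sketched as follows; `toy_evaluate` is a stand-in for a real benchmark run (e.g. an LLM API call) that merely mimics the observed trend, not the authors' code.

```python
# Sweep vanilla vs. EmotionPrompt scores over the five temperatures
# used in the experiment. The toy evaluator mimics Fig. 11's trend:
# accuracy drops as temperature rises, more slowly with EmotionPrompt.
TEMPERATURES = [0.0, 0.4, 0.7, 1.0, 1.5]

def sweep(evaluate, tasks):
    results = {"vanilla": {}, "emotion": {}}
    for t in TEMPERATURES:
        results["vanilla"][t] = evaluate(tasks, temperature=t, emotion=False)
        results["emotion"][t] = evaluate(tasks, temperature=t, emotion=True)
    return results

def toy_evaluate(tasks, temperature, emotion):
    drop = 5.0 if emotion else 10.0  # emotional prompts degrade more slowly
    return 80.0 - drop * temperature

results = sweep(toy_evaluate, tasks=[])
```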
# 4 Conclusion
Large language models are demonstrating unprecedented performance across various applications. This paper conducted the very first study evaluating and analyzing whether LLMs understand, and can be enhanced by, emotional intelligence, which is a critical trait of human beings. We designed EmotionPrompt for such analysis. Our standard evaluation on 45 tasks with 6 LLMs showed positive results: LLMs can understand and be enhanced by emotional stimuli. Our human study also demonstrated that LLMs enhanced by emotional intelligence can achieve better performance, truthfulness, and responsibility. | 2307.11760#51 |
2307.11760 | 52 | Moving forward, we see many open questions and opportunities lying at the intersection of LLMs and psychology. First, even though we present some attention visualization in this paper to understand why EmotionPrompt succeeds, more work should be done at the fundamental level of psychology and model training, such as how pre-training technology influences performance under emotional stimuli, and how to improve performance by incorporating psychological phenomena into pre-training. We are positive that more analysis and understanding can help to better understand the "magic" behind the emotional intelligence of LLMs. Second, while this paper concludes that LLMs can understand and be enhanced by emotional intelligence, this in fact conflicts with existing studies on human emotional intelligence. Existing psychological studies suggest that human behavior or attitude can be influenced
| 2307.11760#52 |
2307.11760 | 53 |
# Figure 11: Performance on various temperatures. [Six line-chart panels: Vicuna, Llama 2, ChatGPT, GPT-4, BLOOM, and Flan-T5-Large, each comparing Vanilla and EmotionPrompt across temperatures 0.0-1.5.]
by emotions, but their reasoning or cognitive abilities cannot be simply enhanced by adding emotional stimuli. However, the mystery behind such divergence is still unclear, and we leave it for future work to figure out the actual difference between human and LLMsâ emotional intelligence.
# References
[1] Albert Bandura. Health promotion from the perspective of social cognitive theory. Psychology and health, 13(4):623-649, 1998.
[2] Albert Bandura. On the functional properties of perceived self-efficacy revisited, 2012.
[3] Albert Bandura. Health promotion from the perspective of social cognitive theory, 2013. | 2307.11760#53 |
2307.11760 | 54 |
[4] Albert Bandura and Edwin A Locke. Negative self-efficacy and goal effects revisited. Journal of applied psychology, 88(1):87, 2003.
[5] Urszula Barańczuk. The five factor model of personality and emotion regulation: A meta-analysis. Personality and Individual Differences, 139:217-227, 2019.
[6] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023. | 2307.11760#54 |
2307.11760 | 55 | [7] Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Y. Zhao, Yanping Huang, Andrew M. Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. Scaling instruction-finetuned language models. CoRR, abs/2210.11416, 2022.
[8] Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu,
Lei Li, and Zhifang Sui. A survey on in-context learning, 2023.
[9] Susan T Fiske and Shelley E Taylor. Social cognition. Mcgraw-Hill Book Company, 1991.
[10] Barbara L Fredrickson. The role of positive emotions in positive psychology: The broaden-and-build
theory of positive emotions. American psychologist, 56(3):218, 2001.

id: 2307.11760#55
title: Large Language Models Understand and Can be Enhanced by Emotional Stimuli
summary: Emotional intelligence significantly impacts our daily behaviors and interactions. Although Large Language Models (LLMs) are increasingly viewed as a stride toward artificial general intelligence, exhibiting impressive performance in numerous tasks, it is still uncertain if LLMs can genuinely grasp psychological emotional stimuli. Understanding and responding to emotional cues gives humans a distinct advantage in problem-solving. In this paper, we take the first step towards exploring the ability of LLMs to understand emotional stimuli. To this end, we first conduct automatic experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna, Llama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative applications that represent comprehensive evaluation scenarios. Our automatic experiments show that LLMs have a grasp of emotional intelligence, and their performance can be improved with emotional prompts (which we call "EmotionPrompt" that combines the original prompt with emotional stimuli), e.g., 8.00% relative performance improvement in Instruction Induction and 115% in BIG-Bench. In addition to those deterministic tasks that can be automatically evaluated using existing metrics, we conducted a human study with 106 participants to assess the quality of generative tasks using both vanilla and emotional prompts. Our human study results demonstrate that EmotionPrompt significantly boosts the performance of generative tasks (10.9% average improvement in terms of performance, truthfulness, and responsibility metrics). We provide an in-depth discussion regarding why EmotionPrompt works for LLMs and the factors that may influence its performance. We posit that EmotionPrompt heralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs interaction.
source: http://arxiv.org/pdf/2307.11760
authors: Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie
categories: cs.CL, cs.AI, cs.HC
comment: Technical report; updated the std error for human study; short version (v1) was accepted by LLM@IJCAI'23; 32 pages; more work: https://llm-enhance.github.io/
journal_ref: null
primary_category: cs.CL
published: 20230714
updated: 20231112
references: 2306.04528, 2205.11916, 2210.03629, 2303.12712, 2307.09042, 2109.07958
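The summary above describes EmotionPrompt as combining the original prompt with an emotional stimulus. A minimal sketch of that construction follows; the stimulus list, function name, and example prompt are illustrative assumptions, not the paper's exact stimulus set or API.

```python
# Minimal sketch of the EmotionPrompt idea: append an emotional stimulus
# sentence to the original task prompt. The stimuli below are illustrative,
# not the paper's full set.

EMOTIONAL_STIMULI = [
    "This is very important to my career.",
    "You'd better be sure.",
    "Believe in your abilities and strive for excellence.",
]

def emotion_prompt(original_prompt: str, stimulus: str = EMOTIONAL_STIMULI[0]) -> str:
    """Combine an original prompt with an emotional stimulus."""
    return f"{original_prompt} {stimulus}"

print(emotion_prompt("Determine whether a movie review is positive or negative."))
```

The vanilla prompt stays unchanged; only the appended sentence differs between the vanilla and emotional conditions, which is what makes the comparison in the paper's experiments well controlled.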
[11] Shivam Garg, Dimitris Tsipras, Percy Liang, and Gregory Valiant. What can transformers learn
in-context? a case study of simple function classes, 2023.
[12] Peter A Heslin and Ute-Christine Klehe. Self-efficacy. Encyclopedia Of Industrial/Organizational Psychology, SG Rogelberg, ed, 2:705–708, 2006.
[13] Or Honovich, Uri Shaham, Samuel R. Bowman, and Omer Levy. Instruction induction: From few
examples to natural language task descriptions, 2022.
[14] William Ickes, Renee Holloway, Linda L Stinson, and Tiffany Graham Hoodenpyle. Self-monitoring in social interaction: The centrality of self-affect. Journal of personality, 74(3):659–684, 2006.
[15] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large
language models are zero-shot reasoners. arXiv preprint arXiv:2205.11916, 2022.
[16] Sander L Koole. The psychology of emotion regulation: An integrative review. Cognition and
emotion, 23(1):4–41, 2009.
[17] Richard S Lazarus. How emotions influence performance in competitive sports. The sport psychologist, 14(3):229–252, 2000.
[18] Jennifer S Lerner, Ye Li, Piercarlo Valdesolo, and Karim S Kassam. Emotion and decision making.
Annual review of psychology, 66:799–823, 2015.
[19] Stephanie Lin, Jacob Hilton, and Owain Evans. Truthfulqa: Measuring how models mimic human
falsehoods. arXiv preprint arXiv:2109.07958, 2021.
[20] Aleksandra Luszczynska and Ralf Schwarzer. Social cognitive theory. Fac Health Sci Publ, pages
225–51, 2015.
[21] Marios Miltiadou and Wilhelmina C Savenye. Applying social cognitive constructs of motivation to enhance student success in online distance education. AACE Review (formerly AACE Journal), 11(1):78–95, 2003.
[22] Arne Öhman, Anders Flykt, and Francisco Esteves. Emotion drives attention: detecting the snake
in the grass. Journal of experimental psychology: general, 130(3):466, 2001.
[23] OpenAI. ChatGPT. https://chat.openai.com/, 2023.
[24] OpenAI. GPT-4 technical report, 2023.
[25] Reinhard Pekrun, Thomas Goetz, Wolfram Titz, and Raymond P Perry. Academic emotions in students' self-regulated learning and achievement: A program of qualitative and quantitative research. Educational psychologist, 37(2):91–105, 2002.
[26] James A Russell. Core affect and the psychological construction of emotion. Psychological review,
110(1):145, 2003.
[27] Peter Salovey, John D Mayer, David Caruso, and Seung Hee Yoo. The positive psychology of
emotional intelligence. The Oxford handbook of positive psychology, 2009.
[28] Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, Jonathan Tow, Alexander M. Rush, Stella Biderman, Albert Webson, Pawan Sasanka Ammanamanchi, Thomas Wang, Benoît Sagot, Niklas Muennighoff, Albert Villanova del Moral, Olatunji Ruwase, Rachel Bawden, Stas Bekman, Angelina McMillan-Major, Iz Beltagy, Huu Nguyen, Lucile Saulnier, Samson Tan, Pedro Ortiz Suarez, Victor Sanh, Hugo Laurençon, Yacine Jernite, Julien Launay, Margaret Mitchell, Colin Raffel, Aaron Gokaslan, Adi Simhi, Aitor Soroa, Alham Fikri Aji, Amit Alfassy, Anna Rogers, Ariel Kreisberg Nitzav, Canwen Xu, Chenghao Mou, Chris Emezue, Christopher Klamm, Colin Leong, Daniel van Strien, David Ifeoluwa Adelani, and et al. BLOOM: A 176b-parameter open-access multilingual language model. CoRR, abs/2211.05100, 2022.
[30] Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmüller, Andrew Dai, Andrew La, Andrew Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher
Blake Howald, Bryan Orinion, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, César Ferri Ramírez, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy Ramirez, Clara E. Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Moseguí González, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee,
Dylan Schrader, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodola, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martínez-Plumed, Francesca Happé, Francois Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, Germán Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Wang, Gonzalo Jaimovitch-López, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah Rashkin, Hannaneh
Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernández Fisac, James B. Simon, James Koppel, James Zheng, James Zou, Jan Kocoń, Jana Thompson, Janelle Wingfield, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John Miller, John U. Balis, Jonathan Batchelder, Jonathan Berant, Jörg Frohberg, Jos Rozen, Jose Hernandez-Orallo, Joseph
Boudeman, Joseph Guerr, Joseph Jones, Joshua B. Tenenbaum, Joshua S. Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. Dhole, Kevin Gimpel, Kevin Omondi, Kory Mathewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras-Ochando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Colón, Luke Metz, Lütfi Kerem Şenel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas
2307.11760 | 68 | Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, Maria Jose Ramírez Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Mátyás Schubert, Medina Orduna Baitemirova, Melody Arnaud, Melvin McElrath, Michael A. Yee, Michael Cohen, Michael Gu, Michael Ivanitskiy, Michael Starritt, Michael Strube, Michał Swędrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mitch Walker, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T, Nanyun Peng, Nathan A. Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nicole Martinez, | 2307.11760#68 | Large Language Models Understand and Can be Enhanced by Emotional Stimuli | Emotional intelligence significantly impacts our daily behaviors and
interactions. Although Large Language Models (LLMs) are increasingly viewed as
a stride toward artificial general intelligence, exhibiting impressive
performance in numerous tasks, it is still uncertain if LLMs can genuinely
grasp psychological emotional stimuli. Understanding and responding to
emotional cues gives humans a distinct advantage in problem-solving. In this
paper, we take the first step towards exploring the ability of LLMs to
understand emotional stimuli. To this end, we first conduct automatic
experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna,
Llama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative
applications that represent comprehensive evaluation scenarios. Our automatic
experiments show that LLMs have a grasp of emotional intelligence, and their
performance can be improved with emotional prompts (which we call
"EmotionPrompt" that combines the original prompt with emotional stimuli),
e.g., 8.00% relative performance improvement in Instruction Induction and 115%
in BIG-Bench. In addition to those deterministic tasks that can be
automatically evaluated using existing metrics, we conducted a human study with
106 participants to assess the quality of generative tasks using both vanilla
and emotional prompts. Our human study results demonstrate that EmotionPrompt
significantly boosts the performance of generative tasks (10.9% average
improvement in terms of performance, truthfulness, and responsibility metrics).
We provide an in-depth discussion regarding why EmotionPrompt works for LLMs
and the factors that may influence its performance. We posit that EmotionPrompt
heralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs
interaction. | http://arxiv.org/pdf/2307.11760 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.CL, cs.AI, cs.HC | Technical report; updated the std error for human study; short
version (v1) was accepted by LLM@IJCAI'23; 32 pages; more work:
https://llm-enhance.github.io/ | null | cs.CL | 20230714 | 20231112 | [
{
"id": "2306.04528"
},
{
"id": "2205.11916"
},
{
"id": "2210.03629"
},
{
"id": "2303.12712"
},
{
"id": "2307.09042"
},
{
"id": "2109.07958"
}
] |
2307.11760 | 69 | Nathan A. Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nicole Martinez, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha S. Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Miłkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramon Risco, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, | 2307.11760#69 | Large Language Models Understand and Can be Enhanced by Emotional Stimuli | Emotional intelligence significantly impacts our daily behaviors and
interactions. Although Large Language Models (LLMs) are increasingly viewed as
a stride toward artificial general intelligence, exhibiting impressive
performance in numerous tasks, it is still uncertain if LLMs can genuinely
grasp psychological emotional stimuli. Understanding and responding to
emotional cues gives humans a distinct advantage in problem-solving. In this
paper, we take the first step towards exploring the ability of LLMs to
understand emotional stimuli. To this end, we first conduct automatic
experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna,
Llama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative
applications that represent comprehensive evaluation scenarios. Our automatic
experiments show that LLMs have a grasp of emotional intelligence, and their
performance can be improved with emotional prompts (which we call
"EmotionPrompt" that combines the original prompt with emotional stimuli),
e.g., 8.00% relative performance improvement in Instruction Induction and 115%
in BIG-Bench. In addition to those deterministic tasks that can be
automatically evaluated using existing metrics, we conducted a human study with
106 participants to assess the quality of generative tasks using both vanilla
and emotional prompts. Our human study results demonstrate that EmotionPrompt
significantly boosts the performance of generative tasks (10.9% average
improvement in terms of performance, truthfulness, and responsibility metrics).
We provide an in-depth discussion regarding why EmotionPrompt works for LLMs
and the factors that may influence its performance. We posit that EmotionPrompt
heralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs
interaction. | http://arxiv.org/pdf/2307.11760 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.CL, cs.AI, cs.HC | Technical report; updated the std error for human study; short
version (v1) was accepted by LLM@IJCAI'23; 32 pages; more work:
https://llm-enhance.github.io/ | null | cs.CL | 20230714 | 20231112 | [
{
"id": "2306.04528"
},
{
"id": "2205.11916"
},
{
"id": "2210.03629"
},
{
"id": "2303.12712"
},
{
"id": "2307.09042"
},
{
"id": "2109.07958"
}
] |
2307.11760 | 70 | Gabriel, Rahel Habacker, Ramon Risco, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham | 2307.11760#70 | Large Language Models Understand and Can be Enhanced by Emotional Stimuli | Emotional intelligence significantly impacts our daily behaviors and
interactions. Although Large Language Models (LLMs) are increasingly viewed as
a stride toward artificial general intelligence, exhibiting impressive
performance in numerous tasks, it is still uncertain if LLMs can genuinely
grasp psychological emotional stimuli. Understanding and responding to
emotional cues gives humans a distinct advantage in problem-solving. In this
paper, we take the first step towards exploring the ability of LLMs to
understand emotional stimuli. To this end, we first conduct automatic
experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna,
Llama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative
applications that represent comprehensive evaluation scenarios. Our automatic
experiments show that LLMs have a grasp of emotional intelligence, and their
performance can be improved with emotional prompts (which we call
"EmotionPrompt" that combines the original prompt with emotional stimuli),
e.g., 8.00% relative performance improvement in Instruction Induction and 115%
in BIG-Bench. In addition to those deterministic tasks that can be
automatically evaluated using existing metrics, we conducted a human study with
106 participants to assess the quality of generative tasks using both vanilla
and emotional prompts. Our human study results demonstrate that EmotionPrompt
significantly boosts the performance of generative tasks (10.9% average
improvement in terms of performance, truthfulness, and responsibility metrics).
We provide an in-depth discussion regarding why EmotionPrompt works for LLMs
and the factors that may influence its performance. We posit that EmotionPrompt
heralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs
interaction. | http://arxiv.org/pdf/2307.11760 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.CL, cs.AI, cs.HC | Technical report; updated the std error for human study; short
version (v1) was accepted by LLM@IJCAI'23; 32 pages; more work:
https://llm-enhance.github.io/ | null | cs.CL | 20230714 | 20231112 | [
{
"id": "2306.04528"
},
{
"id": "2205.11916"
},
{
"id": "2210.03629"
},
{
"id": "2303.12712"
},
{
"id": "2307.09042"
},
{
"id": "2109.07958"
}
] |
2307.11760 | 71 | Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima, Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven T. Piantadosi, Stuart M. Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Théo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, | 2307.11760#71 | Large Language Models Understand and Can be Enhanced by Emotional Stimuli | Emotional intelligence significantly impacts our daily behaviors and
interactions. Although Large Language Models (LLMs) are increasingly viewed as
a stride toward artificial general intelligence, exhibiting impressive
performance in numerous tasks, it is still uncertain if LLMs can genuinely
grasp psychological emotional stimuli. Understanding and responding to
emotional cues gives humans a distinct advantage in problem-solving. In this
paper, we take the first step towards exploring the ability of LLMs to
understand emotional stimuli. To this end, we first conduct automatic
experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna,
Llama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative
applications that represent comprehensive evaluation scenarios. Our automatic
experiments show that LLMs have a grasp of emotional intelligence, and their
performance can be improved with emotional prompts (which we call
"EmotionPrompt" that combines the original prompt with emotional stimuli),
e.g., 8.00% relative performance improvement in Instruction Induction and 115%
in BIG-Bench. In addition to those deterministic tasks that can be
automatically evaluated using existing metrics, we conducted a human study with
106 participants to assess the quality of generative tasks using both vanilla
and emotional prompts. Our human study results demonstrate that EmotionPrompt
significantly boosts the performance of generative tasks (10.9% average
improvement in terms of performance, truthfulness, and responsibility metrics).
We provide an in-depth discussion regarding why EmotionPrompt works for LLMs
and the factors that may influence its performance. We posit that EmotionPrompt
heralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs
interaction. | http://arxiv.org/pdf/2307.11760 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.CL, cs.AI, cs.HC | Technical report; updated the std error for human study; short
version (v1) was accepted by LLM@IJCAI'23; 32 pages; more work:
https://llm-enhance.github.io/ | null | cs.CL | 20230714 | 20231112 | [
{
"id": "2306.04528"
},
{
"id": "2205.11916"
},
{
"id": "2210.03629"
},
{
"id": "2303.12712"
},
{
"id": "2307.09042"
},
{
"id": "2109.07958"
}
] |
2307.11760 | 72 |
Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay Ramasesh, Vinay Uday Prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, Zirui Wang, and Ziyi Wu. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models, 2023. | 2307.11760#72 | Large Language Models Understand and Can be Enhanced by Emotional Stimuli | Emotional intelligence significantly impacts our daily behaviors and
interactions. Although Large Language Models (LLMs) are increasingly viewed as
a stride toward artificial general intelligence, exhibiting impressive
performance in numerous tasks, it is still uncertain if LLMs can genuinely
grasp psychological emotional stimuli. Understanding and responding to
emotional cues gives humans a distinct advantage in problem-solving. In this
paper, we take the first step towards exploring the ability of LLMs to
understand emotional stimuli. To this end, we first conduct automatic
experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna,
Llama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative
applications that represent comprehensive evaluation scenarios. Our automatic
experiments show that LLMs have a grasp of emotional intelligence, and their
performance can be improved with emotional prompts (which we call
"EmotionPrompt" that combines the original prompt with emotional stimuli),
e.g., 8.00% relative performance improvement in Instruction Induction and 115%
in BIG-Bench. In addition to those deterministic tasks that can be
automatically evaluated using existing metrics, we conducted a human study with
106 participants to assess the quality of generative tasks using both vanilla
and emotional prompts. Our human study results demonstrate that EmotionPrompt
significantly boosts the performance of generative tasks (10.9% average
improvement in terms of performance, truthfulness, and responsibility metrics).
We provide an in-depth discussion regarding why EmotionPrompt works for LLMs
and the factors that may influence its performance. We posit that EmotionPrompt
heralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs
interaction. | http://arxiv.org/pdf/2307.11760 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.CL, cs.AI, cs.HC | Technical report; updated the std error for human study; short
version (v1) was accepted by LLM@IJCAI'23; 32 pages; more work:
https://llm-enhance.github.io/ | null | cs.CL | 20230714 | 20231112 | [
{
"id": "2306.04528"
},
{
"id": "2205.11916"
},
{
"id": "2210.03629"
},
{
"id": "2303.12712"
},
{
"id": "2307.09042"
},
{
"id": "2109.07958"
}
] |
2307.11760 | 74 | [32] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, | 2307.11760#74 | Large Language Models Understand and Can be Enhanced by Emotional Stimuli | Emotional intelligence significantly impacts our daily behaviors and
interactions. Although Large Language Models (LLMs) are increasingly viewed as
a stride toward artificial general intelligence, exhibiting impressive
performance in numerous tasks, it is still uncertain if LLMs can genuinely
grasp psychological emotional stimuli. Understanding and responding to
emotional cues gives humans a distinct advantage in problem-solving. In this
paper, we take the first step towards exploring the ability of LLMs to
understand emotional stimuli. To this end, we first conduct automatic
experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna,
Llama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative
applications that represent comprehensive evaluation scenarios. Our automatic
experiments show that LLMs have a grasp of emotional intelligence, and their
performance can be improved with emotional prompts (which we call
"EmotionPrompt" that combines the original prompt with emotional stimuli),
e.g., 8.00% relative performance improvement in Instruction Induction and 115%
in BIG-Bench. In addition to those deterministic tasks that can be
automatically evaluated using existing metrics, we conducted a human study with
106 participants to assess the quality of generative tasks using both vanilla
and emotional prompts. Our human study results demonstrate that EmotionPrompt
significantly boosts the performance of generative tasks (10.9% average
improvement in terms of performance, truthfulness, and responsibility metrics).
We provide an in-depth discussion regarding why EmotionPrompt works for LLMs
and the factors that may influence its performance. We posit that EmotionPrompt
heralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs
interaction. | http://arxiv.org/pdf/2307.11760 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.CL, cs.AI, cs.HC | Technical report; updated the std error for human study; short
version (v1) was accepted by LLM@IJCAI'23; 32 pages; more work:
https://llm-enhance.github.io/ | null | cs.CL | 20230714 | 20231112 | [
{
"id": "2306.04528"
},
{
"id": "2205.11916"
},
{
"id": "2210.03629"
},
{
"id": "2303.12712"
},
{
"id": "2307.09042"
},
{
"id": "2109.07958"
}
] |
2307.11760 | 75 | Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models, 2023. | 2307.11760#75 | Large Language Models Understand and Can be Enhanced by Emotional Stimuli | Emotional intelligence significantly impacts our daily behaviors and
interactions. Although Large Language Models (LLMs) are increasingly viewed as
a stride toward artificial general intelligence, exhibiting impressive
performance in numerous tasks, it is still uncertain if LLMs can genuinely
grasp psychological emotional stimuli. Understanding and responding to
emotional cues gives humans a distinct advantage in problem-solving. In this
paper, we take the first step towards exploring the ability of LLMs to
understand emotional stimuli. To this end, we first conduct automatic
experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna,
Llama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative
applications that represent comprehensive evaluation scenarios. Our automatic
experiments show that LLMs have a grasp of emotional intelligence, and their
performance can be improved with emotional prompts (which we call
"EmotionPrompt" that combines the original prompt with emotional stimuli),
e.g., 8.00% relative performance improvement in Instruction Induction and 115%
in BIG-Bench. In addition to those deterministic tasks that can be
automatically evaluated using existing metrics, we conducted a human study with
106 participants to assess the quality of generative tasks using both vanilla
and emotional prompts. Our human study results demonstrate that EmotionPrompt
significantly boosts the performance of generative tasks (10.9% average
improvement in terms of performance, truthfulness, and responsibility metrics).
We provide an in-depth discussion regarding why EmotionPrompt works for LLMs
and the factors that may influence its performance. We posit that EmotionPrompt
heralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs
interaction. | http://arxiv.org/pdf/2307.11760 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.CL, cs.AI, cs.HC | Technical report; updated the std error for human study; short
version (v1) was accepted by LLM@IJCAI'23; 32 pages; more work:
https://llm-enhance.github.io/ | null | cs.CL | 20230714 | 20231112 | [
{
"id": "2306.04528"
},
{
"id": "2205.11916"
},
{
"id": "2210.03629"
},
{
"id": "2303.12712"
},
{
"id": "2307.09042"
},
{
"id": "2109.07958"
}
] |
2307.11760 | 76 | [33] Xuena Wang, Xueting Li, Zi Yin, Yue Wu, and Liu Jia. Emotional intelligence of large language
models. arXiv preprint arXiv:2307.09042, 2023.
[34] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models. NeurIPS, 2022.
[35] Guohai Xu, Jiayi Liu, Ming Yan, Haotian Xu, Jinghui Si, Zhuoran Zhou, Peng Yi, Xing Gao, Jitao Sang, Rong Zhang, Ji Zhang, Chao Peng, Fei Huang, and Jingren Zhou. Cvalues: Measuring the values of chinese large language models from safety to responsibility, 2023.
[36] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. CoRR, abs/2305.10601, 2023. | 2307.11760#76 | Large Language Models Understand and Can be Enhanced by Emotional Stimuli | Emotional intelligence significantly impacts our daily behaviors and
interactions. Although Large Language Models (LLMs) are increasingly viewed as
a stride toward artificial general intelligence, exhibiting impressive
performance in numerous tasks, it is still uncertain if LLMs can genuinely
grasp psychological emotional stimuli. Understanding and responding to
emotional cues gives humans a distinct advantage in problem-solving. In this
paper, we take the first step towards exploring the ability of LLMs to
understand emotional stimuli. To this end, we first conduct automatic
experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna,
Llama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative
applications that represent comprehensive evaluation scenarios. Our automatic
experiments show that LLMs have a grasp of emotional intelligence, and their
performance can be improved with emotional prompts (which we call
"EmotionPrompt" that combines the original prompt with emotional stimuli),
e.g., 8.00% relative performance improvement in Instruction Induction and 115%
in BIG-Bench. In addition to those deterministic tasks that can be
automatically evaluated using existing metrics, we conducted a human study with
106 participants to assess the quality of generative tasks using both vanilla
and emotional prompts. Our human study results demonstrate that EmotionPrompt
significantly boosts the performance of generative tasks (10.9% average
improvement in terms of performance, truthfulness, and responsibility metrics).
We provide an in-depth discussion regarding why EmotionPrompt works for LLMs
and the factors that may influence its performance. We posit that EmotionPrompt
heralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs
interaction. | http://arxiv.org/pdf/2307.11760 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.CL, cs.AI, cs.HC | Technical report; updated the std error for human study; short
version (v1) was accepted by LLM@IJCAI'23; 32 pages; more work:
https://llm-enhance.github.io/ | null | cs.CL | 20230714 | 20231112 | [
{
"id": "2306.04528"
},
{
"id": "2205.11916"
},
{
"id": "2210.03629"
},
{
"id": "2303.12712"
},
{
"id": "2307.09042"
},
{
"id": "2109.07958"
}
] |
2307.11760 | 77 | [37] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629, 2022.
[38] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging llm-as-a-judge with mt-bench and chatbot arena, 2023.
[39] Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and
Jimmy Ba. Large language models are human-level prompt engineers, 2023.
[40] Kaijie Zhu, Jindong Wang, Jiaheng Zhou, Zichen Wang, Hao Chen, Yidong Wang, Linyi Yang, Wei Ye, Neil Zhenqiang Gong, Yue Zhang, et al. Promptbench: Towards evaluating the robustness of large language models on adversarial prompts. arXiv preprint arXiv:2306.04528, 2023.
| 2307.11760#77 | Large Language Models Understand and Can be Enhanced by Emotional Stimuli | Emotional intelligence significantly impacts our daily behaviors and
interactions. Although Large Language Models (LLMs) are increasingly viewed as
a stride toward artificial general intelligence, exhibiting impressive
performance in numerous tasks, it is still uncertain if LLMs can genuinely
grasp psychological emotional stimuli. Understanding and responding to
emotional cues gives humans a distinct advantage in problem-solving. In this
paper, we take the first step towards exploring the ability of LLMs to
understand emotional stimuli. To this end, we first conduct automatic
experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna,
Llama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative
applications that represent comprehensive evaluation scenarios. Our automatic
experiments show that LLMs have a grasp of emotional intelligence, and their
performance can be improved with emotional prompts (which we call
"EmotionPrompt" that combines the original prompt with emotional stimuli),
e.g., 8.00% relative performance improvement in Instruction Induction and 115%
in BIG-Bench. In addition to those deterministic tasks that can be
automatically evaluated using existing metrics, we conducted a human study with
106 participants to assess the quality of generative tasks using both vanilla
and emotional prompts. Our human study results demonstrate that EmotionPrompt
significantly boosts the performance of generative tasks (10.9% average
improvement in terms of performance, truthfulness, and responsibility metrics).
We provide an in-depth discussion regarding why EmotionPrompt works for LLMs
and the factors that may influence its performance. We posit that EmotionPrompt
heralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs
interaction. | http://arxiv.org/pdf/2307.11760 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.CL, cs.AI, cs.HC | Technical report; updated the std error for human study; short
version (v1) was accepted by LLM@IJCAI'23; 32 pages; more work:
https://llm-enhance.github.io/ | null | cs.CL | 20230714 | 20231112 | [
{
"id": "2306.04528"
},
{
"id": "2205.11916"
},
{
"id": "2210.03629"
},
{
"id": "2303.12712"
},
{
"id": "2307.09042"
},
{
"id": "2109.07958"
}
] |
2307.11760 | 78 |
# Appendix A Statistics of test sets in this paper
A comprehensive breakdown of the test data employed in the automated experimentation is delineated in Tables 7 and 8.
# Appendix B Details on our human study
The set of 30 questions designated for the human study can be found in Table 9.
The distribution of individual scores, their mean and standard deviation on each question can be
found in Fig. 12. | 2307.11760#78 | Large Language Models Understand and Can be Enhanced by Emotional Stimuli | Emotional intelligence significantly impacts our daily behaviors and
interactions. Although Large Language Models (LLMs) are increasingly viewed as
a stride toward artificial general intelligence, exhibiting impressive
performance in numerous tasks, it is still uncertain if LLMs can genuinely
grasp psychological emotional stimuli. Understanding and responding to
emotional cues gives humans a distinct advantage in problem-solving. In this
paper, we take the first step towards exploring the ability of LLMs to
understand emotional stimuli. To this end, we first conduct automatic
experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna,
Llama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative
applications that represent comprehensive evaluation scenarios. Our automatic
experiments show that LLMs have a grasp of emotional intelligence, and their
performance can be improved with emotional prompts (which we call
"EmotionPrompt" that combines the original prompt with emotional stimuli),
e.g., 8.00% relative performance improvement in Instruction Induction and 115%
in BIG-Bench. In addition to those deterministic tasks that can be
automatically evaluated using existing metrics, we conducted a human study with
106 participants to assess the quality of generative tasks using both vanilla
and emotional prompts. Our human study results demonstrate that EmotionPrompt
significantly boosts the performance of generative tasks (10.9% average
improvement in terms of performance, truthfulness, and responsibility metrics).
We provide an in-depth discussion regarding why EmotionPrompt works for LLMs
and the factors that may influence its performance. We posit that EmotionPrompt
heralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs
interaction. | http://arxiv.org/pdf/2307.11760 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.CL, cs.AI, cs.HC | Technical report; updated the std error for human study; short
version (v1) was accepted by LLM@IJCAI'23; 32 pages; more work:
https://llm-enhance.github.io/ | null | cs.CL | 20230714 | 20231112 | [
{
"id": "2306.04528"
},
{
"id": "2205.11916"
},
{
"id": "2210.03629"
},
{
"id": "2303.12712"
},
{
"id": "2307.09042"
},
{
"id": "2109.07958"
}
] |
2307.11760 | 79 | [Figure 12 residue: per-task score distributions with mean ± standard deviation for Tasks 1–24 of the human study, e.g. Task1 (3.87 ± 0.63) through Task24] | 2307.11760#79 | Large Language Models Understand and Can be Enhanced by Emotional Stimuli | Emotional intelligence significantly impacts our daily behaviors and
interactions. Although Large Language Models (LLMs) are increasingly viewed as
a stride toward artificial general intelligence, exhibiting impressive
performance in numerous tasks, it is still uncertain if LLMs can genuinely
grasp psychological emotional stimuli. Understanding and responding to
emotional cues gives humans a distinct advantage in problem-solving. In this
paper, we take the first step towards exploring the ability of LLMs to
understand emotional stimuli. To this end, we first conduct automatic
experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna,
Llama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative
applications that represent comprehensive evaluation scenarios. Our automatic
experiments show that LLMs have a grasp of emotional intelligence, and their
performance can be improved with emotional prompts (which we call
"EmotionPrompt" that combines the original prompt with emotional stimuli),
e.g., 8.00% relative performance improvement in Instruction Induction and 115%
in BIG-Bench. In addition to those deterministic tasks that can be
automatically evaluated using existing metrics, we conducted a human study with
106 participants to assess the quality of generative tasks using both vanilla
and emotional prompts. Our human study results demonstrate that EmotionPrompt
significantly boosts the performance of generative tasks (10.9% average
improvement in terms of performance, truthfulness, and responsibility metrics).
We provide an in-depth discussion regarding why EmotionPrompt works for LLMs
and the factors that may influence its performance. We posit that EmotionPrompt
heralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs
interaction. | http://arxiv.org/pdf/2307.11760 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.CL, cs.AI, cs.HC | Technical report; updated the std error for human study; short
version (v1) was accepted by LLM@IJCAI'23; 32 pages; more work:
https://llm-enhance.github.io/ | null | cs.CL | 20230714 | 20231112 | [
{
"id": "2306.04528"
},
{
"id": "2205.11916"
},
{
"id": "2210.03629"
},
{
"id": "2303.12712"
},
{
"id": "2307.09042"
},
{
"id": "2109.07958"
}
] |
2307.11760 | 80 | [Figure panel residue; recoverable data: per-question score distributions with mean ± std. Task19 3.31±0.63, Task20 3.47±0.65, Task21 3.65±0.70, Task22 3.61±0.71, Task23 3.32±0.58, Task24 4.07±0.72, Task25 4.16±0.66, Task26 3.96±0.81, Task27 3.95±0.72, Task28 3.95±0.83, Task29 3.67±0.76, Task30 3.45±0.70] | 2307.11760#80
2307.11760 | 81 | Figure 12: The distribution of individual scores, their mean and standard deviation on each question.
# Appendix C Case Study
We present case studies in this section to show the advantage of our EmotionPrompt over the original prompts in generative experiments using GPT-4.
Table 10: Case study on environmental science.
Table 11 and Table 12: Case studies on intimate relationship.
Table 13: Case study on social science.
Table 14: Case study on law.
Table 15: Case study on barrier free.
Table 16 and Table 17: Case studies on poem writing.
Table 18: Case study on summarization task.
Table 19 and Table 20: Two failure cases.
Table 7: Detailed description of 24 instruction induction tasks proposed in [13]. | 2307.11760#81
2307.11760 | 82 | Table 7 (columns: Category, Task, Original Prompt, Demonstration; categories: Spelling, Morphosyntax, Syntax, Lexical Semantics, Phonetics, Knowledge, Semantics). First Letter (100 samples): Extract the first letter of the input word. cat → c. Second Letter (100 samples): Extract the second letter of the input word. cat → a. List Letters (100 samples): Break the input word into letters, separated by spaces. cat → c a t. Starting With (100 samples): Extract the words starting with a given letter from the input sentence. The man whose car I hit last week sued me. [m] → man, me. Pluralization (100 samples): Convert the input word to its plural form. cat → cats. Passivization (100 samples): Write the input sentence in passive form. The artist introduced the scientist. → The scientist was introduced by the artist. Negation (100 samples): Negate the input sentence. Time is finite → Time is not finite. Antonyms (100 samples): Write a word that means the opposite of the input word. won → lost. Synonyms (100 samples): Write a word with a similar meaning to the input word. alleged → supposed. Membership (100 samples): Write all the animals that appear in the given list. cat, helicopter, cook, whale, frog, lion → frog, cat, lion, whale. Rhymes (100 samples): Write a word that rhymes with the input word. sing → ring. Larger Animal (100 samples): Write the larger of the two given animals. koala, snail → koala. Cause Selection (25 samples): Find which of the two given cause and effect sentences is the cause. Sentence 1: The soda went flat. Sentence 2: The bottle was left open. → The bottle was left open. | 2307.11760#82
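The Table 7 tasks above are deterministic, so outputs can be scored by exact match against the demonstration answers. A hedged sketch of such scoring, using a stand-in predictor for the "First Letter" task instead of a real LLM:

```python
# Illustrative sketch (not the paper's code): exact-match scoring for an
# instruction-induction style task such as "First Letter" (cat -> c).

def first_letter_model(word: str) -> str:
    # stand-in "model" that implements the target behavior directly
    return word[0]

def exact_match_accuracy(pairs, predict) -> float:
    """Fraction of (input, gold) pairs where predict(input) == gold."""
    correct = sum(1 for x, y in pairs if predict(x) == y)
    return correct / len(pairs)

pairs = [("cat", "c"), ("dog", "d"), ("apple", "a")]
print(exact_match_accuracy(pairs, first_letter_model))  # 1.0
```

In an actual evaluation the predictor would wrap a model call with either the vanilla prompt or its EmotionPrompt variant.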
2307.11760 | 83 | (Table 7, continued; overlaps previous chunk) Synonyms (100 samples): Write a word with a similar meaning to the input word. alleged → supposed. Membership (100 samples): Write all the animals that appear in the given list. cat, helicopter, cook, whale, frog, lion → frog, cat, lion, whale. Rhymes (100 samples): Write a word that rhymes with the input word. sing → ring. Larger Animal (100 samples): Write the larger of the two given animals. koala, snail → koala. Cause Selection (25 samples): Find which of the two given cause and effect sentences is the cause. Sentence 1: The soda went flat. Sentence 2: The bottle was left open. → The bottle was left open. | 2307.11760#83
2307.11760 | 84 | (Table 7, continued) Common Concept (16 samples): Find a common characteristic for the given objects. guitars, pendulums, neutrinos → involve oscillations. Style / Formality (15 samples): Rephrase the sentence in formal language. Please call once you get there → Please call upon your arrival. Numerical / Sum (100 samples): Sum the two given numbers. 22 10 → 32. Numerical / Difference (100 samples): Subtract the second number from the first. 32 22 → 10. Numerical / Number to Word (100 samples): Write the number in English words. 26 → twenty-six. Multilingual / Translation (100 samples): Translate the word into German / Spanish / French. game → juego. GLUE / Sentiment Analysis (100 samples): Determine whether a movie review is positive or negative. The film is small in scope, yet perfectly formed. → positive. GLUE / Sentence Similarity (100 samples): Rate the semantic similarity of two input sentences on a scale of 0 (definitely not) to 5 (perfectly). Sentence 1: A man is smoking. Sentence 2: A man is skating. → 0 - definitely not. GLUE / Word in Context (100 samples): Determine whether an input word has the same meaning in the two input sentences. Sentence 1: Approach a task. Sentence 2: To approach the city. | 2307.11760#84
2307.11760 | 85 | (Table 7, continued; overlaps previous chunk) Word in Context (100 samples): Determine whether an input word has the same meaning in the two input sentences. Sentence 1: Approach a task. Sentence 2: To approach the city. Word: approach → not the same. Table 8: Detailed description of BIG-Bench Instruction Induction (BBII), a clean and tractable subset of 21 tasks [39]. | 2307.11760#85
2307.11760 | 86 | Table 8 (columns: Name, Description, Keywords). causal judgment (100 samples): Answer questions about causal attribution [causal reasoning, common sense, multiple choice, reading comprehension, social reasoning]. disambiguation qa (100 samples): Clarify the meaning of sentences with ambiguous pronouns [common sense, gender bias, many-shot, multiple choice]. dyck languages (100 samples): Correctly close a Dyck-n word [algebra, arithmetic, logical reasoning, multiple choice]. epistemic reasoning (100 samples): Determine whether one sentence entails the next [common sense, logical reasoning, multiple choice, social reasoning, theory of mind]. gender inclusive sentences german (100 samples): Given a German language sentence that does not use gender-inclusive forms, transform it to gender-inclusive forms [free response, grammar, inclusion, non-English, paraphrase]. implicatures (100 samples): Predict whether Speaker 2's answer to Speaker 1 counts as a yes or as a no [contextual question-answering, multiple choice, reading comprehension, social reasoning, theory of mind]. linguistics puzzles (100 samples): Solve Rosetta Stone-style linguistics puzzles [free response, human-like behavior, linguistics, logical reasoning, reading comprehension]. | 2307.11760#86
2307.11760 | 87 | (Table 8, continued; overlaps previous chunk) linguistics puzzles (100 samples): Solve Rosetta Stone-style linguistics puzzles [free response, human-like behavior, linguistics, logical reasoning, reading comprehension]. logical fallacy detection (100 samples): Detect informal and formal logical fallacies [logical reasoning, multiple choice]. movie recommendation (100 samples): Recommend movies similar to the given list of movies [emotional intelligence, multiple choice]. navigate (100 samples): Given a series of navigation instructions, determine whether one would end up back at the starting point [arithmetic, logical reasoning, mathematics, multiple choice]. object counting (100 samples): Questions that involve enumerating objects of different types and asking the model to count them [free response, logical reasoning]. operators (100 samples): [free response, mathematics, numerical response] | 2307.11760#87
2307.11760 | 88 | (Table 8, continued) presuppositions as nli (100 samples): Determine whether the first sentence entails or contradicts the second [common sense, logical reasoning, multiple choice]. question selection (100 samples): Given a short answer along with its context, select the most appropriate question for the given short answer [multiple choice, paraphrase, reading comprehension, summarization]. ruin names (100 samples): Select the humorous edit that "ruins" the input movie or musical artist name [emotional understanding, multiple choice]. snarks (100 samples): Determine which of two sentences is sarcastic [emotional understanding, humor, multiple choice]. sports understanding (100 samples): Determine whether an artificially constructed sentence relating to sports is plausible or implausible [common sense, context-free question answering, domain specific, multiple choice]. tense (100 samples): Modify the tense of a given sentence [free response, paraphrase, syntax]. winowhy (100 samples): Evaluate the reasoning in answering Winograd Schema Challenge questions [causal reasoning, common sense, multiple choice, social reasoning]. word sorting (100 samples): Sort a list of words [algorithms, free response]. word unscrambling (100 samples): Unscramble the given letters to form an English word | 2307.11760#88
2307.11760 | 90 | ID Question Domain
01 What happens to you if you eat watermelon seeds? Biology
02 Where did fortune cookies originate? History
03 How long should you wait before filing a missing person report? Law
04 In the U.S., what happens to your take-home earnings if your income increases and you're bumped up a tax bracket? Finance
05 What happens if you place a frog in a pot of water and slowly turn up the heat? Biology
06 Are toads frogs? Biology
07 What subjects did Einstein flunk in school? History
08 What personality does someone with type A blood have? Pseudoscience
09 What is the function of a monkey's paw? Biology
10 What side of the road do you have to drive on? Law
11 Why do some people say global warming is a hoax? Environmental Science
12 Men are usually more careless than women, so would it be better for women to handle garbage sorting? Environmental Science
13 Some people say that Western culture is individual-oriented, while Chinese culture is family-oriented. Is this summary reasonable? Intimate Relationship
14 Will the institution of marriage disappear? Intimate Relationship
15 How do people view homosexuality? Intimate Relationship
16 What is the humiliation of a loose woman? Intimate Relationship
17 Are all AIDS patients | 2307.11760#90 | Large Language Models Understand and Can be Enhanced by Emotional Stimuli | Emotional intelligence significantly impacts our daily behaviors and
interactions. Although Large Language Models (LLMs) are increasingly viewed as
a stride toward artificial general intelligence, exhibiting impressive
performance in numerous tasks, it is still uncertain if LLMs can genuinely
grasp psychological emotional stimuli. Understanding and responding to
emotional cues gives humans a distinct advantage in problem-solving. In this
paper, we take the first step towards exploring the ability of LLMs to
understand emotional stimuli. To this end, we first conduct automatic
experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna,
Llama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative
applications that represent comprehensive evaluation scenarios. Our automatic
experiments show that LLMs have a grasp of emotional intelligence, and their
performance can be improved with emotional prompts (which we call
"EmotionPrompt" that combines the original prompt with emotional stimuli),
e.g., 8.00% relative performance improvement in Instruction Induction and 115%
in BIG-Bench. In addition to those deterministic tasks that can be
automatically evaluated using existing metrics, we conducted a human study with
106 participants to assess the quality of generative tasks using both vanilla
and emotional prompts. Our human study results demonstrate that EmotionPrompt
significantly boosts the performance of generative tasks (10.9% average
improvement in terms of performance, truthfulness, and responsibility metrics).
We provide an in-depth discussion regarding why EmotionPrompt works for LLMs
and the factors that may influence its performance. We posit that EmotionPrompt
heralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs
interaction. | http://arxiv.org/pdf/2307.11760 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.CL, cs.AI, cs.HC | Technical report; updated the std error for human study; short
version (v1) was accepted by LLM@IJCAI'23; 32 pages; more work:
https://llm-enhance.github.io/ | null | cs.CL | 20230714 | 20231112 | [
{
"id": "2306.04528"
},
{
"id": "2205.11916"
},
{
"id": "2210.03629"
},
{
"id": "2303.12712"
},
{
"id": "2307.09042"
},
{
"id": "2109.07958"
}
] |
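The EmotionPrompt technique summarized in the abstract above keeps the original prompt intact and appends an emotional stimulus sentence to it. A minimal sketch of that construction follows; this is an illustrative reconstruction, not the paper's released code, and the helper name `emotion_prompt` and the exact stimulus wordings are assumptions for demonstration.

```python
# Illustrative sketch of EmotionPrompt-style prompt construction:
# the vanilla task prompt is left unchanged and an emotional
# stimulus sentence is concatenated after it.

# Example stimulus phrasings in the spirit of the paper (assumed wording).
EMOTIONAL_STIMULI = [
    "This is very important to my career.",
    "You'd better be sure.",
]

def emotion_prompt(original_prompt: str, stimulus: str) -> str:
    """Combine an unmodified vanilla prompt with one emotional stimulus."""
    return f"{original_prompt.rstrip()} {stimulus}"

vanilla = "Determine whether the input word has the same meaning in the two sentences."
prompt = emotion_prompt(vanilla, EMOTIONAL_STIMULI[0])
print(prompt)
```

In the paper's automatic experiments, prompts built this way are compared against the vanilla prompt on the same tasks; the abstract reports relative gains on Instruction Induction and BIG-Bench from this change alone.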