doi (string, 10 chars) | chunk-id (int64, 0–936) | chunk (string, 401–2.02k chars) | id (string, 12–14 chars) | title (string, 8–162 chars) | summary (string, 228–1.92k chars) | source (string, 31 chars) | authors (string, 7–6.97k chars) | categories (string, 5–107 chars) | comment (string, 4–398 chars, nullable) | journal_ref (string, 8–194 chars, nullable) | primary_category (string, 5–17 chars) | published (string, 8 chars) | updated (string, 8 chars) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2307.09288 | 2 | # GenAI, Meta
# Abstract
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.
∗Equal contribution, corresponding authors: {tscialom, htouvron}@meta.com. †Second author.
Contributions for all the authors can be found in Section A.1.
# Contents
# 1 Introduction | 2307.09288#2 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
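The record above illustrates the row schema described in the header. As a minimal sketch of how such rows could be read, assuming a hypothetical Hugging Face Hub path and split name (the dump does not identify the dataset):

```python
from datasets import load_dataset

# Hypothetical Hub path and split name; the dump does not name the dataset.
ds = load_dataset("someuser/arxiv-paper-chunks", split="train")

row = ds[0]
print(row["doi"], row["chunk-id"], row["title"])   # e.g. 2307.09288 2 Llama 2: ...
print(row["chunk"][:200])                          # chunk text (401-2.02k chars per row)
print([ref["id"] for ref in row["references"]])    # arXiv IDs cited by the paper
```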
2307.09042 | 3 | https://emotional-intelligence.github.io/
Keywords: Emotional Intelligence, Emotional Understanding, LLM, human-likeness
# Introduction
Imagine an ancient male making a necklace from a pile of shells as a gift for a female. This endeavor would require at least two distinct types of abilities. First, he would need the foresight to conceptualize that if a hole were punched in each shell and a string threaded through these holes, the shells could form a necklace. Second, he must possess a rudimentary level of empathy, inferring that the female recipient of the necklace would likely experience joy. The former ability is a manifestation of the Systemizing Mechanism (Baron-Cohen, 2020), enabling humans to become the scientific and technological masters of our physical world. The latter, on the other hand, is referred to as Emotional Intelligence (EI), which allows us to think about our own and others' thoughts and feelings, thereby aiding us in navigating the social world (Mayer, Perkins, et al., 2001; Mayer, Salovey, et al., 2001; Salovey & Mayer, 1990). | 2307.09042#3 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite feeling he had underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2.1 Pretraining Data; 2.2 Training Details; 2.3 Llama 2 Pretrained Model Evaluation; 3.1 Supervised Fine-Tuning (SFT); 3.2 Reinforcement Learning with Human Feedback (RLHF); 3.3 System Message for Multi-Turn Consistency; 3.4 RLHF Results | 2307.09288#3 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 4 | In recent years, Large Language Models (LLMs) have made substantial strides, showcasing their expertise across multiple disciplines including mathematics, coding, visual comprehension, medicine, law, and psychology (Bubeck et al., 2023). Their impressive performance in logic-based tasks implies that LLMs, such as GPT-4, might be equipped with the Systemizing Mechanism comparable to human intelligence. Indeed, GPT-4 outperformed 99% of human participants in a modified text-based IQ test, a feat aligning with the elite MENSA level of general intelligence (King, 2023). | 2307.09042#4 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite feeling he had underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 4 | . . . . . . . . . . . . . . . 3.4 RLHF Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Safety in Pretraining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2 Safety Fine-Tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.3 Red Teaming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.4 Safety Evaluation of Llama 2-Chat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.1 Learnings and Observations . . . . . . . . . . . . . . . . . . . . . . . . | 2307.09288#4 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
In contrast, investigations into the empathy of LLMs are relatively scarce and less systematic. Previous studies have mainly used the Theory of Mind (ToM) task, which measures the ability to understand and interpret others' mental states. LLMs launched before 2022 showed virtually no ToM ability (Kosinski, 2023; Sap et al., 2023), whereas more recent models have shown significant improvement. For example, the LLM "text-davinci-002" (January 2022) achieved an accuracy of 70%, comparable to that of six-year-old children, while "text-davinci-003" (November 2022) reached 93%, on par with seven-year-old children (Kosinski, 2023). Notably, the most advanced model, GPT-4, attained 100% accuracy with in-context learning (Moghaddam & Honey,
| 2307.09042#5 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite feeling he had underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 5 | . . . . . . . 5.1 Learnings and Observations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.2 Limitations and Ethical Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.3 Responsible Release Strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.1 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.2 Additional Details for Pretraining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.3 Additional Details for Fine-tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.4 Additional Details for Safety . . . . . . . . . . . . | 2307.09288#5 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 6 | 2023). While the ToM task provides valuable insights, it is not suitable as a standardized test of EI for two reasons. First, ToM is a heterogeneous concept, spanning from false belief, the understanding that others can hold beliefs about the world that diverge from reality (Baron-Cohen et al., 1985), to pragmatic reasoning, the ability to incorporate contextual information and practical considerations when solving problems in real-world situations (Sperber & Wilson, 2002). Consequently, the heterogeneous nature of the ToM task may not meet the reliability and validity standards of psychometric tests. Second, the ToM task is generally easy for typical human participants, rendering it more suitable as a diagnostic tool for EI-related disorders such as autism spectrum disorder than as a discriminative test for the general population. Accordingly, standardized tests of EI, such as the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT; Mayer et al., 2003), do not include the ToM task. | 2307.09042#6 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite feeling he had underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 6 | . . . . . . . . . . . . . . . . . . . . . A.4 Additional Details for Safety . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.5 Data Annotation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.6 Dataset Contamination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.7 Model Card . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 5 5 7 8 9 9 16 17 20 20 23 28 29 32 32 34 35 35 36 46 46 47 51 58 72 75 77 | 2307.09288#6 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
According to EI theories (Mayer et al., 2016; Mayer & Salovey, 1995; Salovey & Mayer, 1990), emotion understanding (EU) is a fundamental component of EI, which serves as a subscale in MSCEIT. EU refers to the ability to recognize, interpret, and understand emotions in a social context, which lays the groundwork for effective communication, empathy, and social interaction (Mayer et al., 2016). Specifically, a test of EU is suitable for measuring the empathy of LLMs because they do not possess internal emotional states or experiences, and therefore must rely solely on accurately understanding and interpreting the social context to create more engaging
and empathetic interactions. | 2307.09042#7 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite feeling he had underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 7 | # 2 Pretraining
# 3 Fine-tuning
# 4 Safety
# 5 Discussion
# 6 Related Work
# 7 Conclusion
# A Appendix
[Figure 1: bar chart of helpfulness win rates — Llama-2-70b-chat vs. ChatGPT-0301, Llama-2-70b-chat vs. PaLM-Bison, Llama-2-34b-chat vs. Falcon-40b-instruct, Llama-2-34b-chat vs. Vicuna-33b-v1.3, Llama-2-13b-chat vs. Vicuna-13b-v1.1, Llama-2-7b-chat vs. MPT-7b-chat; x-axis: % Win Rate]
[Figure 2: scatter plot of Safety Win Rate vs. Helpfulness Win Rate for ChatGPT-0301, PaLM-Bison, and Falcon-40b-instruct, each vs. Llama 2 (70b); judge: GPT-4] | 2307.09288#7 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 8 | and empathetic interactions.
In this study, we first developed a standardized EU test suitable for both humans and LLMs, termed the Situational Evaluation of Complex Emotional Understanding (SECEU). Data from more than 500 young adults were collected to establish a norm for the SECEU. Then, we evaluated a variety of mainstream and popular LLMs, including the OpenAI GPT series (GPT-4, GPT-3.5-turbo, Curie, Babbage, DaVinci, text-davinci-001, text-davinci-002, and text-davinci-003), Claude, LLaMA-based models (Alpaca, Koala, LLaMA, and Vicuna), Fastchat, Pythia-based models (Dolly and Oasst), GLM-based models (ChatGLM), and RWKV (Recurrent Weighted Key-Value) with the SECEU. Finally, we standardized the LLMs' scores against the norm, allowing for direct comparison with humans. We also compared the multivariate response patterns of the LLMs and human participants to assess their representational similarity.
# Results
# The development of a standardized test on EU | 2307.09042#8 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite feeling he had underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
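The chunk in the record above describes two analyses: standardizing LLM scores against a human norm and comparing multivariate response patterns between LLMs and humans. Below is a minimal sketch of what these could look like, assuming an EQ-style scale (mean 100, SD 15) and Pearson correlation as the similarity measure; neither choice is specified in this excerpt.

```python
import numpy as np

def standardize_eq(model_score, human_scores, mean=100.0, sd=15.0):
    """Map a model's raw test score onto an EQ-like scale relative to the
    human norm. The 100/15 scaling is the usual IQ/EQ convention, assumed
    here rather than taken from the excerpt. Lower raw scores (distances
    from the standard answers) are treated as better, hence the minus sign;
    this sign convention is also an assumption."""
    z = (model_score - np.mean(human_scores)) / np.std(human_scores)
    return mean - sd * z

def pattern_similarity(model_pattern, human_mean_pattern):
    """Pearson correlation between a model's item-by-emotion response
    matrix and the human-average matrix, flattened to vectors."""
    a = np.asarray(model_pattern, dtype=float).ravel()
    b = np.asarray(human_mean_pattern, dtype=float).ravel()
    return np.corrcoef(a, b)[0, 1]

# Toy usage with simulated data (not results from the paper).
rng = np.random.default_rng(0)
human_scores = rng.normal(2.5, 0.8, size=500)   # hypothetical raw scores of 500 adults
print(standardize_eq(1.6, human_scores))        # lower distance -> above-average EQ
print(pattern_similarity(rng.random((40, 4)), rng.random((40, 4))))
```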
Figure 1: Helpfulness human evaluation results for Llama 2-Chat compared to other open-source and closed-source models. Human raters compared model generations on ~4k prompts consisting of both single and multi-turn prompts. The 95% confidence intervals for this evaluation are between 1% and 2%. More details in Section 3.4.2. While reviewing these results, it is important to note that human evaluations can be noisy due to limitations of the prompt set, subjectivity of the review guidelines, subjectivity of individual raters, and the inherent difficulty of comparing generations.
Figure 2: Win-rate % for helpfulness and safety between commercial-licensed baselines and Llama 2-Chat, according to GPT-4. To complement the human evaluation, we used a more capable model, not subject to our own guidance. Green area indicates our model is better according to GPT-4. To remove ties, we used win/(win + loss). The orders in which the model responses are presented to GPT-4 are randomly swapped to alleviate bias.
# 1 Introduction | 2307.09288#8 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
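The Figure 2 caption in the record above states that ties were removed by computing win/(win + loss). A small illustration of that calculation; the 'win'/'loss'/'tie' string encoding per prompt is an assumed representation, not from the paper.

```python
from collections import Counter

def win_rate_excluding_ties(outcomes):
    """Win rate with ties removed, i.e. win / (win + loss), as described in
    the Figure 2 caption. `outcomes` holds one pairwise judgment per prompt;
    the string labels are an assumed encoding."""
    counts = Counter(outcomes)
    decided = counts["win"] + counts["loss"]
    return counts["win"] / decided if decided else float("nan")

print(win_rate_excluding_ties(["win", "tie", "loss", "win", "win"]))  # 0.75
```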
2307.09042 | 9 | # Results
# The development of a standardized test on EU
The SECEU is designed to measure EU and comprises 40 items (see https://emotional-intelligence.github.io/ for both English and Chinese versions). Each item describes a scenario set in a school, family, or social context, with twists and turns designed to evoke a mixture of positive and negative emotions (e.g., "Wang participated in a mathematics competition but felt he had not performed to his full potential. However, when the results were announced, he found that he had placed in the top 10."). The scenarios feature a varying number of characters, and emotions can be self-directed, other-directed, or both. For each scenario, the four most plausible emotions (e.g., surprised, joyful, puzzled, proud) are listed. Participants were asked to rate the intensity of each emotion with numbers that sum to 10 (e.g., 3, 3, 1, 3, indicating a multifaceted emotional response comprising 30% surprise, 30% joy, 10% puzzlement, and 30% pride). Fig. 1 shows exemplars of the SECEU test and the
standard scores obtained by averaging answers across participants.
| 2307.09042#9 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite feeling he had underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 9 | # 1 Introduction
Large Language Models (LLMs) have shown great promise as highly capable AI assistants that excel in complex reasoning tasks requiring expert knowledge across a wide range of fields, including in specialized domains such as programming and creative writing. They enable interaction with humans through intuitive chat interfaces, which has led to rapid and widespread adoption among the general public.
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
standard scores obtained by averaging answers across participants.
[Figure content] The SECEU test and the standard scores. Item 1 — Options: (1) Expectant (2) Excited (3) Joyful (4) Frustrated. Story: The airplane model that Wang made fell from the sky one minute after take-off. When she inspected the model, she found a part that could possibly be improved. At this moment, she would feel: ___. Item 2 — Options: (1) Desperate (2) Fear (3) Helpless (4) Sad. Story: Although Aunt Li pays close attention to her lifestyle and diet, she still got a malignant tumor. The chances of curing the tumor now are quite slim, and Aunt Li can do nothing about it. She would feel: ___. The standard scores form a 40 × 4 matrix.
Figure 1: Exemplars of the SECEU test and the standard scores from the population. For the whole set of the test, see: https://emotional-intelligence.github.io/
Under the assumption that groups possess collective knowledge of emotion | 2307.09042#10 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite feeling he had underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 10 | The capabilities of LLMs are remarkable considering the seemingly straightforward nature of the training methodology. Auto-regressive transformers are pretrained on an extensive corpus of self-supervised data, followed by alignment with human preferences via techniques such as Reinforcement Learning with Human Feedback (RLHF). Although the training methodology is simple, high computational requirements have limited the development of LLMs to a few players. There have been public releases of pretrained LLMs (such as BLOOM (Scao et al., 2022), LLaMa-1 (Touvron et al., 2023), and Falcon (Penedo et al., 2023)) that match the performance of closed pretrained competitors like GPT-3 (Brown et al., 2020) and Chinchilla (Hoffmann et al., 2022), but none of these models are suitable substitutes for closed "product" LLMs, such as ChatGPT, BARD, and Claude. These closed product LLMs are heavily fine-tuned to align with human preferences, which greatly enhances their usability and safety. This step can require significant costs in compute and human annotation, and is often not transparent or easily reproducible, limiting progress within the community to advance AI alignment research. | 2307.09288#10 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 11 | Under the assumption that groups possess collective knowledge of emotion
(Legree et al., 2005), we adopted a consensus scoring method to standardize the SECEU (Palmer et al., 2005). To do this, we administered the SECEU to a large sample of undergraduate and postgraduate students (N = 541; females: 339, males: 202; mean age: 22.33, SD: 2.49, ranging from 17 to 30 years). Then, we calculated the standard score for each emotion of each item by averaging corresponding scores across the participants. Accordingly, we measured each participant's EU ability on each item by calculating the Euclidean distance between the participant's individual score and the standard score derived from the whole group for each item, with smaller distances indicating better EU ability. This analysis revealed significant variance in individual differences in EU ability (M = 2.79, SD = 0.822, range from 1.40 to 6.29), suggesting that the SECEU is well-suited to serve as a discriminative test for assessing EU in a general population.
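As a concrete illustration of this scoring pipeline, a minimal Python sketch (with synthetic stand-in ratings; the array shapes and point scale are assumptions, not the released materials) might look like:

```python
import numpy as np

# Synthetic stand-in ratings: (participants, items, options) = (541, 40, 4),
# each option scored on a 0-10 scale. Real SECEU data would replace this.
rng = np.random.default_rng(0)
responses = rng.uniform(0, 10, size=(541, 40, 4))

# Consensus "standard score": the population mean per option of each item.
standard = responses.mean(axis=0)                        # shape (40, 4)

# A participant's score per item is the Euclidean distance to the standard
# score; smaller distances indicate better emotion understanding.
dist = np.linalg.norm(responses - standard, axis=2)      # shape (541, 40)
seceu = dist.mean(axis=1)                                # one score per person
print(f"M = {seceu.mean():.2f}, SD = {seceu.std(ddof=1):.2f}")
```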
To evaluate the reliability of the SECEU, we assessed the internal consistency of participants' performance (i.e., the Euclidean distance) across the 40 items, and | 2307.09042#11 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 11 | In this work, we develop and release Llama 2, a family of pretrained and fine-tuned LLMs, Llama 2 and Llama 2-Chat, at scales up to 70B parameters. On the series of helpfulness and safety benchmarks we tested, Llama 2-Chat models generally perform better than existing open-source models. They also appear to be on par with some of the closed-source models, at least on the human evaluations we performed (see Figures 1 and 3). We have taken measures to increase the safety of these models, using safety-specific data annotation and tuning, as well as conducting red-teaming and employing iterative evaluations. Additionally, this paper contributes a thorough description of our fine-tuning methodology and approach to improving LLM safety. We hope that this openness will enable the community to reproduce fine-tuned LLMs and continue to improve the safety of those models, paving the way for more responsible development of LLMs. We also share novel observations we made during the development of Llama 2 and Llama 2-Chat, such as the emergence of tool usage and temporal organization of knowledge.
| 2307.09288#11 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 12 | To evaluate the reliability of the SECEU, we assessed the internal consistency of participants' performance (i.e., the Euclidean distance) across the 40 items, and
revealed a high reliability of the test (Cronbach's α = 0.94). We further examined the distribution of participants' performance on each item (Fig. S1) and found no evidence of ceiling or floor effects, with mean distances varying from 2.19 to 3.32 and SD ranging from 1.13 to 1.82. In addition, there was no significant sex difference (male: 2.85, female: 2.76, t(539) = 1.34, p = 0.18).
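For reference, the internal-consistency statistic can be computed from the participants x items distance matrix as below (a minimal sketch with stand-in data; `dist` is the hypothetical matrix from the sketch above):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (participants x items) score matrix."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
dist = rng.uniform(1, 6, size=(541, 40))          # stand-in distance matrix
print(f"alpha = {cronbach_alpha(dist):.2f}")
```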
To evaluate the validity of the SECEU, we invited three experts known for their high EI to take the test. All experts' performance exceeded at least 73% of the population, indicating that the test is effective in differentiating experts from the general population. Specifically, the average score of the three experts exceeded 99% of the whole population tested, further confirming the validity of using the consensus scoring method in standardizing the SECEU. | 2307.09042#12 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 12 | [Figure 3 bar chart: violation percentage, roughly 0-45% on the y-axis, for Llama-2 7b-chat, 13b-chat, 34b-chat, 70b-chat, MPT 7b-chat, Vicuna 13b-v1.1, Vicuna 33b-v1.3, Falcon 40b-instruct, PaLM Bison, and ChatGPT 0301]
Figure 3: Safety human evaluation results for Llama 2-Chat compared to other open-source and closed- source models. Human raters judged model generations for safety violations across ~2,000 adversarial prompts consisting of both single and multi-turn prompts. More details can be found in Section 4.4. It is important to caveat these safety results with the inherent bias of LLM evaluations due to limitations of the prompt set, subjectivity of the review guidelines, and subjectivity of individual raters. Additionally, these safety evaluations are performed using content standards that are likely to be biased towards the Llama 2-Chat models.
We are releasing the following models to the general public for research and commercial use‡: | 2307.09288#12 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 13 | Finally, we constructed the norm for EU by converting participants' raw scores in the SECEU into standard EQ (Emotional Quotient) scores, designed to follow a normal distribution with the average score set at 100 and standard deviation at 15. In practical terms, an individual with an EQ of 100 possesses an EU ability corresponding to the population average. Meanwhile, an individual with an EQ of 115 outperforms approximately 84% of the population (i.e., one SD above the population average), and an individual with an EQ score of 130 exceeds 97.7% of the population.
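The exact transform is not spelled out in this excerpt, but a linear standardization consistent with the description (mean 100, SD 15, and a sign flip because smaller distances mean better EU) would be:

```python
NORM_MEAN, NORM_SD = 2.79, 0.822   # population statistics reported above

def to_eq(raw_distance: float) -> float:
    """Map a raw SECEU distance onto the EQ scale (mean 100, SD 15)."""
    z = (raw_distance - NORM_MEAN) / NORM_SD
    return 100.0 - 15.0 * z        # smaller distance -> higher EQ

for d in (1.40, 2.79, 6.29):       # best, average, and worst raw scores
    print(f"raw {d:.2f} -> EQ {to_eq(d):.1f}")
```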
# The assessment of LLMs' EQ | 2307.09042#13 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 13 | We are releasing the following models to the general public for research and commercial use‡:
1. Llama 2, an updated version of Llama 1, trained on a new mix of publicly available data. We also increased the size of the pretraining corpus by 40%, doubled the context length of the model, and adopted grouped-query attention (Ainslie et al., 2023). We are releasing variants of Llama 2 with 7B, 13B, and 70B parameters. We have also trained 34B variants, which we report on in this paper but are not releasing.§
2. Llama 2-Chat, a fine-tuned version of Llama 2 that is optimized for dialogue use cases. We release variants of this model with 7B, 13B, and 70B parameters as well. | 2307.09288#13 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 14 | # The assessment of LLMs' EQ
We evaluated a variety of mainstream LLMs using the SECEU, and then standardized their scores based on the norm of the human participants for a direct comparison between LLMs and humans. These LLMs included the OpenAI GPT series (GPT-4, GPT-3.5-turbo, Curie, Babbage, DaVinci, text-davinci-001, text-davinci-002, and text-davinci-003), Claude, as well as open-source models such as LLaMA-based models (Alpaca, Koala, LLaMA, and Vicuna), Pythia-based models (Dolly and Oasst), GLM-based models (ChatGLM), and Fastchat. Recurrent Weighted Key-Value (RWKV), which utilizes recurrent neural networks (RNNs) instead of transformers, was also
included. | 2307.09042#14 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 14 | We believe that the open release of LLMs, when done safely, will be a net benefit to society. Like all LLMs, Llama 2 is a new technology that carries potential risks with use (Bender et al., 2021b; Weidinger et al., 2021; Solaiman et al., 2023). Testing conducted to date has been in English and has not, and could not, cover all scenarios. Therefore, before deploying any applications of Llama 2-Chat, developers should perform safety testing and tuning tailored to their specific applications of the model. We provide a responsible use guide¶ and code examples‖ to facilitate the safe deployment of Llama 2 and Llama 2-Chat. More details of our responsible release strategy can be found in Section 5.3.
The remainder of this paper describes our pretraining methodology (Section 2), fine-tuning methodology (Section 3), approach to model safety (Section 4), key observations and insights (Section 5), relevant related work (Section 6), and conclusions (Section 7).
‡https://ai.meta.com/resources/models-and-libraries/llama/ §We are delaying the release of the 34B model due to a lack of time to sufficiently red team. ¶https://ai.meta.com/llama ‖https://github.com/facebookresearch/llama
| 2307.09288#14 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 15 | included.
Some models, including LLaMA, Fastchat, and RWKV-v4, were unable to complete the test even with the assistance of prompts (Table 1). A few LLMs, including DaVinci, Curie, Babbage, text-davinci-001, and text-davinci-002, managed to complete the test with prompts such as Two-shot Chain of Thought (COT) and Step-by-Step prompts (see Supplementary for the prompt engineering). In addition, other models, such as text-davinci-003, were able to complete the test, but their performance was significantly improved with prompts. Here, we only included models' best performance to examine how closely they can approach human-level performance under ideal conditions (Table 1; see also Tables S1 & S2). To directly compare to human participants, the performance of each model was standardized by calculating the Euclidean distance between the model's responses and the standard scores of humans, which was then normalized into an EQ score (Table 1). Finally, these LLMs were categorized as either expert (above 115), normal (between 85 and 115), or poor (below 85) based on their EQ scores.
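Under the same assumptions as the human scoring sketches above, a model's EQ and the share of humans it exceeds can be computed as (a hedged sketch with stand-in data):

```python
import numpy as np

NORM_MEAN, NORM_SD = 2.79, 0.822          # human norm statistics from above

def model_eq(model_answers, standard):
    """EQ from a model's (40, 4) answers and the (40, 4) human standard."""
    dist = np.linalg.norm(np.asarray(model_answers) - standard, axis=1)
    return 100.0 - 15.0 * (dist.mean() - NORM_MEAN) / NORM_SD

def percent_exceeded(eq: float, human_eqs) -> float:
    """Percentage of human participants whose EQ falls below the model's."""
    return float((np.asarray(human_eqs) < eq).mean() * 100)

rng = np.random.default_rng(0)
standard = rng.uniform(0, 10, (40, 4))    # stand-in human standard scores
answers = standard + rng.normal(scale=1.0, size=(40, 4))
human_eqs = rng.normal(100, 15, 541)      # stand-in human EQ distribution
eq = model_eq(answers, standard)
print(f"EQ = {eq:.0f}, exceeds {percent_exceeded(eq, human_eqs):.0f}% of humans")
```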
| 2307.09042#15 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 15 |
[Figure 4 diagram: pretraining data feeds Llama 2 pretraining; supervised fine-tuning produces an initial Llama 2-Chat; human preference data trains safety and helpful reward models, which drive rejection sampling and proximal policy optimization in the RLHF loop]
Figure 4: Training of Llama 2-Chat: This process begins with the pretraining of Llama 2 using publicly available online sources. Following this, we create an initial version of Llama 2-Chat through the application of supervised fine-tuning. Subsequently, the model is iteratively refined using Reinforcement Learning with Human Feedback (RLHF) methodologies, specifically through rejection sampling and Proximal Policy Optimization (PPO). Throughout the RLHF stage, the accumulation of iterative reward modeling data in parallel with model enhancements is crucial to ensure the reward models remain within distribution.
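As a rough illustration of the rejection-sampling step in this loop (a schematic only; `generate` and `reward` below are hypothetical stand-ins for the policy and reward models, not Meta's interfaces):

```python
import random

def generate(prompt: str) -> str:                  # hypothetical policy model
    return f"candidate answer {random.randint(0, 99)} to {prompt!r}"

def reward(prompt: str, response: str) -> float:   # hypothetical reward model
    return random.random()

def rejection_sample(prompt: str, k: int = 8) -> str:
    """Sample k candidates and keep the highest-reward one; the kept
    (prompt, best) pairs then serve as fine-tuning targets."""
    candidates = [generate(prompt) for _ in range(k)]
    return max(candidates, key=lambda c: reward(prompt, c))

print(rejection_sample("Explain RLHF in one sentence."))
```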
# 2 Pretraining | 2307.09288#15 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 16 | EQ scores.
[Figure 2 annotations: GPT-4 (EQ = 117, > 89%), Vicuna (EQ = 105, > 59%), GPT-3.5-turbo (EQ = 103, > 52%), ChatGLM (EQ = 94, > 28%), Koala (EQ = 83, > 13%); y-axis: EQ score (40-115); x-axis: Percentage of humans (%)]
Figure 2: LLMs' EQ. The light-grey histogram represents the distribution of human participants' EQ scores, with the y-axis indicating the EQ score and the x-axis showing the percentage of total
participants. The grey kernel density estimation (KDE) line demonstrates the probability density of the EQ scores. Key points are highlighted with colored square markers for LLMs (e.g., GPT-4's EQ score is 117, marked by the purple square, exceeding 89% of the human participants). For simplicity, here we only present the performance from GPT-4, Vicuna, GPT-3.5-turbo, ChatGLM, and Koala.
Table 1: LLMs' EQ, representational patterns, and properties | 2307.09042#16 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 16 | # 2 Pretraining
To create the new family of Llama 2 models, we began with the pretraining approach described in Touvron et al. (2023), using an optimized auto-regressive transformer, but made several changes to improve performance. Specifically, we performed more robust data cleaning, updated our data mixes, trained on 40% more total tokens, doubled the context length, and used grouped-query attention (GQA) to improve inference scalability for our larger models. Table 1 compares the attributes of the new Llama 2 models with the Llama 1 models.
# 2.1 Pretraining Data
Our training corpus includes a new mix of data from publicly available sources, which does not include data from Meta's products or services. We made an effort to remove data from certain sites known to contain a high volume of personal information about private individuals. We trained on 2 trillion tokens of data as this provides a good performance-cost trade-off, up-sampling the most factual sources in an effort to increase knowledge and dampen hallucinations.
We performed a variety of pretraining data investigations so that users can better understand the potential capabilities and limitations of our models; results can be found in Section 4.1.
# 2.2 Training Details | 2307.09288#16 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 17 | Model               SECEU score   EQ    EQ %   Pattern Similarity r   %     Size      Time
DaVinci #           3.5           87    18%    0.41**                 91%   175B      2020/05
Curie #             2.7           102   50%    0.11                   29%   13B       Unknown
Babbage #           2.78          100   44%    -0.12                  4%    3B        Unknown
text-davinci-001 #  2.4           107   64%    0.2                    47%   <175B     Unknown
text-davinci-002 #  3.3           91    23%    -0.04                  8%    <175B     Unknown
text-davinci-003 ## 2.01          114   83%    0.31*                  73%   175B      2022/11/28
GPT-3.5-turbo       2.63          103   52%    0.04                   17%   175B      2022/11/30
GPT-4               1.89          117   89%    0.28                   67%   Unknown   2023/03/14
LLaMA               ------FAILED------                                      13B       2023/02/24
Alpaca              2.56          104   56%    0.03                   15%   13B       2023/03/09
Vicuna              2.5           105   59%    -0.02                  10%   13B       2023/03/30
Koala               3.72          83    13%    0.43**                 93%   13B       2023/04/03
Fastchat            ------FAILED------                                      3B        2023/04/30
Dolly | 2307.09042#17 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 17 | We performed a variety of pretraining data investigations so that users can better understand the potential capabilities and limitations of our models; results can be found in Section 4.1.
# 2.2 Training Details
We adopt most of the pretraining setting and model architecture from Llama 1. We use the standard transformer architecture (Vaswani et al., 2017), apply pre-normalization using RMSNorm (Zhang and Sennrich, 2019), use the SwiGLU activation function (Shazeer, 2020), and rotary positional embeddings (RoPE, Su et al. 2022). The primary architectural differences from Llama 1 include increased context length and grouped-query attention (GQA). We detail in Appendix Section A.2.1 each of these differences with ablation experiments to demonstrate their importance.
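To make the GQA difference concrete, here is a minimal single-token numpy sketch (an illustration of the mechanism, not the released implementation): groups of query heads share one key/value head, which shrinks the KV cache.

```python
import numpy as np

def gqa_single_token(q, k, v):
    """q: (n_heads, d) queries for one token; k, v: (n_kv_heads, seq, d).
    Each group of n_heads // n_kv_heads query heads shares one KV head."""
    n_heads, d = q.shape
    group = n_heads // k.shape[0]
    out = np.empty_like(q)
    for h in range(n_heads):
        kh, vh = k[h // group], v[h // group]      # shared key/value head
        scores = kh @ q[h] / np.sqrt(d)            # (seq,) attention logits
        w = np.exp(scores - scores.max())
        w /= w.sum()                               # softmax over the context
        out[h] = w @ vh                            # weighted value average
    return out

q = np.random.randn(8, 64)                         # 8 query heads
k = np.random.randn(2, 16, 64)                     # only 2 KV heads
v = np.random.randn(2, 16, 64)
print(gqa_single_token(q, k, v).shape)             # (8, 64)
```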
Hyperparameters. We trained using the AdamW optimizer (Loshchilov and Hutter, 2017), with β1 = 0.9, β2 = 0.95, eps = 10^-5. We use a cosine learning rate schedule, with warmup of 2000 steps, and decay final learning rate down to 10% of the peak learning rate. We use a weight decay of 0.1 and gradient clipping of 1.0. Figure 5 (a) shows the training loss for Llama 2 with these hyperparameters.
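A PyTorch sketch of this optimizer configuration (the schedule shape follows the description above; the model and step budget are placeholders, not Meta's training code):

```python
import math
import torch

model = torch.nn.Linear(8, 8)                      # placeholder for the LLM
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4,
                              betas=(0.9, 0.95), eps=1e-5, weight_decay=0.1)

warmup, total_steps = 2000, 500_000                # hypothetical step budget

def lr_scale(step: int) -> float:
    if step < warmup:
        return step / warmup                       # linear warmup
    progress = (step - warmup) / (total_steps - warmup)
    # cosine decay from the peak LR down to 10% of the peak
    return 0.1 + 0.45 * (1.0 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_scale)

# inside the training loop, after loss.backward():
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
scheduler.step()
optimizer.zero_grad()
```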
| 2307.09288#17 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09288 | 18 |
Model     Training Data                Params   Context Length   GQA   Tokens   LR
Llama 1   See Touvron et al. (2023)    7B       2k               --    1.0T     3.0 x 10^-4
Llama 1                                13B      2k               --    1.0T     3.0 x 10^-4
Llama 1                                33B      2k               --    1.4T     1.5 x 10^-4
Llama 1                                65B      2k               --    1.4T     1.5 x 10^-4
Llama 2   A new mix of publicly        7B       4k               --    2.0T     3.0 x 10^-4
Llama 2   available online data        13B      4k               --    2.0T     3.0 x 10^-4
Llama 2                                34B      4k               yes   2.0T     1.5 x 10^-4
Llama 2                                70B      4k               yes   2.0T     1.5 x 10^-4
Table 1: Llama 2 family of models. Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models (34B and 70B) use Grouped-Query Attention (GQA) for improved inference scalability.
[Figure 5(a): training loss (train PPL, from about 2.2 down to about 1.4) versus processed tokens (0-2000 billion) for Llama 2] | 2307.09288#18 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 19 | Based Models (Table 1 column): OpenAI GPT series, LLaMA, Flan-t5, Pythia, GLM, RWKV, Claude
Table 1 Footnote: Table 1 shows the SECEU scores, EQ scores, representational pattern similarity, and properties of mainstream LLMs evaluated in the current study. #: models require prompts to complete the test. ##: models' performance benefits from prompts. Failed: even with prompts, the LLMs cannot complete the test. %: The percent of humans whose performance was below that of an LLM in the test. Pattern Similarity: The degree of similarity is indexed by the Pearson correlation coefficient (r). *: p < 0.05; **: p < 0.01. Size: The parameter size of LLMs in the unit of billions (B).
Time: The launch time in the format YYYY/MM/DD. SFT: Supervised fine-tuning; RLHF: Reinforcement learning from human feedback; ✓: yes; ×: no.
The results revealed a substantial variation in EU among the LLMs tested (Fig. 2). Within the OpenAI GPT series, GPT-4 achieved the highest EQ of 117, exceeding 89% of humans. In contrast, DaVinci scored the lowest, with an EQ of 87, only outperforming 18% of humans. | 2307.09042#19 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 19 | [Figure 5 data: training-loss curves for the Llama 2 family; y-axis Train PPL (approx. 1.4-2.2), x-axis Processed Tokens in billions (0-2000).]
Figure 5: Training Loss for Llama 2 models. We compare the training loss of the Llama 2 family of models. We observe that after pretraining on 2T tokens, the models still did not show any sign of saturation.
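For reference, the train perplexity (PPL) plotted above is the exponentiated mean per-token negative log-likelihood, so e.g. a PPL of 1.5 corresponds to an average cross-entropy of ln 1.5 ≈ 0.41 nats per token:

$$\mathrm{PPL} = \exp\left(-\frac{1}{N}\sum_{i=1}^{N}\log p_\theta(x_i \mid x_{<i})\right)$$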
Tokenizer. We use the same tokenizer as Llama 1; it employs a byte-pair encoding (BPE) algorithm (Sennrich et al., 2016) using the implementation from SentencePiece (Kudo and Richardson, 2018). As with Llama 1, we split all numbers into individual digits and use bytes to decompose unknown UTF-8 characters. The total vocabulary size is 32k tokens.
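A sketch of how a tokenizer with these three properties (BPE, digit splitting, byte fallback) could be reproduced with SentencePiece's trainer options; the corpus path and settings shown are illustrative assumptions, not the paper's actual configuration:

```python
import sentencepiece as spm

# Train a 32k BPE model; `split_digits` splits numbers into single digits and
# `byte_fallback` decomposes unknown UTF-8 characters into bytes, mirroring the
# two properties described above. "corpus.txt" is a placeholder path.
spm.SentencePieceTrainer.train(
    input="corpus.txt",
    model_prefix="bpe32k",
    model_type="bpe",
    vocab_size=32000,
    split_digits=True,
    byte_fallback=True,
)

sp = spm.SentencePieceProcessor(model_file="bpe32k.model")
print(sp.encode("trained on 2000 GPUs", out_type=str))  # digits tokenize one by one
```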
# 2.2.1 Training Hardware & Carbon Footprint | 2307.09288#19 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 20 | The LLaMA-based models generally scored lower than the OpenAI GPT series, with Alpaca and Vicuna achieving the highest EQ of 104 and 105, respectively. Conversely, Koala showed the poorest performance, with an EQ score of 83, only surpassing 13% of humans. The base model LLaMA was unable to complete the test. Other models, such as Oasst (EQ: 107), Dolly (EQ: 98), ChatGLM (EQ: 94), and Claude (EQ: 106), all fell within the normal range.
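As a rough cross-check of these percentile claims, assuming the EQ scale follows the conventional standardized scoring (mean 100, SD 15; the paper's percentages come from the empirical human sample, so a normal approximation is only indicative):

```python
from statistics import NormalDist

# Map an EQ score to an approximate percentile under N(100, 15).
eq_norm = NormalDist(mu=100, sigma=15)
for eq in (83, 87, 104, 117):
    print(f"EQ {eq} -> ~{eq_norm.cdf(eq) * 100:.0f}%")
# EQ 83 -> ~13%, EQ 87 -> ~19%, EQ 117 -> ~87%;
# close to the reported 13% (Koala), 18% (DaVinci), and 89% (GPT-4).
```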
In short, the majority of the LLMs tested showed satisfactory EU scores, comparable to those of the average human population. Specifically, GPT-4 reached the
expert level of humans.
# The assessment of LLMs' representational pattern
The measurement of the LLMs' EQ scores provides an index of their EU ability within the reference frame of humans. A further question is whether they employ human-like mechanisms to evaluate complex emotions in scenarios. The univariate analysis used to compare EQ scores between human participants and LLMs only suggests weak equivalence, as a model could achieve a high EQ score using mechanisms that qualitatively differ from humans. Therefore, to establish strong equivalence between the LLMs and humans, we examined whether they employed similar representations to conduct the test. | 2307.09042#20 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 20 | # 2.2.1 Training Hardware & Carbon Footprint
Training Hardware. We pretrained our models on Meta's Research Super Cluster (RSC) (Lee and Sengupta, 2022) as well as internal production clusters. Both clusters use NVIDIA A100s. There are two key differences between the two clusters, with the first being the type of interconnect available: RSC uses NVIDIA Quantum InfiniBand while our production cluster is equipped with a RoCE (RDMA over converged Ethernet) solution based on commodity Ethernet switches. Both of these solutions interconnect 200 Gbps end-points. The second difference is the per-GPU power consumption cap: RSC uses 400W while our production cluster uses 350W. With this two-cluster setup, we were able to compare the suitability of these different types of interconnect for large scale training. RoCE (which is a more affordable, commercial interconnect network)
| Model       | Time (GPU hours) | Power Consumption (W) | Carbon Emitted (tCO2eq) |
|-------------|------------------|-----------------------|-------------------------|
| Llama 2 7B  | 184320           | 400                   | 31.22                   |
| Llama 2 13B | 368640           | 400                   | 62.44                   |
| Llama 2 34B | 1038336          | 350                   | 153.90                  |
| Llama 2 70B | 1720320          | 400                   | 291.42                  |
| Total       | 3311616          |                       | 539.00                  |

| 2307.09288#20 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 21 | One approach is to use the item-wise correlation analysis (Izard & Spelke, 2009; Tian et al., 2020) to compare response patterns between the LLMs and human
participants. To do this, we first constructed a multi-item discriminability vector (i.e., an item-wise response pattern) for each participant by using the distance of each item to the standard score, and thus this vector's length corresponded to the number of items (i.e., 40). Then, we created a template of response patterns by averaging the multi-item discriminability patterns across all human participants, along with the distribution of the correlation coefficients between each participant's response pattern and the template (Human-to-Human similarity) to serve as a norm for pattern similarity (M = 0.199, SD = 0.166). Finally, we quantified the similarity in response patterns between the LLMs and humans by calculating the correlation coefficient between the multi-item discriminability vector of each LLM and the human template (LLM-to-Human, Table 1). An LLM whose LLM-to-Human correlation coefficient falls more than one SD below the mean of the Human-to-Human distribution is considered as employing a qualitatively
different mechanism from humans in EU. | 2307.09042#21 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 21 | Table 2: CO2 emissions during pretraining. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used, adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
can scale almost as well as expensive InfiniBand up to 2000 GPUs, which makes pretraining even more democratizable.
Carbon Footprint of Pretraining. Following preceding research (Bender et al., 2021a; Patterson et al., 2021; Wu et al., 2022; Dodge et al., 2022) and using power consumption estimates of GPU devices and carbon efficiency, we aim to calculate the carbon emissions resulting from the pretraining of Llama 2 models. The actual power usage of a GPU is dependent on its utilization and is likely to vary from the Thermal Design Power (TDP) that we employ as an estimation for GPU power. It is important to note that our calculations do not account for further power demands, such as those from interconnect or non-GPU server power consumption, nor from datacenter cooling systems. Additionally, the carbon output related to the production of AI hardware, like GPUs, could add to the overall carbon footprint as suggested by Gupta et al. (2022b,a). | 2307.09288#21 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 22 | different mechanism from humans in EU.
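A minimal NumPy sketch of the item-wise pattern-similarity analysis described above, with random data standing in for the real 40-item response vectors (names and data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_humans, n_items = 500, 40

# Stand-ins for per-item distances to the standard (population) scores
human_vecs = rng.normal(size=(n_humans, n_items))
llm_vec = rng.normal(size=n_items)

template = human_vecs.mean(axis=0)  # average human response pattern

def pearson(a, b):
    return np.corrcoef(a, b)[0, 1]

h2h = np.array([pearson(v, template) for v in human_vecs])  # Human-to-Human norm
llm_r = pearson(llm_vec, template)                          # LLM-to-Human similarity

threshold = h2h.mean() - h2h.std()  # one SD below the human mean
print(llm_r, "qualitatively different" if llm_r < threshold else "human-like")
```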
[Figure 3 data: highlighted points Koala (r = 0.43**, > 93%), GPT-4 (r = 0.28, > 67%), ChatGLM (r = 0.09, > 24%), GPT-3.5-turbo (r = 0.04, > 17%), Vicuna (r = -0.02, > 10%); y-axis: Pearson correlation coefficient (-0.4 to 0.6), x-axis: percentage of humans (%).]
Figure 3: The pattern similarity between LLMs and humans. The light-grey histogram represents the distribution of Human-to-Human pattern similarity, with the y-axis indicating the Pearson correlation coefficients and the x-axis showing the percentage of total participants. The KDE line
demonstrates the probability density of the Pearson correlation coefficient. Key points are highlighted with colored square markers for LLMs. For simplicity, here we only present the performance from GPT-4, Vicuna, GPT-3.5-turbo, ChatGLM, and Koala. **: p < 0.01. | 2307.09042#22 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 22 | Table 2 summarizes the carbon emission for pretraining the Llama 2 family of models. A cumulative of 3.3M GPU hours of computation was performed on hardware of type A100-80GB (TDP of 400W or 350W). We estimate the total emissions for training to be 539 tCO2eq, of which 100% were directly offset by Meta's sustainability program. Our open release strategy also means that these pretraining costs will not need to be incurred by other companies, saving more global resources.
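A back-of-the-envelope check of Table 2 under the stated methodology (GPU-hours × per-GPU power × grid carbon intensity); the intensity value is back-solved from the table rather than stated in this excerpt:

```python
# Reproducing Table 2 approximately. The carbon intensity (~0.423 kgCO2eq/kWh)
# is an assumption inferred from the reported numbers.
CARBON_INTENSITY = 0.423  # kgCO2eq per kWh (assumed)

runs = {"7B": (184_320, 400), "13B": (368_640, 400),
        "34B": (1_038_336, 350), "70B": (1_720_320, 400)}

total = 0.0
for name, (gpu_hours, tdp_watts) in runs.items():
    kwh = gpu_hours * tdp_watts / 1000
    t_co2 = kwh * CARBON_INTENSITY / 1000
    total += t_co2
    print(f"{name}: {t_co2:.2f} tCO2eq")   # ~31.2, ~62.4, ~153.7, ~291.1
print(f"Total: {total:.2f} tCO2eq")        # ~538.4, close to the reported 539
```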
# 2.3 Llama 2 Pretrained Model Evaluation
In this section, we report the results for the Llama 1 and Llama 2 base models, MosaicML Pretrained Transformer (MPT) models, and Falcon (Almazrouei et al., 2023) models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library. We reproduce results for the MPT and Falcon models internally. For these models, we always pick the best score between our evaluation framework and any publicly reported results.
In Table 3, we summarize the overall performance across a suite of popular benchmarks. Note that safety benchmarks are shared in Section 4.1. The benchmarks are grouped into the categories listed below. The results for all the individual benchmarks are available in Section A.2.2. | 2307.09288#22 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 23 | Surprisingly, despite its lower performance in the SECEU, Koala showed the highest similarity in representational patterns to humans (r = 0.43, p < 0.01, exceeding 93% of human participants) (Fig. 3). This suggests that Koala may represent emotions in the same way as humans do, as it performed well on items where humans excelled and struggled on items where humans faced challenges. That is, the discrepancies in understanding emotions between Koala and humans are quantitative rather than qualitative. On the other hand, the representational patterns of models such as Babbage, text-davinci-002, Alpaca, and Vicuna differed qualitatively from humans' representational patterns (Babbage: r = -0.12, > 4%; text-davinci-002: r = -0.04, > 8%; Alpaca: r = 0.03, > 15%; Vicuna: r = -0.02, > 10%). This suggests that, despite their above-average EQ scores, these models likely employed mechanisms that are
qualitatively different from human processes. | 2307.09042#23 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 23 | • Code. We report the average pass@1 scores of our models on HumanEval (Chen et al., 2021) and MBPP (Austin et al., 2021) (see the pass@k estimator sketch below).
• Commonsense Reasoning. We report the average of PIQA (Bisk et al., 2020), SIQA (Sap et al., 2019), HellaSwag (Zellers et al., 2019a), WinoGrande (Sakaguchi et al., 2021), ARC easy and challenge (Clark et al., 2018), OpenBookQA (Mihaylov et al., 2018), and CommonsenseQA (Talmor et al., 2018). We report 7-shot results for CommonsenseQA and 0-shot results for all other benchmarks.
• World Knowledge. We evaluate the 5-shot performance on NaturalQuestions (Kwiatkowski et al.,
2019) and TriviaQA (Joshi et al., 2017) and report the average.
• Reading Comprehension. For reading comprehension, we report the 0-shot average on SQuAD
(Rajpurkar et al., 2018), QuAC (Choi et al., 2018), and BoolQ (Clark et al., 2019).
• MATH. We report the average of the GSM8K (8 shot) (Cobbe et al., 2021) and MATH (4 shot) (Hendrycks et al., 2021) benchmarks at top 1. | 2307.09288#23 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
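Regarding the Code bullet above: pass@1 on HumanEval and MBPP is conventionally computed with the unbiased pass@k estimator introduced with HumanEval (Chen et al., 2021); a minimal sketch:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): n samples per problem,
    c of which pass the unit tests."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

print(pass_at_k(n=10, c=3, k=1))  # 0.3 -- with k=1 this reduces to c/n
```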
2307.09042 | 24 | qualitatively different from human processes.
GPT-4, the most advanced model to date, showed high similarity in representational pattern (r = 0.28, > 67%). This result implies that GPT-4 may have significantly changed its architecture or implemented novel training methods to align its EU ability more closely with that of humans. Interestingly, prompts apparently played a critical role in improving representational similarity. With two-shot CoT prompts, DaVinci and text-davinci-003 showed high similarity in representational pattern to humans (DaVinci: r = 0.41, p < 0.01, > 91%; text-davinci-003: r = 0.31, p < 0.05, > 73%), higher than that of GPT-4. Note that without prompts, they failed to complete the SECEU test. In contrast, prompts had little effect on GPT-4 and ChatGPT-3.5.
# Discussion
| 2307.09042#24 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09042 | 25 | # Discussion
Since the debut of ChatGPT, a great number of tasks and benchmarks have been developed to examine its capacities. These empirical evaluations and analyses mainly focus on language generation (e.g., conditional text generation), knowledge utilization (e.g., closed-book and open-book QAs), and complex reasoning (e.g., symbolic and mathematical reasoning) (Zhao et al., 2023). However, tests on the alignment of LLMs with human values and needs, a core requirement for the broad use of LLMs in the real world, are relatively scarce. Here in this study, we used traditional psychometric methods to develop a valid and reliable test of emotional understanding, the SECEU, to evaluate the EI of LLMs. We found that a majority of the LLMs tested performed satisfactorily on the test, achieving above-average EQ scores, although significant individual differences were present across the LLMs. Critically, some LLMs apparently did not rely on a human-like representation to achieve human-level performance, as their representational patterns diverged significantly from human patterns, suggesting a qualitative difference in the underlying mechanisms. In summary, our study provides the first comprehensive psychometric examination of the emotional intelligence of LLMs, which may shed light on the development of future LLMs that embody high
levels of both intellectual and emotional intelligence.
| 2307.09042#25 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 25 |

| Model   | Size | Code | Commonsense Reasoning | World Knowledge | Reading Comprehension | Math | MMLU | BBH  | AGI Eval |
|---------|------|------|-----------------------|-----------------|-----------------------|------|------|------|----------|
| MPT     | 7B   | 20.5 | 57.4                  | 41.0            | 57.5                  | 4.9  | 26.8 | 31.0 | 23.5     |
| MPT     | 30B  | 28.9 | 64.9                  | 50.0            | 64.7                  | 9.1  | 46.9 | 38.0 | 33.8     |
| Falcon  | 7B   | 5.6  | 56.1                  | 42.8            | 36.0                  | 4.6  | 26.2 | 28.0 | 21.2     |
| Falcon  | 40B  | 15.2 | 69.2                  | 56.7            | 65.7                  | 12.6 | 55.4 | 37.1 | 37.0     |
| Llama 1 | 7B   | 14.1 | 60.8                  | 46.2            | 58.5                  | 6.95 | 35.1 | 30.3 | 23.9     |
| Llama 1 | 13B  | 18.9 | 66.1                  | 52.6            | 62.3                  | 10.9 | 46.9 | 37.0 | 33.9     |
| Llama 1 | 33B  | 26.0 | 70.0                  | 58.4            | 67.6                  | 21.4 | 57.8 | 39.8 | 41.7     |
| Llama 1 | 65B  | 30.7 | 70.7                  | 60.5            | 68.6                  | 30.8 | 63.4 | 43.5 | 47.6     |
| Llama 2 | 7B   | 16.8 | 63.9                  | 48.9            | 61.3                  |      |      |      |          |
| Llama 2 | 13B  | 24.5 | 66.9                  | 55.4            | 65.8                  |      |      |      |          |
| Llama 2 | 34B  | 27.8 | 69.9                  | 58.7            | 68.0                  |      |      |      |          |
| Llama 2 | 70B  | 37.5 | 71.9                  | 63.6            |                       |      |      |      |          |

| 2307.09288#25 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 26 | levels of both intellectual and emotional intelligence.
[Figure 4 data: legend shows an EQ color scale with markers at 85, 100, and 115 (exceeding 16%, 50%, and 84% of humans), a parameter-size scale (largest models roughly 100~200B), architecture groups (Transformer, RNN), and a timeline spanning May 2020 to May 2023.]
Figure 4: The family tree of LLMs. Each node in the tree represents an LLM, whose position along the x-axis indicates the launch time. The size of each node corresponds to the parameter size of the LLM. Note that the size of GPT-4 and Claude was estimated based on publicly available information. Color denotes the EQ scores, with red for higher scores and blue for lower scores. White indicates models that failed to complete the SECEU. The color of the branches distinguishes between open-source (light gray) and closed-source (dark gray)
models. | 2307.09042#26 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09042 | 27 | models.
Various factors appear to influence the EQ scores of the LLMs (Fig. 4). The most conspicuous one is the model size, which is essential to emergent abilities (Bubeck et al., 2023), making AI algorithms unprecedentedly powerful and effective. While the larger models generally scored higher in the test, certain smaller models such as Oasst and Alpaca still managed to achieve satisfactory EQ scores. This suggests that factors
beyond the mere size may have a more profound influence on models' EU.
The effectiveness of various training methods, such as supervised training, | 2307.09042#27 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 27 | Table 3: Overall performance on grouped academic benchmarks compared to open-source base models.
• Popular Aggregated Benchmarks. We report the overall results for MMLU (5 shot) (Hendrycks et al., 2020), Big Bench Hard (BBH) (3 shot) (Suzgun et al., 2022), and AGI Eval (3–5 shot) (Zhong et al., 2023). For AGI Eval, we only evaluate on the English tasks and report the average.
As shown in Table 3, Llama 2 models outperform Llama 1 models. In particular, Llama 2 70B improves the results on MMLU and BBH by ≈5 and ≈8 points, respectively, compared to Llama 1 65B. Llama 2 7B and 30B models outperform MPT models of the corresponding size on all categories besides code benchmarks. For the Falcon models, Llama 2 7B and 34B outperform Falcon 7B and 40B models on all categories of benchmarks. Additionally, Llama 2 70B model outperforms all open-source models.
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 28 | beyond the mere size may have a more profound influence on models' EU.
The effectiveness of various training methods, such as supervised training,
reinforcement learning, self-supervised learning, and a combination thereof, likely substantially influences the EQ scores. For example, despite architectural differences (Pythia versus LLaMA), Oasst and Alpaca yielded similar scores in the test, demonstrating the potential of well-implemented fine-tuning techniques. In fact, these enhancements may be achieved through two main avenues. The first involves supervised fine-tuning (SFT), which allows for more structured and targeted fine-tuning, thereby improving models' linguistic ability and their grasp of contextual nuances (Köpf et al., 2023; Taori et al., 2023a). The other approach employs reinforcement learning from human feedback (RLHF), enabling the models to learn from human insights and thereby fostering more human-like responses. Indeed, there is a giant leap in EU from text-davinci-002 (>23%) to text-davinci-003 (>83%), two versions of the same model, with the latter employing RLHF. | 2307.09042#28 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling he had underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 28 | In addition to open-source models, we also compare Llama 2 70B results to closed-source models. As shown in Table 4, Llama 2 70B is close to GPT-3.5 (OpenAI, 2023) on MMLU and GSM8K, but there is a significant gap on coding benchmarks. Llama 2 70B results are on par or better than PaLM (540B) (Chowdhery et al., 2022) on almost all benchmarks. There is still a large gap in performance between Llama 2 70B and GPT-4 and PaLM-2-L.
We also analysed the potential data contamination and share the details in Section A.6.
Benchmark (shots)          | GPT-3.5 | GPT-4 | PaLM | PaLM-2-L | Llama 2
MMLU (5-shot)              | 70.0    | 86.4  | 69.3 | 78.3     | 68.9
TriviaQA (1-shot)          | –       | –     | 81.4 | 86.1     | 85.0
Natural Questions (1-shot) | –       | –     | 29.3 | 37.5     | 33.0
GSM8K (8-shot)             | 57.1    | 92.0  | 56.5 | 80.7     | 56.8
HumanEval (0-shot)         | 48.1    | 67.0  | 26.2 | –        | 29.9
BIG-Bench Hard (3-shot)    | –       | –     | 52.3 | 65.7     | 51.2
| 2307.09288#28 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 29 | Another influential factor is the model architecture. Models using the Transformer architecture, such as the GPT series and the LLaMA-based models, generally performed well in this test. In contrast, models using RNNs, such as RWKV-v4, failed to complete the test even with the help of various prompts. Besides, within the Transformer architecture, the "decoder-only" or causal decoder models (e.g., the GPT series), which generate sequences based solely on a self-attention mechanism (Brown et al., 2020; Vaswani et al., 2017), outperformed the "encoder-decoder" models (e.g., Fastchat-t5), which incorporate an extra step to interpret input data into meaningful representations (Devlin et al., 2019; Zheng et al., 2023).
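To make this architectural contrast concrete, the toy NumPy sketch below illustrates the causal masking that defines decoder-only attention; it is a minimal illustration under our own simplifications, not the implementation of any model evaluated here:

```python
import numpy as np

def causal_attention_weights(queries, keys):
    """Toy scaled dot-product attention with a causal mask.

    queries, keys: arrays of shape (seq_len, d). In a decoder-only (causal)
    model, each position may attend only to itself and earlier positions;
    an encoder, by contrast, sees the whole input at once.
    """
    seq_len, d = queries.shape
    scores = queries @ keys.T / np.sqrt(d)            # (seq_len, seq_len)
    future = np.triu(np.ones((seq_len, seq_len), dtype=bool), 1)
    scores[future] = -np.inf                          # hide future positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return weights / weights.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
w = causal_attention_weights(rng.normal(size=(4, 8)), rng.normal(size=(4, 8)))
print(np.round(w, 2))  # all entries above the diagonal are zero
```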
In summary, our study provides a novel evaluation of the human-like characteristics of LLMs, along with the tests on self-awareness (Kosinski, 2023) and affective computing (Amin et al., 2023). However, because only a limited number of LLMs were tested in this study (results on more LLMs will be continuously updated at https://emotional-intelligence.github.io/), our findings are biased and inconclusive. Further, there are more questions that need to be explored in future studies. First, this | 2307.09042#29 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling he had underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 29 | [Table 4 column headers: GPT-3.5, GPT-4, PaLM, PaLM-2-L, Llama 2]
Table 4: Comparison to closed-source models on academic benchmarks. Results for GPT-3.5 and GPT-4 are from OpenAI (2023). Results for the PaLM model are from Chowdhery et al. (2022). Results for the PaLM-2-L are from Anil et al. (2023).
# 3 Fine-tuning
Llama 2-Chat is the result of several months of research and iterative applications of alignment techniques, including both instruction tuning and RLHF, requiring significant computational and annotation resources.
In this section, we report on our experiments and findings using supervised fine-tuning (Section 3.1), as well as initial and iterative reward modeling (Section 3.2.2) and RLHF (Section 3.2.3). We also share a new technique, Ghost Attention (GAtt), which we find helps control dialogue flow over multiple turns (Section 3.3). See Section 4.2 for safety evaluations on fine-tuned models.
# 3.1 Supervised Fine-Tuning (SFT)
> Prompt: Write a poem to help me remember the first 10 elements on the periodic table, giving each
element its own line. | 2307.09288#29 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 30 | study focused solely on the EU ability of the LLMs, while EI is a multi-faceted construct encompassing not only EU but also emotion perception, emotion facilitation, and emotion management (e.g., Mayer et al., 2016; Mayer & Salovey, 1995; Salovey & Mayer, 1990). Therefore, future studies could design scenarios to examine whether LLMs can assist humans in leveraging emotions to facilitate cognitive processes and effectively manage their emotional responses.
Second, EI requires the integration of various facets to execute complex tasks, | 2307.09042#30 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling he had underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 30 | # 3.1 Supervised Fine-Tuning (SFT)
> Prompt: Write a poem to help me remember the first 10 elements on the periodic table, giving each
element its own line.
Response: Hydrogen comes first as element number one.
Helium is second for balloons to have fun!
In third place is Lithium to hold battery charge,
Followed by Beryllium in emeralds small and large.
Boron's number five to help us keep things clean.
Carbon's next at six, and it's found in every gene.
Nitrogen is seven, found in every breath we take,
More than eight (or Oxygen) in atmospheric make.
Number nine is Fluorine, helping polish up our teeth.
Neon gives us glowing signs to read and stand beneath.
> Prompt: I want you to roast me. I want you to make it particularly brutal, swearing at me.
Response: | 2307.09288#30 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 31 | Second, EI requires the integration of various facets to execute complex tasks,
which necessitate not only an understanding of emotions, but also the comprehension of thoughts, beliefs, and intentions. Future studies should adopt broader-scope assessments, akin to ToM tests, while avoiding their lack of discriminative power. Besides, with recent advancements, LLMs are now capable of processing multimodal information (Wang et al., 2023). Therefore, future studies should investigate how LLMs interpret complex emotions from multimodal inputs, such as text combined with facial expressions or the tone of voice. In short, tests that combine emotions with cognitive factors based on multimodal cues likely furnish a more comprehensive understanding of LLMs' EI, which is critical for LLMs' effective and ethically responsible deployment in real-world scenarios of mental health, interpersonal dynamics, work collaboration, and career achievement (e.g., Dulewicz & Higgs, 2000; Hanafi & Noor, 2016; Lea et
al., 2019; Mayer et al., 2016; McCleskey, 2014; Warwick & Nettelbeck, 2004).
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling he had underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 31 | > Prompt: I want you to roast me. I want you to make it particularly brutal, swearing at me.
Response:
I'm sorry, but I cannot comply with that request. Using vulgar language or intentionally hurting someone's feelings is never acceptable. Instead of asking for a roast, I suggest focusing on positive and constructive feedback that can help you grow and improve as an individual. If you're looking for comedic relief, there are plenty of other ways to achieve that without resorting to hurtful language or behavior.
Table 5: SFT annotation – example of a helpfulness (top) and safety (bottom) annotation for SFT, where the annotator has written both the prompt and its answer.
Getting Started. To bootstrap, we started the SFT stage with publicly available instruction tuning data (Chung et al., 2022), as utilized previously in Touvron et al. (2023). | 2307.09288#31 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 32 | al., 2019; Mayer et al., 2016; McCleskey, 2014; Warwick & Nettelbeck, 2004).
Finally, while the factors examined in this study contribute to our understanding of LLMs' EU, they are largely descriptive and thus do not establish causal relationships. With the recent progress of open-source LLMs (Bai et al., 2022; Chiang et al., 2023; Conover et al., 2023; Geng et al., 2023; Köpf et al., 2023; Peng et al., 2023; Taori et al., 2023b; Touvron et al., 2023; Zeng et al., 2022; Zheng et al., 2023), direct manipulation of the potentially influential factors, such as training approaches and model size, has become feasible. Such manipulations will facilitate the establishment of causal relationships between these factors and models' EI ability, offering valuable insights for the development of future LLMs with better EI.
# Methods
# Participants | 2307.09042#32 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling he had underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 32 | Quality Is All You Need. Third-party SFT data is available from many different sources, but we found that many of these have insufficient diversity and quality – in particular for aligning LLMs towards dialogue-style instructions. As a result, we focused first on collecting several thousand examples of high-quality SFT data, as illustrated in Table 5. By setting aside millions of examples from third-party datasets and using fewer but higher-quality examples from our own vendor-based annotation efforts, our results notably improved. These findings are similar in spirit to Zhou et al. (2023), which also finds that a limited set of clean instruction-tuning data can be sufficient to reach a high level of quality. We found that SFT annotations in the order of tens of thousands was enough to achieve a high-quality result. We stopped annotating SFT after collecting a total of 27,540 annotations. Note that we do not include any Meta user data. | 2307.09288#32 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 33 | # Methods
# Participants
A total of 541 human participants with valid responses were included in this study. The participants (339 females, 202 males; mean age: 22.33 years, SD: 2.49, range: 17–30 years) were all undergraduate and postgraduate college students in China. Informed consent was obtained prior to the SECEU test, and participants were reimbursed after they completed the whole test. To ensure anonymity and data privacy, participants did not input any information that could identify them during the process. This study was approved by the Institutional Review Board at Tsinghua University.
We also invited three experts to take the SECEU test. Expert 1 is an accomplished Human Resources professional who has over 20 years of experience in navigating human emotions within diverse work environments, strengthening her discernment in emotional intelligence. Expert 2 is a renowned figure in psychometrics and her expertise in creating tests assessing psychological variables lends exceptional rigor to our process. Expert 3 is an associate professor of psychology, whose deep understanding of human emotions, backed by her extensive academic achievements, makes her especially suitable for this test.
# Procedure
# The SECEU test for human participants | 2307.09042#33 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling he had underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 33 | We also observed that different annotation platforms and vendors can result in markedly different downstream model performance, highlighting the importance of data checks even when using vendors to source annotations. To validate our data quality, we carefully examined a set of 180 examples, comparing the annotations provided by humans with the samples generated by the model through manual scrutiny. Surprisingly, we found that the outputs sampled from the resulting SFT model were often competitive with SFT data handwritten by human annotators, suggesting that we could reprioritize and devote more annotation effort to preference-based annotation for RLHF. | 2307.09288#33 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 34 | # Procedure
# The SECEU test for human participants
The online SECEU test was built on the JATOS platform (Lange et al., 2015) with the jsPsych plugin (de Leeuw et al., 2023), and was written in the React framework (https://reactjs.org/). Each item presented participants with a scenario followed by four emotion options (40 items in total; see https://emotional-intelligence.github.io/ for both the English and Chinese versions). Participants were instructed to read the scenario and then allocate a total of 10 points across the four
emotion options based on the intensity of each emotion experienced by the person in the scenario. There were no correct or incorrect answers. Participants were encouraged to respond according to their own understanding and interpretation of the scenarios.
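As a minimal illustration of this sum-to-ten response format, the following sketch validates one hypothetical item response (the helper name and the example allocation are ours, not part of the published test):

```python
from typing import Dict

def validate_allocation(scores: Dict[str, float], total: float = 10.0) -> bool:
    """Check that a response distributes exactly `total` points across the
    four emotion options, with no negative allocations."""
    return all(v >= 0 for v in scores.values()) and \
        abs(sum(scores.values()) - total) < 1e-6

# Hypothetical response to one SECEU-style item:
response = {"Surprised": 4, "Joyful": 3, "Puzzled": 1, "Proud": 2}
assert validate_allocation(response)
```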
# The SECEU test for LLMs | 2307.09042#34 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling he had underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 34 | Fine-Tuning Details. For supervised fine-tuning, we use a cosine learning rate schedule with an initial learning rate of 2 × 10^-5, a weight decay of 0.1, a batch size of 64, and a sequence length of 4096 tokens. For the fine-tuning process, each sample consists of a prompt and an answer. To ensure the model sequence length is properly filled, we concatenate all the prompts and answers from the training set. A special token is utilized to separate the prompt and answer segments. We utilize an autoregressive objective and zero-out the loss on tokens from the user prompt, so as a result, we backpropagate only on answer tokens. Finally, we fine-tune the model for 2 epochs.
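To make the packing and loss-masking scheme concrete, here is a minimal PyTorch-style sketch; the separator id, the IGNORE_INDEX convention, and the helper names are our illustrative assumptions, not the authors' released training code:

```python
import torch
import torch.nn.functional as F

IGNORE_INDEX = -100  # positions with this label are skipped by cross_entropy
SEP_ID = 2           # placeholder id for the special prompt/answer separator

def pack_example(prompt_ids, answer_ids):
    """Concatenate prompt and answer into one sequence, and mask the prompt
    (and separator) positions so the loss is backpropagated only on answer
    tokens, as described above."""
    input_ids = prompt_ids + [SEP_ID] + answer_ids
    labels = [IGNORE_INDEX] * (len(prompt_ids) + 1) + answer_ids
    return torch.tensor(input_ids), torch.tensor(labels)

def sft_loss(logits, labels):
    """Autoregressive objective: predict token t+1 from positions <= t;
    prompt tokens contribute nothing because their labels are masked."""
    return F.cross_entropy(
        logits[:-1].reshape(-1, logits.size(-1)),
        labels[1:].reshape(-1),
        ignore_index=IGNORE_INDEX,
    )

# The reported hyperparameters would be wired up roughly as (illustrative):
# optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, weight_decay=0.1)
# scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_steps)
```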
# 3.2 Reinforcement Learning with Human Feedback (RLHF)
RLHF is a model training procedure that is applied to a fine-tuned language model to further align model behavior with human preferences and instruction following. We collect data that represents empirically
sampled human preferences, whereby human annotators select which of two model outputs they prefer. This human feedback is subsequently used to train a reward model, which learns patterns in the preferences of the human annotators and can then automate preference decisions.
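For concreteness, such pairwise comparisons are commonly turned into a training signal with a Bradley-Terry-style ranking loss; the sketch below shows that standard formulation and is illustrative rather than necessarily the exact loss used here:

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    """Binary ranking loss over pairs of scalar rewards: push the score of
    the human-preferred output above that of the rejected one."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy usage with scalar rewards a reward model would emit per response:
chosen = torch.tensor([1.3, 0.4])
rejected = torch.tensor([0.2, 0.9])
print(preference_loss(chosen, rejected))  # decreases as preferred outputs score higher
```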
# 3.2.1 Human Preference Data Collection | 2307.09288#34 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 35 | # The SECEU test for LLMs
A variety of mainstream LLMs, including the OpenAI GPT series (GPT-4, GPT-3.5-turbo, Curie, Babbage, DaVinci, text-davinci-001, text-davinci-002, and text-davinci-003), Claude, LLaMA-based models (Alpaca, Koala, LLaMA, and Vicuna), Fastchat, Pythia-based models (Dolly and Oasst), GLM-based models (ChatGLM), and RNN-based models (RWKV), were evaluated by the SECEU test. Given that the majority of these models are primarily trained on English datasets, using the English version of the SECEU provides a more accurate assessment of their performance, allowing for a clearer comparison between their abilities and those of a human. As a result, the English version of SECEU was presented to the LLMs instead of the Chinese version.
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09042 | 36 | The task was in a direct question-and-answer format. We asked, for example, "Story: Wang participated in a mathematics competition but felt he had not performed to his full potential. However, when the results were announced, he found that he was in a position of top 10. He would feel: Options: (1) Surprised; (2) Joyful; (3) Puzzled; (4) Proud. Assign a score to each option based on the story, sum up to 10". The direct prompt was occasionally varied in subtle ways; for instance, we used "provide a score for each emotion based on the emotion (sum of four options should be of 10 points)" for Dolly. A set of paraphrases of the direct prompt was used to elicit the best performance from each model.
To reduce randomness, the temperature parameter was fixed at a constant 0.1 and the top_p parameter was set to 1 across all these models. To cap the maximum length of the generated text, the max_tokens parameter was set to 512.
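As a rough illustration, the call below shows how such decoding parameters might be passed, assuming the legacy (pre-1.0) openai Python SDK; the helper name ask_seceu_item and the default model string are hypothetical, not details from the paper.

```python
import openai  # legacy (pre-1.0) OpenAI SDK interface

def ask_seceu_item(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Query a chat model with the decoding parameters reported above."""
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.1,  # low temperature to reduce randomness
        top_p=1,          # top_p left at 1, as in the study
        max_tokens=512,   # cap on the length of the generated text
    )
    return response["choices"][0]["message"]["content"]
```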
Before being processed by the models, text data underwent several preprocessing steps to normalize it. This normalization ensures that the data fed into the models is in a consistent and appropriate format, improving output quality. If a model did | 2307.09042#36 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 36 | Our annotation procedure proceeds as follows. We ask annotators to first write a prompt, then choose between two sampled model responses, based on provided criteria. In order to maximize the diversity, the two responses to a given prompt are sampled from two different model variants, with varying temperature hyper-parameters. In addition to giving participants a forced choice, we also ask annotators to label the degree to which they prefer their chosen response over the alternative: either their choice is significantly better, better, slightly better, or negligibly better/unsure. For our collection of preference annotations, we focus on helpfulness and safety. Helpfulness refers to how well Llama 2-Chat responses fulfill users' requests and provide requested information; safety refers to whether Llama 2-Chat's responses are unsafe, e.g., "giving detailed instructions on making a bomb" could be considered helpful but is unsafe according to our safety guidelines. Separating the two allows us to apply specific guidelines to each and better guide annotators; for example, our safety annotations provide instructions to focus on adversarial prompts, among other guidance. | 2307.09288#36 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 37 | not provide any meaningful response to an item, the response for this item was predefined as a null vector (0, 0, 0, 0). A few models failed to generate a response for a majority of items (LLaMA: 40; Fastchat: 31; RWKV-v4: 31; DaVinci: 40; Curie: 40; Babbage: 40; text-davinci-001: 26; text-davinci-002: 28; marked as "FAILED" in Table S1). Several models produced answers whose four scores did not sum to 10; these were handled as follows:
(i) Answer vectors signifying null responses, i.e., (0, 0, 0, 0), were preserved as such (Alpaca: 1; ChatGLM: 1).
(ii) For answer vectors containing negative values, the absolute value of the lowest number was added to all elements, and the result was then normalized to preserve the original scale. For instance, an original vector of (-4, -2, -2, 2) would be adjusted to (0, 2, 2, 6).
(iii) The remaining answer vectors were rescaled to a cumulative score of 10 by proportionally distributing 10 points according to each value's contribution to the item's total (see the sketch below).
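A minimal sketch of these three rules, assuming numpy and a 4-option answer vector; normalize_answer is a hypothetical helper, not the authors' code.

```python
import numpy as np

def normalize_answer(vec) -> np.ndarray:
    """Post-process a model's 4-option answer per steps (i)-(iii)."""
    v = np.asarray(vec, dtype=float)
    if not v.any():              # (i) null responses (0, 0, 0, 0) kept as-is
        return v
    if v.min() < 0:              # (ii) shift up by |minimum| if negatives occur
        v = v + abs(v.min())
    s = v.sum()
    return 10 * v / s if s > 0 else v  # (iii) rescale proportionally to sum to 10

# e.g. normalize_answer([-4, -2, -2, 2]) -> array([0., 2., 2., 6.])
```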
# LLMs' EQ | 2307.09042#37 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 37 | Apart from differences in annotation guidelines, we additionally collect a safety label during the safety stage. This additional information bins model responses into one of three categories: 1) the preferred response is safe and the other response is not, 2) both responses are safe, and 3) both responses are unsafe, with 18%, 47%, and 35% of the safety dataset falling into each bin, respectively. We do not include any examples where the chosen response was unsafe and the other response safe, as we believe safer responses will also be better/preferred by humans. Safety guidelines and more detailed information regarding safety annotations can be found in Section 4.2.1.
Human annotations were collected in batches on a weekly basis. As we collected more preference data, our reward models improved, and we were able to train progressively better versions for Llama 2-Chat (see the results in Section 5, Figure 20). Llama 2-Chat improvement also shifted the model's data distribution. Since reward model accuracy can quickly degrade if not exposed to this new sample distribution, i.e., from hyper-specialization (Scialom et al., 2020b), it is important before a new Llama 2-Chat tuning iteration to gather new preference data using the latest Llama 2-Chat iterations. This step helps keep the reward model on-distribution and maintain an accurate reward for the latest model.
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 38 | # LLMs' EQ
The standard score (a 40 × 4 matrix; see https://emotional-intelligence.github.io/ for the standard score) for each emotion of each item in the SECEU test was calculated by averaging the corresponding scores across the human participants. The performance of each LLM was standardized by calculating the Euclidean distance between the model's responses (LLM) and the standard scores of humans (SS) on each item i (from 1 to 40) and then averaging across items to yield the SECEU score. Lower SECEU scores indicate greater alignment with human standards.
$$\text{SECEU score} = \frac{1}{40}\sum_{i=1}^{40}\sqrt{(LLM_{i1}-SS_{i1})^2+(LLM_{i2}-SS_{i2})^2+(LLM_{i3}-SS_{i3})^2+(LLM_{i4}-SS_{i4})^2}$$
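A sketch of this computation, assuming numpy arrays of shape (40, 4) holding the model's answers and the human standard scores; seceu_score is a hypothetical helper name.

```python
import numpy as np

def seceu_score(llm: np.ndarray, standard: np.ndarray) -> float:
    """Average per-item Euclidean distance between a model's answers
    and the human standard scores (both arrays shaped (40, 4))."""
    distances = np.linalg.norm(llm - standard, axis=1)  # one distance per item
    return float(distances.mean())
```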
The SECEU score was then normalized into an EQ score which was designed to
| 2307.09042#38 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 38 | In Table 6, we report the statistics of reward modeling data that we collected over time, and present them against multiple open-source preference datasets including Anthropic Helpful and Harmless (Bai et al., 2022a), OpenAI Summarize (Stiennon et al., 2020), OpenAI WebGPT (Nakano et al., 2021), StackExchange (Lambert et al., 2023), Stanford Human Preferences (Ethayarajh et al., 2022), and Synthetic GPT-J (Havrilla). We collected a large dataset of over 1 million binary comparisons based on humans applying our specified guidelines, which we refer to as Meta reward modeling data. Note that the number of tokens in prompts and answers differs depending on the text domain. Summarization and online forum data generally have longer prompts, while dialogue-style prompts are usually shorter. Compared to existing open-source datasets, our preference data features more conversation turns, and are longer, on average.
# 3.2.2 Reward Modeling
The reward model takes a model response and its corresponding prompt (including contexts from previous turns) as inputs and outputs a scalar score to indicate the quality (e.g., helpfulness and safety) of the model generation. Leveraging such response scores as rewards, we can optimize Llama 2-Chat during RLHF for better human preference alignment and improved helpfulness and safety. | 2307.09288#38 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 39 | The SECEU score was then normalized into an EQ score which was designed to
follow a normal distribution with the average score set at 100 and the standard deviation at 15. The standardization process involved the following steps: (1) the original SECEU score was subtracted from the mean value of the human norm and divided by its standard deviation, and (2) the resulting value was then multiplied by the new standard deviation (15) and added to the new mean value (100), yielding the EQ score. Thus, the EQ score represents a normalized measure of the LLM's EQ, permitting easier comparison across different models.
$$\text{LLM's EQ} = 15 \times \frac{M - \text{SECEU score}}{SD} + 100$$
where $M$ and $SD$ denote the mean and standard deviation of the human norm's SECEU scores.
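A one-line sketch of this conversion; human_mean and human_sd stand for the mean and standard deviation of the human norm's SECEU scores (the names are mine, not the paper's).

```python
def eq_score(seceu: float, human_mean: float, human_sd: float) -> float:
    """Map a SECEU score onto an EQ scale (mean 100, SD 15); a lower
    SECEU score (closer to the human standard) yields a higher EQ."""
    return 15 * (human_mean - seceu) / human_sd + 100
```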
# LLMs' representational pattern | 2307.09042#39 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 39 | Others have found that helpfulness and safety sometimes trade off (Bai et al., 2022a), which can make it challenging for a single reward model to perform well on both. To address this, we train two separate reward models, one optimized for helpfulness (referred to as Helpfulness RM) and another for safety (Safety RM). We initialize our reward models from pretrained chat model checkpoints, as it ensures that both models benefit from knowledge acquired in pretraining. In short, the reward model "knows" what the chat model
| 2307.09288#39 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 40 | $\text{LLM's EQ} = 15 \times \frac{M - \text{SECEU score}}{SD} + 100$
# LLMs' representational pattern
To establish strong equivalence between the LLMs and humans, we examined whether they employed similar representations to conduct the test. Item-wise correlation analysis (Izard & Spelke, 2009; Tian et al., 2020) was applied to compare response patterns between the LLMs and human participants. The human template (a vector with a length of 40; see https://emotional-intelligence.github.io/ for the human pattern template) was generated by averaging the multi-item discriminability patterns across all human participants, where each pattern was constructed based on the distance of each item to the standard score. The multi-item discriminability pattern of a specific LLM was also calculated based on the distance of each item i (from 1 to 40) to the standard scores of humans (SS).
$$\text{Discriminability}_i = \sqrt{(LLM_{i1}-SS_{i1})^2+(LLM_{i2}-SS_{i2})^2+(LLM_{i3}-SS_{i3})^2+(LLM_{i4}-SS_{i4})^2}, \quad i = 1, \dots, 40$$ | 2307.09042#40 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 40 |
Dataset                        Num. of Comparisons   Avg. # Turns per Dialogue   Avg. # Tokens per Example   Avg. # Tokens in Prompt   Avg. # Tokens in Response
Anthropic Helpful              122,387               3.0                         251.5                       17.7                      88.4
Anthropic Harmless             43,966                3.0                         152.5                       15.7                      46.4
OpenAI Summarize               176,625               1.0                         371.1                       336.0                     35.1
OpenAI WebGPT                  13,333                1.0                         237.2                       48.3                      188.9
StackExchange                  1,038,480             1.0                         440.2                       200.1                     240.2
Stanford SHP                   74,882                1.0                         338.3                       199.5                     138.8
Synthetic GPT-J                33,139                1.0                         123.3                       13.0                      110.3
Meta (Safety & Helpfulness)    1,418,091             3.9                         798.5                       31.4                      234.1
Total                          2,919,326             1.6                         595.7                       108.2                     216.9 | 2307.09288#40 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 41 |
We calculated the Pearson correlation coefficient between the discriminability pattern of each participant and the human template (Human-to-Human similarity). To avoid inflating the correlation, the template was constructed excluding the individual whose Human-to-Human correlation coefficient was being calculated. The distribution of the Human-to-Human similarity served as a norm for pattern similarity. The Pearson correlation coefficient between the discriminability pattern of each LLM and the human template was calculated as the LLM-to-Human similarity.
$$\text{LLM-to-Human similarity} = \frac{\sum_{i=1}^{40}(X_i-\bar{X})(Y_i-\bar{Y})}{\sqrt{\sum_{i=1}^{40}(X_i-\bar{X})^2}\,\sqrt{\sum_{i=1}^{40}(Y_i-\bar{Y})^2}}$$
Here, $X_i$ and $Y_i$ represent the item i of the "Discriminability" vector and the | 2307.09042#41 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 41 | Table 6: Statistics of human preference data for reward modeling. We list both the open-source and internally collected human preference data used for reward modeling. Note that a binary human preference comparison contains 2 responses (chosen and rejected) sharing the same prompt (and previous dialogue). Each example consists of a prompt (including previous dialogue if available) and a response, which is the input of the reward model. We report the number of comparisons, the average number of turns per dialogue, the average number of tokens per example, per prompt and per response. More details on Meta helpfulness and safety data per batch can be found in Appendix A.3.1.
knows. This prevents cases where, for instance, the two models would have an information mismatch, which could result in favoring hallucinations. The model architecture and hyper-parameters are identical to those of the pretrained language models, except that the classification head for next-token prediction is replaced with a regression head for outputting a scalar reward.
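A schematic of such a head swap in PyTorch; the backbone, its output convention, and last-token pooling are all assumptions for illustration, not details from the paper.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """A causal-LM trunk with the next-token classification head replaced
    by a linear regression head that emits one scalar reward."""
    def __init__(self, backbone: nn.Module, hidden_dim: int):
        super().__init__()
        self.backbone = backbone                  # pretrained transformer trunk
        self.reward_head = nn.Linear(hidden_dim, 1)

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        hidden = self.backbone(input_ids)         # (batch, seq_len, hidden_dim)
        return self.reward_head(hidden[:, -1, :]).squeeze(-1)  # last-token score
```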
Training Objectives. To train the reward model, we convert our collected pairwise human preference data into a binary ranking label format (i.e., chosen & rejected) and enforce the chosen response to have a higher score than its counterpart. We used a binary ranking loss consistent with Ouyang et al. (2022): | 2307.09288#41 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 42 | Here, $X_i$ and $Y_i$ represent the item i of the "Discriminability" vector and the human template vector, respectively. The length of both vectors is 40. $\bar{X}$ and $\bar{Y}$ denote the means of $X$ and $Y$, respectively. If the LLM-to-Human similarity falls more than one SD below the population mean, the LLM is considered to employ a qualitatively different mechanism from humans in EU.
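A sketch of the pattern-similarity computation described above, assuming numpy; discriminability and llm_to_human_similarity are hypothetical helper names.

```python
import numpy as np

def discriminability(answers: np.ndarray, standard: np.ndarray) -> np.ndarray:
    """Length-40 pattern: per-item distance to the human standard scores
    (both arrays shaped (40, 4))."""
    return np.linalg.norm(answers - standard, axis=1)

def llm_to_human_similarity(llm_pattern: np.ndarray,
                            human_template: np.ndarray) -> float:
    """Pearson correlation between a model's 40-item discriminability
    pattern and the averaged human template."""
    return float(np.corrcoef(llm_pattern, human_template)[0, 1])
```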
# Prompt engineering | 2307.09042#42 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 42 | $\mathcal{L}_{\text{ranking}} = -\log(\sigma(r_\theta(x, y_c) - r_\theta(x, y_r)))$ (1) where $r_\theta(x, y)$ is the scalar score output for prompt $x$ and completion $y$ with model weights $\theta$. $y_c$ is the preferred response that annotators choose and $y_r$ is the rejected counterpart. Built on top of this binary ranking loss, we further modify it separately for better helpfulness and safety reward models as follows. Given that our preference ratings are decomposed on a scale of four points (e.g., significantly better), as presented in Section 3.2.1, it can be useful to leverage this information to explicitly teach the reward model to assign more discrepant scores to the generations that have more differences. To do so, we further add a margin component in the loss: | 2307.09288#42 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 43 | # Prompt engineering
Prompt engineering, the meticulous development and choice of prompts, plays a pivotal role in the efficacy of LLMs (Hebenstreit et al., 2023; Hendrycks et al., 2021; Nair et al., 2023; OpenAI, 2023; Shinn et al., 2023). In essence, prompt engineering refers to the strategy of designing and selecting prompts that can substantially guide and influence the responses of LLMs. Its value lies in its potential to enhance the precision and relevance of the responses generated by these models, leading to more effective and reliable outcomes. In the realm of emotional intelligence, prompts serve a crucial function: they provide direction for the model, enabling it to generate responses that are not just accurate but also emotionally intelligent. Given that emotional intelligence involves understanding, processing, and managing emotions, prompts can significantly aid the LLMs in identifying the correct context and producing responses that exhibit emotional understanding and empathy.
To examine the prompts' influence on EU ability, thereby optimizing model outputs, four kinds of prompt engineering techniques (see https://emotional-intelligence.github.io/ for prompts) were applied to the OpenAI GPT series (GPT-3.5-turbo, Curie, Babbage, DaVinci, text-davinci-001, text-davinci-002, and text-davinci-
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 43 | $\mathcal{L}_{\text{ranking}} = -\log(\sigma(r_\theta(x, y_c) - r_\theta(x, y_r) - m(r)))$ where the margin $m(r)$ is a discrete function of the preference rating. Naturally, we use a large margin for pairs with distinct responses, and a smaller one for those with similar responses (shown in Table 27). We found this margin component can improve Helpfulness reward model accuracy especially on samples where two responses are more separable. More detailed ablation and analysis can be found in Table 28 in Appendix A.3.3.
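As a minimal sketch, the ranking objective above could be written as follows in PyTorch; the margin values are illustrative placeholders (the actual schedule is given in Table 27 of the paper), and all function and variable names are ours.

```python
import torch
import torch.nn.functional as F

# Illustrative margin table m(r); Table 27 of the paper defines the real values.
MARGINS = {"significantly_better": 1.0, "better": 2 / 3,
           "slightly_better": 1 / 3, "negligibly_better": 0.0}

def ranking_loss(r_chosen, r_rejected, ratings):
    """-log(sigmoid(r(x, y_c) - r(x, y_r) - m(r))), averaged over the batch.

    r_chosen, r_rejected: scalar reward scores for the preferred and rejected
    responses, shape (batch,); ratings: one preference-rating label per pair.
    """
    m = torch.tensor([MARGINS[r] for r in ratings],
                     dtype=r_chosen.dtype, device=r_chosen.device)
    return -F.logsigmoid(r_chosen - r_rejected - m).mean()
```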
Data Composition. We combine our newly collected data with existing open-source preference datasets to form a larger training dataset. Initially, open-source datasets were used to bootstrap our reward models while we were in the process of collecting preference annotation data. We note that in the context of RLHF in this study, the role of reward signals is to learn human preference for Llama 2-Chat outputs rather than any model outputs. However, in our experiments, we do not observe negative transfer from the open-source preference datasets. Thus, we have decided to keep them in our data mixture, as they could enable better generalization for the reward model and prevent reward hacking, i.e. Llama 2-Chat taking advantage of some weaknesses of our reward, and so artificially inflating the score despite performing less well. | 2307.09288#43 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 44 | 003): (1) Zero-shot Prompts, (2) Enhanced Zero-shot Prompts Incorporating Step-by-Step Thinking, (3) Two-shot Chain of Thought Reasoning Approaches, and (4) Two-shot Chain of Thought Reasoning Approaches Augmented with Step-by-Step Thinking.
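As an illustration only, the four variants can be skeletonized as below; the exact wording used in the study is published at https://emotional-intelligence.github.io/, and the placeholder scenario, question, and examples are ours.

```python
# Hypothetical skeletons of the four prompt-engineering variants.
scenario = "<EU test scenario>"  # a short emotional vignette from the test
question = "Rate the intensity of each emotion option on a scale of 1-10."
examples = "<two worked examples with chain-of-thought reasoning and ratings>"

zero_shot = f"{scenario}\n{question}"                                 # variant (1)
zero_shot_stepwise = f"{zero_shot}\nLet's think step by step."        # variant (2)
two_shot_cot = f"{examples}\n\n{scenario}\n{question}"                # variant (3)
two_shot_cot_stepwise = f"{two_shot_cot}\nLet's think step by step."  # variant (4)
```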
To decrease randomness, a constant temperature parameter of 0 was used and the top_p parameter was set to 0.9 across all these models. To cap the maximum length of the generated text, the max_tokens parameter was set to 2048. The normalization process was the same as the one without prompts.
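A minimal sketch of one such query through the legacy OpenAI completions endpoint follows; only the decoding parameters (temperature 0, top_p 0.9, max_tokens 2048) come from the text above, while the model choice and the prompt wording are illustrative.

```python
import openai  # openai-python < 1.0 interface

# Illustrative EU-style item in the step-by-step variant; actual test items
# are published at https://emotional-intelligence.github.io/.
prompt = (
    "Despite feeling he had underperformed, John surprisingly achieved a top score.\n"
    "Rate the intensity of each emotion on a scale of 1-10: "
    "surprised, joyful, puzzled, proud.\n"
    "Let's think step by step."
)

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    temperature=0,    # constant temperature to minimize randomness
    top_p=0.9,
    max_tokens=2048,  # upper bound on generated length
)
print(response["choices"][0]["text"])
```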
# References
Amin, M. M., Cambria, E., & Schuller, B. W. (2023). Will Affective Computing Emerge
from Foundation Models and General AI? A First Evaluation on ChatGPT.
https://doi.org/10.48550/ARXIV.2303.03186
Bai, Y., Kadavath, S., Kundu, S., Askell, A., Kernion, J., Jones, A., Chen, A., Goldie, | 2307.09042#44 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 44 | With training data available from different sources, we experimented with different mixing recipes for both Helpfulness and Safety reward models to ascertain the best settings. After extensive experimentation, the
Helpfulness reward model is eventually trained on all Meta Helpfulness data, combined with equal parts of the remaining data uniformly sampled from Meta Safety and from the open-source datasets. The Meta Safety reward model is trained on all Meta Safety and Anthropic Harmless data, mixed with Meta Helpfulness and open-source helpfulness data in a 90/10 proportion. We found that the setting with 10% helpfulness data is especially beneficial for the accuracy on samples where both the chosen and rejected responses were deemed safe.
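A rough sketch of assembling the Safety reward-model mixture described above; the function and argument names are ours, each argument stands in for a list of preference pairs, and the assumption that the 10% helpfulness share is drawn by uniform sampling is ours as well.

```python
import random

def build_safety_rm_mixture(meta_safety, anthropic_harmless,
                            meta_helpful, open_helpful):
    """90/10 safety/helpfulness mixture for the Safety reward model."""
    safety = meta_safety + anthropic_harmless  # all safety data (90% of mixture)
    n_helpful = round(len(safety) / 9)         # 10% of the final mixture
    helpful = random.sample(meta_helpful + open_helpful, n_helpful)
    return safety + helpful
```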
Training Details. We train for one epoch over the training data. In earlier experiments, we found that training longer can lead to over-fitting. We use the same optimizer parameters as for the base model. The maximum learning rate is $5 \times 10^{-6}$ for the 70B parameter Llama 2-Chat and $1 \times 10^{-5}$ for the rest. The learning rate is decreased on a cosine learning rate schedule, down to 10% of the maximum learning rate. We use a warm-up of 3% of the total number of steps, with a minimum of 5. The effective batch size is kept fixed at 512 pairs, or 1024 rows per batch. | 2307.09288#44 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 45 | Bai, Y., Kadavath, S., Kundu, S., Askell, A., Kernion, J., Jones, A., Chen, A., Goldie,
A., Mirhoseini, A., McKinnon, C., Chen, C., Olsson, C., Olah, C., Hernandez, D., Drain, D., Ganguli, D., Li, D., Tran-Johnson, E., Perez, E., … Kaplan, J. (2022). Constitutional AI: Harmlessness from AI Feedback (arXiv:2212.08073). arXiv. http://arxiv.org/abs/2212.08073
Baron-Cohen, S. (2020). The pattern seekers: How autism drives human invention.
Basic Books.
Baron-Cohen, S., Leslie, A. M., & Frith, U. (1985). Does the autistic child have a
"theory of mind"? Cognition, 21(1), 37–46. https://doi.org/10.1016/0010-0277(85)90022-8 | 2307.09042#45 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 45 | Model            Meta Helpful.  Meta Safety  Anthropic Helpful  Anthropic Harmless  OpenAI Summ.  Stanford SHP  Avg
SteamSHP-XL      52.8           43.8         66.8               34.2                54.7          75.7          55.3
Open Assistant   53.8           53.4         67.7               68.4                71.7          55.0          63.0
GPT4             58.6           58.1         -                  -                   -             -             -
Safety RM        56.2           64.5         55.4               74.7                71.7          65.2          64.3
Helpfulness RM   63.2           62.8         72.0               71.0                75.5          80.0          70.6
Table 7: Reward model results. Performance of our final helpfulness and safety reward models on a diverse set of human preference benchmarks. Note that our model is fine-tuned on our collected data, as opposed to the other baselines that we report.
Test Set       Model           Significantly Better  Better  Slightly Better  Negligibly Better / Unsure  Avg
Meta Safety    Safety RM       94.3                  76.3    65.7             55.3                        64.5
Meta Safety    Helpfulness RM  89.9                  73.2    63.8             54.5                        62.8
Meta Helpful.  Safety RM       64.6                  57.5    53.8             52.2                        56.2
Meta Helpful.  Helpfulness RM  80.7                  67.5    60.9             54.7                        63.2 | 2307.09288#45 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 46 | 0277(85)90022-8
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan,
A., Shyam, P., Sastry, G., Askell, A., & others. (2020). Language models are
few-shot learners. Advances in Neural Information Processing Systems, 33,
1877–1901.
Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P.,
Lee, Y. T., Li, Y., Lundberg, S., Nori, H., Palangi, H., Ribeiro, M. T., & Zhang,
Y. (2023). Sparks of Artificial General Intelligence: Early experiments with GPT-4. https://doi.org/10.48550/ARXIV.2303.12712
Chiang, W.-L., Li, Z., Lin, Z., Sheng, Y., Wu, Z., Zhang, H., Zheng, L., Zhuang, S.,
Zhuang, Y., Gonzalez, J. E., Stoica, I., & Xing, E. P. (2023). Vicuna: An Open-Source Chatbot | 2307.09042#46 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 46 | Table 8: Granular reward model accuracy per preference rating. We report per-preference rating accuracy for both Helpfulness and Safety reward models on the Meta Helpfulness and Safety test sets. The reward models show superior accuracy on more distinct responses (e.g., significantly better) and lower accuracy on similar responses (e.g., negligibly better).
Reward Model Results. On each batch of human preference annotation for reward modeling, we held out 1000 examples as a test set to evaluate our models. We refer to the union of all prompts for the corresponding test sets as "Meta Helpfulness" and "Meta Safety," respectively. | 2307.09288#46 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 47 | Zhuang, Y., Gonzalez, J. E., Stoica, I., & Xing, E. P. (2023). Vicuna: An Open-Source Chatbot
Impressing GPT-4 with 90%* ChatGPT Quality.
https://lmsys.org/blog/2023-03-30-vicuna/
Conover, M., Hayes, M., Mathur, A., Meng, X., Xie, J., Wan, J., Shah, S., Ghodsi, A.,
Wendell, P., Zaharia, M., & others. (2023). Free Dolly: Introducing the world's first truly open instruction-tuned LLM.
de Leeuw, J. R., Gilbert, R. A., & Luchterhandt, B. (2023). jsPsych: Enabling an open-source collaborative ecosystem of behavioral experiments. Journal of Open
Source Software, 8(85), 5351. https://doi.org/10.21105/joss.05351
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep
Bidirectional Transformers for Language Understanding (arXiv:1810.04805).
arXiv. http://arxiv.org/abs/1810.04805 | 2307.09042#47 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 47 | As reference points, we also evaluated other publicly available alternatives as baselines: SteamSHP-XL (Ethayarajh et al., 2022) based on FLAN-T5-xl, the Open Assistant (Köpf et al., 2023) reward model based on DeBERTa V3 Large (He et al., 2020), and GPT4 accessible through the OpenAI API. Note that at inference time, as opposed to training, all the reward models can predict a scalar for a single output, without requiring access to its paired output. For GPT-4, we prompt with a zero-shot question "Choose the best answer between A and B," where A and B are the two responses for comparison. We report the results in terms of accuracy in Table 7. As expected, our own reward models perform the best on our internal test sets collected based on Llama 2-Chat, with the Helpfulness reward model performing best on the Meta Helpfulness test set, and similarly the Safety reward model performing best on the Meta Safety test set. Overall, our reward models outperform all of the baselines, including GPT-4. Interestingly, GPT-4 performs better than other non-Meta reward models, despite not being trained directly on nor specifically targeting this reward modeling task.
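The zero-shot GPT-4 comparison might be issued as in the sketch below; only the quoted question comes from the text, while the surrounding formatting, model identifier, and answer parsing are our assumptions.

```python
import openai  # openai-python < 1.0 interface

def gpt4_prefers_a(prompt, response_a, response_b):
    """Zero-shot pairwise judgment; returns True if GPT-4 picks response A."""
    question = (
        "Choose the best answer between A and B.\n\n"
        f"Prompt: {prompt}\n\nA: {response_a}\n\nB: {response_b}"
    )
    reply = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
        temperature=0,  # deterministic judgment
    )
    return reply["choices"][0]["message"]["content"].strip().startswith("A")
```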
| 2307.09288#47 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 48 | Bidirectional Transformers for Language Understanding (arXiv:1810.04805).
arXiv. http://arxiv.org/abs/1810.04805
Dulewicz, V., & Higgs, M. (2000). Emotional intelligence – A review and evaluation study. Journal of Managerial Psychology, 15(4), 341–372. https://doi.org/10.1108/02683940010330993
Geng, X., Gudibande, A., Liu, H., Wallace, E., Abbeel, P., Levine, S., & Song, D. (2023).
Koala: A Dialogue Model for Academic Research. https://bair.berkeley.edu/blog/2023/04/03/koala/
Hanafi, Z., & Noor, F. (2016). Relationship between Emotional Intelligence and
Academic Achievement in Emerging Adults: A Systematic Review.
International Journal of Academic Research in Business and Social Sciences,
6(6), 268–290. https://doi.org/10.6007/IJARBSS/v6-i6/2197
Hebenstreit, K., Praas, R., Kiesewetter, L. P., & Samwald, M. (2023). An automatically
discovered chain-of-thought prompt generalizes to novel models and datasets | 2307.09042#48 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not reply on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 48 | [Figure 6 plot: two panels of reward model accuracy versus Meta Helpfulness data batch stage (1-14) for the 7b, 13b, and 70b reward models, with GPT4 and OpenAssistant as baselines.]
Figure 6: Scaling trends for the reward model. More data and a larger-size model generally improve accuracy, and it appears that our models have not yet saturated from learning on the training data. | 2307.09288#48 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 49 | discovered chain-of-thought prompt generalizes to novel models and datasets
(arXiv:2305.02897). arXiv. http://arxiv.org/abs/2305.02897
Hendrycks, D., Burns, C., Basart, S., Zou, A., Mazeika, M., Song, D., & Steinhardt, J.
(2021). Measuring Massive Multitask Language Understanding
(arXiv:2009.03300). arXiv. http://arxiv.org/abs/2009.03300
Izard, V., & Spelke, E. S. (2009). Development of Sensitivity to Geometry in Visual
Forms. Human Evolution, 23(3), 213–248.
King, M. (2023). Administration of the text-based portions of a general IQ test to five different large language models [Preprint]. https://doi.org/10.36227/techrxiv.22645561.v1
Köpf, A., Kilcher, Y., von Rütte, D., Anagnostidis, S., Tam, Z.-R., Stevens, K., Barhoum,
A., Duc, N. M., Stanley, O., Nagyfi, R., & others. (2023). OpenAssistant | 2307.09042#49 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 49 | Figure 6: Scaling trends for the reward model. More data and a larger-size model generally improve accuracy, and it appears that our models have not yet saturated from learning on the training data.
The fact that helpfulness and safety performed the best on their own domain is potentially due to the tension between the two objectives (i.e., being as helpful as possible versus refusing unsafe prompts when necessary), which may confuse the reward model during training. In order for a single model to perform well on both dimensions, it needs to not only learn to select the better response given a prompt but also to distinguish adversarial prompts from safe ones. As a result, optimizing two separate models eases the reward modeling task. More detailed analysis on this tension between safety and helpfulness can be found in Appendix A.4.1. | 2307.09288#49 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 50 | A., Duc, N. M., Stanley, O., Nagyfi, R., & others. (2023). OpenAssistant
Conversations – Democratizing Large Language Model Alignment. ArXiv Preprint ArXiv:2304.07327.
Kosinski, M. (2023). Theory of Mind May Have Spontaneously Emerged in Large
Language Models (arXiv:2302.02083). arXiv. http://arxiv.org/abs/2302.02083
Lange, K., Kühn, S., & Filevich, E. (2015). "Just Another Tool for Online Studies"
(JATOS): An Easy Solution for Setup and Management of Web Servers
Supporting Online Studies. PLOS ONE, 10(6), e0130834. https://doi.org/10.1371/journal.pone.0130834
Lea, R. G., Davis, S. K., Mahoney, B., & Qualter, P. (2019). Does Emotional
Intelligence Buffer the Effects of Acute Stress? A Systematic Review. Frontiers
in Psychology, 10, 810. https://doi.org/10.3389/fpsyg.2019.00810 | 2307.09042#50 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 50 | When we group the scores by preference rating in Table 8, we can see that the accuracy is superior for the "significantly better" test set and degrades gradually as comparison pairs become more similar (e.g., "slightly better"). It is expected that learning to model human preferences becomes challenging when deciding between two similar model responses, due to annotator subjectivity and their reliance on nuanced details that may differentiate responses. We emphasize that the accuracy on more distinct responses matters the most to improve Llama 2-Chat performance. The human preference annotation agreement rate is also higher on more distinct responses than similar pairs. | 2307.09288#50 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 51 | in Psychology, 10, 810. https://doi.org/10.3389/fpsyg.2019.00810
Legree, P. J., Psotka, J., Tremble, T., & Bourne, D. R. (2005). Using Consensus Based
Measurement to Assess Emotional Intelligence.
Mayer, J. D., Caruso, D. R., & Salovey, P. (2016). The Ability Model of Emotional
Intelligence: Principles and Updates. Emotion Review, 8(4), 290–300. https://doi.org/10.1177/1754073916639667
Mayer, J. D., Perkins, D. M., Caruso, D. R., & Salovey, P. (2001). Emotional intelligence and giftedness. Roeper Review, 23(3), 131–137. https://doi.org/10.1080/02783190109554084
Mayer, J. D., & Salovey, P. (1995). Emotional intelligence and the construction and
regulation of feelings. Applied and Preventive Psychology, 4(3), 197–208.
https://doi.org/10.1016/S0962-1849(05)80058-7 | 2307.09042#51 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 51 | Scaling Trends. We study the scaling trends in terms of data and model size for the reward model, fine-tuning different model sizes on an increasing amount of the reward model data collected each week (see the details on volume per batch in Table 26). Figure 6 reports these trends, showing the expected result that larger models obtain higher performance for a similar volume of data. More importantly, the scaling performance has not yet plateaued given the existing volume of data annotation used for training, a signal that there is room for more improvement with more annotations. We note that reward model accuracy is one of the most important proxies for the final performance of Llama 2-Chat. While best practices for comprehensively evaluating a generative model are an open research question, the ranking task of the reward model has no ambiguity. Therefore, everything else being equal, an improvement of the reward model can be directly translated into an improvement for Llama 2-Chat.
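Because the reward model's quality reduces to an unambiguous pairwise ranking task, its accuracy has a one-line definition: the fraction of annotated preference pairs where the chosen response outscores the rejected one. A minimal sketch, assuming a hypothetical `score(prompt, response)` stand-in for the trained reward model (the function and toy data below are illustrative, not from the paper):

```python
def ranking_accuracy(score, pairs):
    """Fraction of preference pairs where the reward model scores the
    human-preferred response strictly above the rejected one."""
    correct = sum(
        score(prompt, chosen) > score(prompt, rejected)
        for prompt, chosen, rejected in pairs
    )
    return correct / len(pairs)

def toy_score(prompt, response):
    # Illustrative stand-in reward: prefer longer responses.
    return len(response)

pairs = [
    ("Explain RLHF.", "RLHF aligns a model with human preferences.", "idk"),
    ("Say hi.", "Hello! How can I help you today?", "hi"),
]
print(ranking_accuracy(toy_score, pairs))  # 1.0
```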
# 3.2.3 Iterative Fine-Tuning
As we received more batches of human preference data annotation, we were able to train better reward models and collect more prompts. We therefore trained successive versions of RLHF models, referred to here as RLHF-V1, . . . , RLHF-V5.
We explored RLHF fine-tuning with two main algorithms: | 2307.09288#51 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 52 | https://doi.org/10.1016/S0962-1849(05)80058-7
Mayer, J. D., Salovey, P., Caruso, D. R., & Sitarenios, G. (2001). Emotional intelligence
as a standard intelligence. Emotion, 1(3), 232–242.
https://doi.org/10.1037/1528-3542.1.3.232
Mayer, J. D., Salovey, P., Caruso, D. R., & Sitarenios, G. (2003). Measuring emotional intelligence with the MSCEIT V2.0. Emotion, 3(1), 97–105. https://doi.org/10.1037/1528-3542.3.1.97
McCleskey, J. (2014). Emotional intelligence and leadership: A review of the progress,
controversy, and criticism. International Journal of Organizational Analysis,
22(1), 76–93. https://doi.org/10.1108/IJOA-03-2012-0568
Moghaddam, S. R., & Honey, C. J. (2023). Boosting Theory-of-Mind Performance in Large Language Models via Prompting. | 2307.09042#52 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 52 | We explored RLHF fine-tuning with two main algorithms:
• Proximal Policy Optimization (PPO) (Schulman et al., 2017), the standard in RLHF literature.
• Rejection Sampling fine-tuning. We sample K outputs from the model and select the best candidate with our reward, consistent with Bai et al. (2022b). The same re-ranking strategy for LLMs was also proposed in Deng et al. (2019), where the reward is seen as an energy function. Here, we go one step further, and use the selected outputs for a gradient update. For each prompt, the sample obtaining
the highest reward score is considered the new gold standard. Similar to Scialom et al. (2020a), we then fine-tune our model on the new set of ranked samples, reinforcing the reward.
Figure 7: Max and median reward among N samples, N ∈ [1, . . . , 100], averaged over our training set of prompts. The delta between max and median can be interpreted as potential gain with Rejection Sampling.
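A schematic of one Rejection Sampling round as described above; `generate` and `reward` are hypothetical stand-ins for the current policy's sampler and the trained reward model (the paper does not publish this code), and the max-vs-median gap is the quantity Figure 7 plots:

```python
import statistics

def rejection_sampling_round(prompts, generate, reward, k=100):
    """For each prompt, draw K samples from the current policy and keep
    the highest-reward candidate as the new gold standard for an
    SFT-style fine-tuning pass."""
    selected = []
    for prompt in prompts:
        candidates = [generate(prompt) for _ in range(k)]
        scores = [reward(prompt, c) for c in candidates]
        best = candidates[scores.index(max(scores))]
        # The max-vs-median gap is the headroom Figure 7 visualizes:
        # how much better the best of K samples is than a typical one.
        gap = max(scores) - statistics.median(scores)
        selected.append((prompt, best, gap))
    return selected
```

The (prompt, best) pairs then feed a standard supervised update; that update is what "reinforcing the reward" amounts to in this scheme.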
The two RL algorithms mainly differ in: | 2307.09288#52 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 53 | Large Language Models via Prompting.
https://doi.org/10.48550/ARXIV.2304.11490
Nair, V., Schumacher, E., Tso, G., & Kannan, A. (2023). DERA: Enhancing Large
Language Model Completions with Dialog-Enabled Resolving Agents
(arXiv:2303.17071). arXiv. http://arxiv.org/abs/2303.17071
OpenAI. (2023). GPT-4 Technical Report (arXiv:2303.08774). arXiv. http://arxiv.org/abs/2303.08774 | 2307.09042#53 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |
2307.09288 | 53 | # The two RL algorithms mainly diï¬er in:
• Breadth – in Rejection Sampling, the model explores K samples for a given prompt, while only one generation is done for PPO.
• Depth – in PPO, during training at step t the sample is a function of the updated model policy from t - 1 after the gradient update of the previous step. In Rejection Sampling fine-tuning, we sample all the outputs given the initial policy of our model to collect a new dataset, before applying fine-tuning similar to SFT. However, since we applied iterative model updates, the fundamental differences between the two RL algorithms are less pronounced (see the sketch below).
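A pure-Python sketch of the breadth/depth contrast (all function arguments are hypothetical stand-ins, not calls into any RL library; `update` is assumed to return a new policy callable):

```python
def rejection_sampling_step(prompts, policy, reward, update, k=10):
    # Breadth: K samples per prompt, all drawn from a single frozen
    # policy snapshot; the policy changes only once, after the full
    # dataset of best-of-K samples is collected (an SFT-like pass).
    dataset = []
    for p in prompts:
        outputs = [policy(p) for _ in range(k)]
        dataset.append((p, max(outputs, key=lambda o: reward(p, o))))
    return update(policy, dataset)

def ppo_step(prompts, policy, reward, update):
    # Depth: one sample per prompt, and the policy used for prompt t
    # is already the one updated after prompt t - 1.
    for p in prompts:
        out = policy(p)
        policy = update(policy, [(p, out, reward(p, out))])
    return policy
```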
Until RLHF (V4), we used only Rejection Sampling fine-tuning, and after that, we combined the two sequentially, applying PPO on top of the resulting Rejection Sampling checkpoint before sampling again. | 2307.09288#53 | Llama 2: Open Foundation and Fine-Tuned Chat Models | In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs. | http://arxiv.org/pdf/2307.09288 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom | cs.CL, cs.AI | null | null | cs.CL | 20230718 | 20230719 | [
{
"id": "2006.03654"
},
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "1707.06347"
},
{
"id": "1803.05457"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2211.09085"
},
{
"id": "2304.07327"
},
{
"id": "2203.15556"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2302.09270"
},
{
"id": "2205.01068"
},
{
"id": "2301.09211"
},
{
"id": "2304.06364"
},
{
"id": "2305.17493"
},
{
"id": "2302.07842"
},
{
"id": "2301.12867"
},
{
"id": "1806.03822"
},
{
"id": "2209.07858"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "1904.09728"
},
{
"id": "1811.00937"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2305.15717"
},
{
"id": "2206.05229"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "1908.01091"
},
{
"id": "2204.05862"
},
{
"id": "2107.03451"
},
{
"id": "2112.00861"
},
{
"id": "2109.03300"
},
{
"id": "2104.10350"
},
{
"id": "2212.09689"
},
{
"id": "2006.12442"
},
{
"id": "2302.07459"
},
{
"id": "2302.08582"
},
{
"id": "2212.10560"
},
{
"id": "2112.04359"
},
{
"id": "2001.08361"
},
{
"id": "2306.05949"
},
{
"id": "1907.11692"
},
{
"id": "1705.03551"
},
{
"id": "2303.15056"
},
{
"id": "2303.17651"
},
{
"id": "2210.07700"
},
{
"id": "2109.07958"
},
{
"id": "2302.07736"
}
] |
2307.09042 | 54 | Palmer, B. R., Gignac, G., Manocha, R., & Stough, C. (2005). A psychometric evaluation of the Mayer–Salovey–Caruso Emotional Intelligence Test Version 2.0. Intelligence, 33(3), 285–305. https://doi.org/10.1016/j.intell.2004.11.003 Peng, B., Alcaide, E., Anthony, Q., Albalak, A., Arcadinho, S., Cao, H., Cheng, X., Chung, M., Grella, M., GV, K. K., He, X., Hou, H., Kazienko, P., Kocon, J., Kong, J., Koptyra, B., Lau, H., Mantri, K. S. I., Mom, F., … Zhu, R.-J. (2023). RWKV: Reinventing RNNs for the Transformer Era.
Salovey, P., & Mayer, J. D. (1990). Emotional Intelligence. Imagination, Cognition and
Personality, 9(3), 185–211. https://doi.org/10.2190/DUGG-P24E-52WK-6CDG | 2307.09042#54 | Emotional Intelligence of Large Language Models | Large Language Models (LLMs) have demonstrated remarkable abilities across
numerous disciplines, primarily assessed through tasks in language generation,
knowledge utilization, and complex reasoning. However, their alignment with
human emotions and values, which is critical for real-world applications, has
not been systematically evaluated. Here, we assessed LLMs' Emotional
Intelligence (EI), encompassing emotion recognition, interpretation, and
understanding, which is necessary for effective communication and social
interactions. Specifically, we first developed a novel psychometric assessment
focusing on Emotion Understanding (EU), a core component of EI, suitable for
both humans and LLMs. This test requires evaluating complex emotions (e.g.,
surprised, joyful, puzzled, proud) in realistic scenarios (e.g., despite
feeling underperformed, John surprisingly achieved a top score). With a
reference frame constructed from over 500 adults, we tested a variety of
mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding
89% of human participants with an EQ of 117. Interestingly, a multivariate
pattern analysis revealed that some LLMs apparently did not rely on the
human-like mechanism to achieve human-level performance, as their
representational patterns were qualitatively distinct from humans. In addition,
we discussed the impact of factors such as model size, training method, and
architecture on LLMs' EQ. In summary, our study presents one of the first
psychometric evaluations of the human-like characteristics of LLMs, which may
shed light on the future development of LLMs aiming for both high intellectual
and emotional intelligence. Project website:
https://emotional-intelligence.github.io/ | http://arxiv.org/pdf/2307.09042 | Xuena Wang, Xueting Li, Zi Yin, Yue Wu, Liu Jia | cs.AI | 36 pages, 5 figures | null | cs.AI | 20230718 | 20230728 | [
{
"id": "2302.02083"
},
{
"id": "2302.13971"
},
{
"id": "1810.04805"
},
{
"id": "2009.03300"
},
{
"id": "2305.02897"
},
{
"id": "2210.02414"
},
{
"id": "2303.18223"
},
{
"id": "2304.07327"
},
{
"id": "2303.17071"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2210.13312"
}
] |