Dataset columns: id (string, 12-15 chars); title (string, 8-162 chars); content (string, 1-17.6k chars); prechunk_id (string, 0-15 chars); postchunk_id (string, 0-15 chars); arxiv_id (string, 10 chars); references (list, length 1).
2307.14430#109
Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models
| Skill | Random | Skill-stratified | SKILL-IT |
|---|---|---|---|
| Answer Verification | 2.005±0.059 | 1.903±0.069 | 1.890±0.072 |
| Code to Text | 0.302±0.032 | 0.204±0.022 | 0.269±0.032 |
| Discourse Connective Identification | 2.529±0.046 | 2.372±0.054 | 2.393±0.056 |
| Entity Generation | 2.108±0.328 | 1.788±0.429 | 1.885±0.461 |
| Entity Relation Classification | 1.130±0.048 | 0.836±0.006 | 0.841±0.010 |
| Information Extraction | 2.032±0.013 | 1.992±0.006 | 1.933±0.013 |
| Irony Detection | 2.802±0.125 | 2.528±0.146 | 2.585±0.149 |
| Preposition Prediction | 1.095±0.040 | 0.686±0.041 | 0.774±0.029 |
| Punctuation Error Detection | 2.633±0.027 | 3.188±0.055 | 2.726±0.025 |
| Question Answering | 1.947±0.003 | 2.119±0.003 | 2.073±0.001 |
| Question Generation | 2.214±0.007 | 2.345±0.008 | 2.263±0.010 |
| Question Understanding | 1.928±0.020 | 1.837±0.031 | 1.700±0.042 |
| Sentence Expansion | 2.054±0.018 | 1.828±0.060 | 1.853±0.058 |
| Sentiment Analysis | 2.771±0.009 | 2.818±0.006 | 2.774±0.007 |
| Stance Detection | 1.814±0.151 | 1.500±0.117 | 1.628±0.149 |
| Summarization | 2.531±0.009 | 2.472±0.012 | 2.440±0.013 |
| Text Categorization | 2.289±0.016 | 2.341±0.021 | 2.231±0.022 |
| Text Matching | 1.967±0.008 | 1.913±0.005 | 1.872±0.005 |
| Text Simplification | 1.861±0.003 | 1.692±0.023 | 1.698±0.022 |
| Text to Code | 0.614±0.030 | 0.518±0.030 | 0.585±0.022 |
| Toxic Language Detection | 2.853±0.020 | 2.911±0.019 | 2.862±0.018 |
| Word Semantics | 1.999±0.023 | 1.870±0.039 | 1.902±0.024 |
| Wrong Candidate Generation | 2.187±0.028 | 2.192±0.023 | 2.140±0.020 |
| Average | 1.985±0.022 | 1.907±0.027 | 1.883±0.032 |
2307.14430#108
2307.14430#110
2307.14430
[ "2101.00027" ]
2307.14430#110
Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models
Table 10: Results when the skills graph for Natural Instructions learned on a 125M parameter model is used for data selection with a 1.3B model. We see that SKILL-IT on average still outperforms random and skill-stratified sampling, even though the edges used by SKILL-IT are not derived from the larger model.
no skill influences another skill. We refer to this approach as "No graph". Note that the opposite case of a complete graph recovers skill-stratified sampling, which we already have as a baseline. Second, instead of sampling over multiple rounds and weighting according to the loss of each skill, we study the effect of setting T = 1, which only uses a softmax on A to yield static weights on the skills. We refer to this approach as "Static". We omit results on Natural Instructions continual pre-training, since SKILL-IT uses T = 1 there and using no graph with T = 1 recovers skill-stratified sampling. Intuitively, we expect the static version of SKILL-IT to perform reasonably well unless there is a significant discrepancy among the losses (e.g., in the synthetic settings, where the loss on one skill can be close to 0 while another is not, versus Natural Instructions, where all losses decrease consistently). For both ablations, we sweep over η ∈ {0.1, 0.2, 0.5, 0.8}. Figure 26 shows the comparison between SKILL-IT and no graph on the continual pre-training LEGO experiment, and Figure 27 shows the comparison between SKILL-IT and a static approach. We see that both the graph and the online dynamics of SKILL-IT are important for its performance. In particular, using no graph results in allocating significant weight to harder skills early on, even though many of them have easier prerequisite skills (such as skill 3 having edges to skills 1 and 2). Using a static graph results in consistent allocation of significant weight to prerequisite skills even after their validation losses converge to near 0, and thus the harder skills that have higher loss are not learned quickly afterwards. We perform the same ablation on the Addition dataset; the results are shown in Figures 28 and 29.
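To make the two ablations concrete, the sketch below shows the kind of graph-aware multiplicative-weights update they modify, written from the description above; the exact loss aggregation and normalization used by SKILL-IT may differ, and the function and variable names here are our own.

```python
import numpy as np

def skill_mixture(A, val_losses, eta=0.5, logits=None):
    """One round of a SKILL-IT-style update: upweight skills whose
    (graph-propagated) validation losses remain high.

    A          : (k, k) skills adjacency matrix; np.eye(k) is the "No graph"
                 ablation, an all-ones matrix recovers skill-stratified sampling.
    val_losses : (k,) current validation loss per skill.
    """
    influence = A @ val_losses                      # propagate loss along skill edges
    logits = (np.zeros_like(influence) if logits is None else logits) + eta * influence
    p = np.exp(logits - logits.max())               # softmax over skills
    return p / p.sum(), logits

# "Static" ablation (T = 1): compute the mixture once from A and never update it.
# Online SKILL-IT: call skill_mixture each round with freshly measured val_losses.
```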
2307.14430#109
2307.14430#111
2307.14430
[ "2101.00027" ]
2307.14430#111
Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models
We find that these simple baselines, using a static graph and using no graph, perform similarly to SKILL-IT on average across all skills; SKILL-IT performs best on skill 2 compared to vanilla multiplicative weights, and best on skill 1 compared to a static graph. This suggests that Addition is somewhat easier than the other datasets we consider, though SKILL-IT still outperforms the other baselines on it, as shown in Figure 4. Figure 30 compares SKILL-IT, no graph, and static data selection for the LEGO fine-tuning experiment. No graph can be interpreted as allocating equal weight to all training skills not equal to the target skill, and varying this weight versus the weight on the target skill. While SKILL-IT and setting T = 1 behave similarly, we see that SKILL-IT is slightly better than using no graph. For instance, SKILL-IT obtains a validation loss of 0.05 in 2000 steps, compared to 2050-2200 steps when using no graph. Figures 31 and 32 compare SKILL-IT, no graph, and static data selection for the Natural Instructions fine-tuning experiments. For both Spanish QG and stance detection, SKILL-IT attains lower loss than using no graph or using T = 1 round. Figure 33 compares SKILL-IT and static data selection for the Natural Instructions out-of-domain experiment. SKILL-IT
2307.14430#110
2307.14430#112
2307.14430
[ "2101.00027" ]
2307.14430#112
Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models
[Figure 26 plots: validation loss (log scale) vs. training steps for LEGO Skills 1-5 and the average per skill, comparing No graph (η = 0.1, 0.2, 0.5, 0.8) against Skill-It.]
Figure 26: Comparison of SKILL-IT versus using the identity adjacency matrix (no skills graph) with η = 0.1, 0.2, 0.5, 0.8 on the LEGO continual pre-training experiment. The latter does not capture the relationship between skills, and we find that SKILL-IT attains lower loss on all skills.
[Figure 27 plots: validation loss (log scale) vs. training steps for LEGO Skills 1-5 and the average per skill, comparing Static (η = 0.1, 0.2, 0.5, 0.8) against Skill-It.]
Figure 27:
2307.14430#111
2307.14430#113
2307.14430
[ "2101.00027" ]
2307.14430#113
Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models
Comparison of SKILL-IT versus using static data selection (T = 1) with η = 0.1, 0.2, 0.5, 0.8 on the LEGO continual pre-training experiment. While SKILL-IT eventually allocates more weight to skills 3, 4, and 5, which have higher loss, the static approach is not able to do this. We find that SKILL-IT attains lower loss on all skills.
2307.14430#112
2307.14430#114
2307.14430
[ "2101.00027" ]
2307.14430#114
Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models
[Figure 28 plots: validation loss (log scale) vs. training steps for Addition Skills 1-3 and the average per skill, comparing No graph (η = 0.1, 0.2, 0.5, 0.8) against Skill-It.]
Figure 28: Comparison of SKILL-IT versus using the identity adjacency matrix (no skills graph) with η = 0.1, 0.2, 0.5, 0.8 on the Addition continual pre-training experiment. The latter does not capture the relationship between skills, and we find that SKILL-IT attains lower loss on skill 2, but attains similar performance to methods that do not use the skills graph.
[Figure 29 plots: validation loss (log scale) vs. training steps for Addition Skills 1-3 and the average per skill, comparing Static (η = 0.1, 0.2, 0.5, 0.8) against Skill-It.]
Figure 29: Comparison of SKILL-IT versus using static data selection (T = 1) with η = 0.1, 0.2, 0.5, 0.8 on the Addition continual pre-training experiment. We find that SKILL-IT attains lower loss on skill 1, but attains similar performance to the static methods.
2307.14430#113
2307.14430#115
2307.14430
[ "2101.00027" ]
2307.14430#115
Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models
[Figure 30 plots: performance on LEGO Skill 3 vs. training steps, comparing Skill-It against No graph (left) and Static (right) with η = 0.1, 0.2, 0.5, 0.8.]
Figure 30: Comparison of SKILL-IT versus using no graph (left) and static data selection (right) with η = 0.1, 0.2, 0.5, 0.8 on the LEGO fine-tuning experiment. All approaches have roughly the same loss trajectories, but SKILL-IT is slightly lower than using no graph.
2307.14430#114
2307.14430#116
2307.14430
[ "2101.00027" ]
2307.14430#116
Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models
[Figure 31 plots: validation loss on Spanish QG vs. training steps, comparing Skill-It against No graph (left) and Static (right) with η = 0.1, 0.2, 0.5, 0.8.]
Figure 31: Comparison of SKILL-IT versus using no graph (left) and static data selection (right) with η = 0.1, 0.2, 0.5, 0.8 on the Natural Instructions Spanish QG fine-tuning experiment. SKILL-IT attains lower validation loss than both no graph and static data selection.
[Figure 32 plots: validation loss on stance detection vs. training steps, comparing Skill-It against No graph (left) and Static (right) with η = 0.1, 0.2, 0.5, 0.8.]
Figure 32: Comparison of SKILL-IT versus using no graph (left) and static data selection (right) with η = 0.1, 0.2, 0.5, 0.8 on the Natural Instructions stance detection fine-tuning experiment. SKILL-IT attains lower validation loss than both no graph and static data selection.
2307.14430#115
2307.14430#117
2307.14430
[ "2101.00027" ]
2307.14430#117
Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models
[Figure 33 plots: validation loss vs. training steps for the 12 evaluation skills (Answerability Classification, Cause Effect Classification, Coreference Resolution, Data To Text, Dialogue Act Recognition, Grammar Error Correction, Keyword Tagging, Overlap Extraction, Question Rewriting, Textual Entailment, Title Generation, Word Analogy), comparing Skill-It against Static (η = 0.1, 0.2, 0.5, 0.8).]
Figure 33: Comparison of SKILL-IT versus using static data selection with η = 0.1, 0.2, 0.5, 0.8 on the Natural Instructions out-of-domain experiment. SKILL-IT attains the lowest validation loss on 7 out of 12 evaluation skills, and an average loss of 2.540 compared to a range of 2.541-2.551 for static data selection.
attains the lowest validation loss on 7 out of 12 evaluation skills. It has an average loss of 2.540 compared to a range of 2.541-2.551 for static data selection.
2307.14430#116
2307.14430
[ "2101.00027" ]
2307.14225#0
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
arXiv:2307.14225v1 [cs.IR] 26 Jul 2023
# Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
SCOTT SANNER∗, University of Toronto, Canada
KRISZTIAN BALOG, Google, Norway
FILIP RADLINSKI, Google, United Kingdom
BEN WEDIN, Google, United States
LUCAS DIXON, Google, France
Traditional recommender systems leverage users' item preference history to recommend novel content that users may like. However, modern dialog interfaces that allow users to express language-based preferences off
2307.14225#1
2307.14225
[ "2305.06474" ]
2307.14225#1
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
er a fundamentally different modality for preference input. Inspired by recent successes of prompting paradigms for large language models (LLMs), we study their use for making recommendations from both item-based and language-based preferences in comparison to state-of-the-art item-based collaborative filtering (CF) methods. To support this investigation, we collect a new dataset consisting of both item-based and language-based preferences elicited from users along with their ratings on a variety of (biased) recommended items and (unbiased) random items. Among numerous experimental results, we find that LLMs provide competitive recommendation performance for pure language-based preferences (no item preferences) in the near cold-start case in comparison to item-based CF methods, despite having no supervised training for this specific task (zero-shot) or only a few labels (few-shot). This is particularly promising as language-based preference representations are more explainable and scrutable than item-based or vector-based representations.
# CCS Concepts: • Information systems → Recommender systems.
Additional Key Words and Phrases: recommendation; transparency; scrutability; natural language
# ACM Reference Format:
Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, and Lucas Dixon. 2023.
2307.14225#0
2307.14225#2
2307.14225
[ "2305.06474" ]
2307.14225#2
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences. In Seventeenth ACM Conference on Recommender Systems (RecSys '23), September 18-22, 2023, Singapore, Singapore. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3604915.3608845
# 1 INTRODUCTION
The use of language in recommendation scenarios is not a novel concept. Content-based recommenders have been utilizing text associated with items, such as item descriptions and reviews, for about three decades [29]. However, recent advances in conversational recommender systems have placed language at the forefront, as a natural and intuitive means for users to express their preferences and provide feedback on the recommendations they receive [15, 24]. Most recently, the concept of natural language (NL) user profiles, where users express their preferences as NL statements, has been proposed [37]. The idea of using text-based user representations is appealing for several reasons: it provides full transparency and allows users to control the system's personalization. Further, in a (near) cold-start setting, where little to no usage data is available, providing a NL summary of preferences may enable a personalized and satisfying
2307.14225#1
2307.14225#3
2307.14225
[ "2305.06474" ]
2307.14225#3
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
∗Work done while on sabbatical at Google.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
© 2023 Copyright held by the owner/author(s). Manuscript submitted to ACM
2307.14225#2
2307.14225#4
2307.14225
[ "2305.06474" ]
2307.14225#4
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
experience for users. Yet, controlled quantitative comparisons of such NL preference descriptions against traditional item-based approaches are very limited. Thus, the main research question driving this study is the following: How effective are prompting strategies with large language models (LLMs) for recommendation from natural language-based preference descriptions in comparison to collaborative filtering methods based solely on item ratings? We address the task of language-based item recommendation by building on recent advances in LLMs and prompting-based paradigms that have led to state-of-the-art results in a variety of natural language tasks, and which permit us to exploit rich positive and negative descriptive content and item preferences in a unified framework. We contrast these novel techniques with traditional language-based approaches using information retrieval techniques [3] as well as collaborative filtering-based approaches [14, 42]. Being a novel task, there is no dataset for language-based item recommendation. As one of our main contributions, we present a data collection protocol and build a test collection that comprises natural language descriptions of preferences as well as item ratings. In doing so, we seek to answer the following research questions:
2307.14225#3
2307.14225#5
2307.14225
[ "2305.06474" ]
2307.14225#5
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
• RQ1: Are preferences expressed in natural language sufficient as a replacement for items for (especially) near cold-start recommendation, and how much does performance improve when language is combined with items?
• RQ2: How do LLM-based recommendation methods compare with item-based collaborative filtering methods?
• RQ3: Which LLM prompting style, be it completion, instructions, or few-shot prompts, performs best?
• RQ4: Does the inclusion of natural language dispreferences improve language-based recommendation?
Our main contributions are (1) We devise an experimental design that allows language-based item recommendation to be directly compared with state-of-the-art item-based recommendation approaches, and present a novel data collection protocol (Section 3); (2) We propose various prompting methods for LLMs for the task of language-based item recommendation (Section 4); (3) We experimentally compare the proposed prompt-based methods against a set of strong baselines, including both text-based and item-based approaches (Section 5). Ultimately, we observe that LLM-based recommendation from pure language-based preference descriptions provides a competitive near cold-start recommender system that is based on an explainable and scrutable language-based preference representation.
# 2 RELATED WORK
Item-Based Recommendation. Traditional recommender systems rely on item ratings. For a new user, these can be provided over time as the user interacts with the recommender, although this means initial performance is poor. Thus, preferences are often solicited with a questionnaire for new users [22, 39, 41]. There has also been work looking at other forms of item-based preferences such as relative preferences between items [10, 39], although approaches that rely on individual item ratings dominate the literature. Given a corpus of user-item ratings, very many recommendation algorithms exist. These range from methods such as item-based k-Nearest Neighbors [40], where simple similarity to existing users is exploited, to matrix factorization approaches that learn a vector representation for the user [23, 34], through to deep learning and autoencoder approaches that jointly learn user and item vector embeddings [8, 19, 28]. Interestingly, the EASE algorithm [42] is an autoencoder approach that has been found to perform on par with much more complex state-of-the-art approaches.
Natural Language in Recommendation.
2307.14225#4
2307.14225#6
2307.14225
[ "2305.06474" ]
2307.14225#6
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
Following the proposals in [2, 37] to model preferences solely in scrutable natural language, recent work has explored the use of tags as surrogates for NL descriptions with promising results [31]. This contrasts with, for instance, Hou et al. [20], who input a (sequence of) natural language item descriptions into an LLM to produce an (inscrutable) user representation for recommendation. Other recent work has sought to use rich,
2307.14225#5
2307.14225#7
2307.14225
[ "2305.06474" ]
2307.14225#7
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
descriptive natural language as the basis for recommendations. At one extreme, we have narrative-driven recommendations [4] that assume very verbose descriptions of specific contextual needs. In a similar vein, user studies of NL use in recommendation [26] identify a rich taxonomy of recommendation intents and also note that speech-based elicitation is generally more verbose and descriptive than text-based elicitation. In this work, however, we return to the proposal in [37] and assume the user provides a more general-purpose language-based description of their preferences and dispreferences for the purpose of recommendation. Recently, researchers have begun exploring the use of language models (LMs) for recommendation tasks [13]. Radlinski et al. [37] present a theoretical motivation for why LLMs may be useful for recommendations and provide an example prompt, but do not conduct any quantitative evaluation. Mysore et al. [32] generate preference narratives from ratings and reviews, using the narratives to recommend from held-out items.
2307.14225#6
2307.14225#8
2307.14225
[ "2305.06474" ]
2307.14225#8
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
Penha and Hauff [36] show that off-the-shelf pretrained BERT [12] contains both collaborative- and content-based knowledge about items to recommend. They also demonstrate that BERT outperforms information retrieval (IR) baselines for recommendation from language-based descriptions. However, they do not assess the relative performance of language- vs. item-based recommendation from LMs (for which we curate a dataset specifically for this purpose), nor does BERT's encoder-only LM easily permit doing this in a unified prompting framework that we explore here. RecoBERT [30] leverages a custom-trained LM for deriving the similarity between text-based item and description pairs, with the authors finding that this outperforms traditional IR methods. Hou et al. [21] focus on item-based recommendation, with an in-context learning (ICL) approach similar in spirit to our item-only few-shot approach. Similarly, Kang et al. [27] use an LLM to predict ratings of target items. Finally, ReXPlug [17] exploits pretrained LMs to produce explainable recommendations by generating synthetic reviews on behalf of the user. None of these works, however, explore prompting strategies in large LMs to translate actual natural language preferences into new recommendations compared directly to item-based approaches. Further, we are unaware of any datasets that capture a user's detailed preferences in natural language, and attempt to rate recommendations on unseen items. Existing datasets such as [2, 7] tend to rely on much simpler characterizations.
Prompting in Large Language Models. Large language models (LLMs) are an expanding area of research with numerous exciting applications. Beyond traditional natural language understanding tasks like summarization, relation mapping, or question answering, LLMs have also proved adept at many tasks such as generating code, generating synthetic data, and multi-lingual tasks [1, 5, 9]. How to prompt these models to generate the best results is a continuing topic of research. Early prompting approaches relied on few-shot prompting, where a small set of training input-output pairs are prepended to the actual input [6]. Through additional tuning of pre-trained models on tasks described via instructions, LLMs also achieve impressive performance in the zero-shot setting (i.e., models are given a task and inputs, without any previous training examples) [44].
2307.14225#7
2307.14225#9
2307.14225
[ "2305.06474" ]
2307.14225#9
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
Geng et al. [16] test a variety of prompting techniques with a relatively small (less than one billion parameter) LLM trained on a collection of recommendation tasks, finding promising results across multiple tasks and domains, primarily by using item ratings as input.
# 3 EXPERIMENTAL SETUP
To study the relationship between item-based and language-based preferences, and their utility for recommendation, we require a parallel corpus from the same raters providing both types of information that is maximally consistent. There is a lack of existing parallel corpora of this nature; therefore, a key contribution of our work is an experiment design that allows such consistent information to be collected.
2307.14225#8
2307.14225#10
2307.14225
[ "2305.06474" ]
2307.14225#10
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
Specifically, we designed a two-phase user study where raters were (1) asked to rate items, and to describe their preferences in natural language, then (2) recommendations generated based on both types of preferences were uniformly rated by the raters. Hence we perform our experiments
2307.14225#9
2307.14225#11
2307.14225
[ "2305.06474" ]
2307.14225#11
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
in the movie domain, which is frequently used for research since movie recommendation is familiar to numerous user study participants. A key concern in any parallel corpus of this nature is that people may say they like items with particular characteristics, but then consume and positively react to quite different items. For instance, this has been observed where people indicate aspirations (e.g., subscribe to particular podcasts) yet actually consume quite different items (e.g., listen to others) [33]. In general, it has been observed that intentions (such as intending to choose healthy food) often do not lead to actual behaviors [43]. Such disparity between corpora could lead to inaccurate prediction about the utility of particular information for recommendation tasks. As such, one of our key considerations was to maximize consistency.
2307.14225#10
2307.14225#12
2307.14225
[ "2305.06474" ]
2307.14225#12
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
# 3.1 Phase 1: Preference Elicitation
Our preference elicitation design collected natural language descriptions of rater interests both at the start and at the end of a questionnaire. Specifically, raters were first asked to write short paragraphs describing the sorts of movies they liked, as well as the sorts of movies they disliked (free-form text, minimum 150 characters). These initial liked (+) and disliked (-) self-descriptions for rater u are respectively denoted as desc^u_+ and desc^u_-. Next, raters were asked to name five example items (here, movies) that they like. This was enabled using an online query auto-completion system (similar to a modern search engine) where the rater could start typing the name of a movie and this was completed to specific (fully illustrated) movies. The auto-completion included the top 10,000 movies ranked according to the number of ratings in the MovieLens 25M dataset [18] to ensure coverage of even uncommon movies. As raters made choices, these were placed into a list which could then be modified. Each rater was then asked to repeat this process to select five examples of movies they do not like. These liked (+) and disliked (-) item selections for rater u and item selection index i ∈ {1, . . . , 5} are respectively denoted as item^{u,i}_+ and item^{u,i}_-. Finally, raters were shown the five liked movies and asked again to write the short paragraph describing the sorts of movies they liked (which we refer to as the final description). This was repeated for the fi
2307.14225#11
2307.14225#13
2307.14225
[ "2305.06474" ]
2307.14225#13
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
ve disliked movies.
# 3.2 Phase 2: Recommendation Feedback Collection
To enable a fair comparison of item-based and language-based recommendation algorithms, a second phase of our user study requested raters to assess the quality of recommendations made by a number of recommender algorithms based on the information collected in Phase 1. In particular, past work has observed that completeness of labels is important to ensure fundamentally different algorithms can be compared reliably [2, 25].
Desiderata for recommender selection: We aimed for a mix of item-based, language-based, and unbiased recommendations. Hence, we collected user feedback (had they seen it or would they see it, and a 1-5 rating in either case) on a shuffled set of 40 movies (displaying both a thumbnail and a short plot synopsis) drawn from four sample pools:
• SP-RandPop, an unbiased sample of popular items: 10 randomly selected top popular items (ranked 1-1000 in terms of number of MovieLens ratings);
• SP-RandMidPop, an unbiased sample of less popular items: 10 randomly selected less popular items (ranked 1001-5000 in terms of number of MovieLens ratings);
• SP-EASE, personalized item-based recommendations: Top-10 from the strong baseline EASE [42] collaborative filtering recommender using hyperparameter λ = 5000.0 tuned on a set of held-out pilot data from 15 users;
• SP-BM25-Fusion, personalized language-based recommendations: Top-10 from Sparse Review-based Late Fusion Retrieval that, like [3], computes the BM25 match between all item reviews in the Amazon Movie Review corpus (v2) [45] and the rater's natural language preferences (desc_+), ranking items by the maximal BM25-scoring review.
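As a rough sketch of this review-based late-fusion scoring: the snippet below assumes the rank_bm25 package, simple whitespace tokenization, and a hypothetical reviews_by_item mapping from each movie to its Amazon review texts; the paper's exact indexing and preprocessing are not specified here.

```python
from rank_bm25 import BM25Okapi

def bm25_fusion_rank(pref_text, reviews_by_item, top_k=10):
    """Rank items by the best BM25 match between any of their reviews and the
    rater's natural language preference description."""
    pairs = [(item, rev) for item, revs in reviews_by_item.items() for rev in revs]
    bm25 = BM25Okapi([rev.lower().split() for _, rev in pairs])
    scores = bm25.get_scores(pref_text.lower().split())

    best = {}                                    # item -> max score over its reviews
    for (item, _), score in zip(pairs, scores):
        best[item] = max(best.get(item, float("-inf")), score)
    return sorted(best, key=best.get, reverse=True)[:top_k]
```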
2307.14225#12
2307.14225#14
2307.14225
[ "2305.06474" ]
2307.14225#14
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
Note that SP-RandPop and SP-RandMidPop have 10 different movies for each rater, and that these are completely unbiased (as they do not leverage any user information, there can be no preference towards rating items that are more obvious recommendations, or other potential sources of bias). On the other hand, SP-EASE consists of EASE recommendations (based on the user item preferences), which we also evaluate as a recommender, so there is some bias when using this set. We thus refer to the merged set of SP-RandPop and SP-RandMidPop as an Unbiased Set in the analysis, with performance on this set being key to our conclusions.
# 3.3 Design Consequences
Importantly, to ensure a maximally fair comparison of language-based and item-based approaches, consistency of the two types of preferences was key in our data collection approach. As such, we directly crowd-sourced both types of preferences from raters in sequence, with textual descriptions collected twice, before and after self-selected item ratings. This required control means the amount of data per rater must be small. It is also a realistic amount of preference information that may be required of a recommendation recipient in a near-cold-start conversational setting. As a consequence of the manual effort required, the number of raters recruited also took into consideration the required power of the algorithmic comparison, with a key contribution being the protocol developed rather than data scale. Our approach thus contrasts with alternatives of extracting reviews or preference descriptions in bulk from online content similarly to [4, 32] (where preferences do not necessarily capture a person's interests fully) and/or relying on item preferences expressed either explicitly or implicitly over time (during which time preferences may change).
# 4 METHODS
Given our parallel language-based and item-based preferences and ratings of 40 items per rater, we compare a variety of methods to answer our research questions. We present the traditional baselines using either item- or language-based preferences, then novel LLM approaches, using items only, language only, or a combination of items and language.
# 4.1 Baselines
To leverage the item and language preferences elicited in Phase 1, we evaluate CF methods as well as a language-based baseline previously found particularly eff
2307.14225#13
2307.14225#15
2307.14225
[ "2305.06474" ]
2307.14225#15
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
ective [2, 11].1 Most baseline item-based CF methods use the default configuration in MyMediaLite [14], including MostPopular: ranking items by the number of ratings in the dataset, Item-kNN: Item-based k-Nearest Neighbours [40], WR-MF: Weighted Regularized Matrix Factorization, a regularized version of singular value decomposition [23], and BPR-SLIM: a Sparse Linear Method (SLIM) that learns a sparse weighting vector over items rated, via a regularized optimization approach [34, 38]. We also compare against our own implementation of the more recent state-of-the-art item-based EASE recommender [42]. As a language-based baseline, we compare against BM25-Fusion, described in Section 3.2. Finally, we also evaluate a random ordering of items in the rater's pool (Random) to calibrate against this uninformed baseline.
# 4.2 Prompting Methods
We experiment with a variety of prompting strategies using a variant of the PaLM model (62 billion parameters in size, trained over 1.4 trillion tokens) [9], which we denote moving forward as simply LLM. Notationally, we assume t is the specific target rater for the recommendation, whereas r denotes a generic rater. All prompts are presented in two parts: a prefix followed by a suffix, which is always the name of the item (movie) to be scored for the target user,
1 Notably Dacrema et al. [11] observe that the neural methods do not outperform these baselines.
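For reference, the snippet below is a minimal sketch of the EASE baseline's standard closed-form solution (the published form of EASE [42], not code released with this paper); X is assumed to be a binary user-item interaction matrix.

```python
import numpy as np

def ease_item_weights(X, lam=5000.0):
    """Closed-form EASE: an item-item weight matrix B with a zero diagonal.
    X   : (num_users, num_items) binary interaction matrix.
    lam : L2 regularization strength (Section 3.2 tunes lambda = 5000.0).
    """
    G = X.T @ X + lam * np.eye(X.shape[1])   # regularized Gram matrix
    P = np.linalg.inv(G)
    B = P / (-np.diag(P))                    # B[i, j] = -P[i, j] / P[j, j]
    np.fill_diagonal(B, 0.0)                 # forbid self-similarity
    return B

# Ranking: score every item for a user by projecting their history through B,
# e.g. scores = X @ ease_item_weights(X), then rank the rater's candidate pool.
```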
2307.14225#14
2307.14225#16
2307.14225
[ "2305.06474" ]
2307.14225#16
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
denoted as ⟨item^t_*⟩. The score is computed as the log likelihood of the suffix and is used to rank all candidate item recommendations.2 As such, we can evaluate the score given by the LLM to every item in our target set of 40 items collected in Phase 2 of the data collection. Given this notation, we devise Completion, Zero-shot, and Few-shot prompt templates for the case of Items only, Language only, and combined Language+Items, defined as follows:
4.2.1 Items only.
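To illustrate the suffix-scoring step, here is a minimal sketch using a small public causal LM from Hugging Face as a stand-in (the 62B PaLM variant used in the paper is not public); the helper name and the GPT-2 checkpoint are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")            # stand-in for the PaLM variant
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def suffix_logprob(prefix, suffix):
    """Log likelihood of `suffix` given `prefix`, used to rank candidate items."""
    prefix_len = tok(prefix, return_tensors="pt").input_ids.shape[1]
    full_ids = tok(prefix + suffix, return_tensors="pt").input_ids
    labels = full_ids.clone()
    labels[:, :prefix_len] = -100                       # score only the suffix tokens
    with torch.no_grad():
        out = model(full_ids, labels=labels)
    n_suffix = (labels != -100).sum()
    return -(out.loss * n_suffix).item()                # sum of suffix log-probs

# Rank the rater's 40-item pool by log likelihood of each movie name as the suffix:
# ranked = sorted(pool, key=lambda m: suffix_logprob(prompt_prefix, " " + m), reverse=True)
```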
2307.14225#15
2307.14225#17
2307.14225
[ "2305.06474" ]
2307.14225#17
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
The completion approach is analogous to that used for the P5 model [16], except that we leverage a pretrained LLM in place of a custom-trained transformer. The remaining approaches are devised in this work:
• Completion: item^{t,1}_+, item^{t,2}_+, item^{t,3}_+, item^{t,4}_+, item^{t,5}_+, ⟨item^t_*⟩
• Zero-shot: I like the following movies: item^{t,1}_+, item^{t,2}_+, item^{t,3}_+, item^{t,4}_+, item^{t,5}_+. Then I would also like ⟨item^t_*⟩
• Few-shot (k): Repeat r ∈ {1, . . . , k} {
  User Movie Preferences: item^{r,1}_+, item^{r,2}_+, item^{r,3}_+, item^{r,4}_+
  Additional User Movie Preference: item^{r,5}_+
  }
  User Movie Preferences: item^{t,1}_+, item^{t,2}_+, item^{t,3}_+, item^{t,4}_+, item^{t,5}_+
  Additional User Movie Preference: ⟨item^t_*⟩
4.2.2 Language only.
• Completion: desc^t_+ ⟨item^t_*⟩
• Zero-shot: I describe the movies I like as follows: desc^t_+. Then I would also like ⟨item^t_*⟩
• Few-shot (k): Repeat r ∈ {1, . . . , k} {
  User Description: desc^r_+
  User Movie Preferences: item^{r,1}_+, item^{r,2}_+, item^{r,3}_+, item^{r,4}_+, item^{r,5}_+
  }
  User Description: desc^t_+
  User Movie Preferences: ⟨item^t_*⟩
4.2.3 Language + item.
• Completion: desc^t_+ item^{t,1}_+,
2307.14225#16
2307.14225#18
2307.14225
[ "2305.06474" ]
2307.14225#18
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
item^{t,2}_+, item^{t,3}_+, item^{t,4}_+, item^{t,5}_+, ⟨item^t_*⟩
• Zero-shot: I describe the movies I like as follows: desc^t_+. I like the following movies: item^{t,1}_+, item^{t,2}_+, item^{t,3}_+, item^{t,4}_+, item^{t,5}_+. Then I would also like ⟨item^t_*⟩
• Few-shot (k): Repeat r ∈ {1, . . . , k} {
  User Description: desc^r_+
  User Movie Preferences: item^{r,1}_+, item^{r,2}_+, item^{r,3}_+, item^{r,4}_+
  Additional User Movie Preference: item^{r,5}_+
  }
  User Description: desc^t_+
  User Movie Preferences: item^{t,1}_+, item^{t,2}_+, item^{t,3}_+, item^{t,4}_+, item^{t,5}_+
  Additional User Movie Preference: ⟨item^t_*⟩
4.2.4 Negative Language Variants. For the zero-shot cases, we also experimented with negative language variants that inserted the sentences "I dislike the following movies: item^{t,1}_-, item^{t,2}_-, item^{t,3}_-, item^{t,4}_-, item^{t,5}_-" for Item prompts and "I describe the movies I dislike as follows: desc^t_-" for Language prompts after their positive counterparts in the prompts labeled Pos+Neg.
2 The full target string scored is the movie name followed by the end-of-string token, which mitigates a potential bias of penalizing longer movie names.
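For concreteness, the helper below sketches how the zero-shot prefixes above could be assembled as plain strings; anything beyond the wording quoted in the templates (spacing, how movie names are joined) is an assumption, and suffix_logprob refers to the scoring sketch given earlier.

```python
def zero_shot_prefix(liked_movies=None, liked_desc=None):
    """Build the zero-shot prefix for Items-only, Language-only, or
    Language+Item prompting, following the templates above."""
    parts = []
    if liked_desc:                                       # Language (+Item) variants
        parts.append(f"I describe the movies I like as follows: {liked_desc}.")
    if liked_movies:                                     # Item (+Language) variants
        parts.append("I like the following movies: " + ", ".join(liked_movies) + ".")
    return " ".join(parts) + " Then I would also like "  # suffix = candidate movie name

# Example: score two candidates for a rater with both preference types.
# prefix = zero_shot_prefix(["Toy Story", "Up"], "light-hearted animated films")
# scores = {m: suffix_logprob(prefix, m) for m in ["Coco", "Saw"]}
```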
2307.14225#17
2307.14225#19
2307.14225
[ "2305.06474" ]
2307.14225#19
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
I describe the movies I dislike as follows: descð ¡ â â for Language prompts after their positive counterparts in the prompts labeled Pos+Neg. 2The full target string scored is the movie name followed by the end-of-string token, which mitigates a potential bias of penalizing longer movie names. 6 LLMs are Competitive Near Cold-start Recommenders RecSys â 23, September 18â 22, 2023, Singapore, Singapore Table 1. Example initial self-descriptions provided by three raters. Rater #1 Liked Movies I like comedy movies because i feel happy whenever i watch them. We can watch those movies with a group of people. I like to watch comedy movies because there will be a lot of fun and entertainment. Its very exciting to watch with friends and family.so,I always watch comedy movies whenever I get time. Disliked Movies I am not at all interested in watching horror movies because whenever I feel alone it will always disturb me with the char- acters in the movie.
2307.14225#18
2307.14225#20
2307.14225
[ "2305.06474" ]
2307.14225#20
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
It will be affected by dreams and mood always. SO, mostly i ignore watching them when i stay alone in the home.
Horror is scary. I don't like the feeling of being terrified. Some are either sensitive to suspense, gore or frightful images, or they may have had an experience in their life that makes horror seem real.
I dislike action genre movies because watching fights gives me a headache and bored me. These kinds of movies mainly concentrate on violence and physical feats.
#2
Fantasy films often have an element of magic, myth, wonder, and the extraordinary. They may appeal to both children and adults, depending upon the particular film. In fantasy films, the hero often undergoes some kind of mystical experience.
2307.14225#19
2307.14225#21
2307.14225
[ "2305.06474" ]
2307.14225#21
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
I like comedy genre movies, while watching comedy movies I will feel very happy and relaxed. Comedy films are designed to make the audience laugh. It has different kinds of categories in comedy genres such as horror comedy, romantic comedy, comedy thriller, musical-comedy.
#3
Table 2. Baseline rating statistics for items in the fully labeled pools of items across all raters.

| Sample Pool | Movies Per Rater | Fraction Seen | Average Rating (Seen Movies) | Average Rating (Unseen Movies) |
|---|---|---|---|---|
| SP-RandPop | 10 | 22% | 4.21 | 2.93 |
| SP-RandMidPop | 10 | 16% | 4.00 | 2.85 |
| SP-EASE | 10 | 46% | 4.51 | 3.16 |
| SP-BM25-Fusion | 10 | 24% | 4.38 | 3.11 |
| SP-Full | 40 | 27% | 4.29 | 3.00 |

# 5 RESULTS
# 5.1 Data Analysis
2307.14225#20
2307.14225#22
2307.14225
[ "2305.06474" ]
2307.14225#22
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
We now briefly analyze the data collected from 153 raters as part of the preference elicitation and rating process.3 The raters took a median of 67 seconds to write their initial descriptions summarizing what they like, and 38 seconds for their dislikes (median lengths: 241 and 223 characters, respectively). Providing five liked and disliked items took a median of 174 and 175 seconds, respectively. Following this, writing final descriptions of likes and dislikes took a median of 152 and 161 seconds, respectively (median lengths: 205 and 207 characters, respectively). We observe that the initial descriptions were produced 3 to 4 times faster than providing 5 example items, in around one minute. As we will see below, this difference in eff
2307.14225#21
2307.14225#23
2307.14225
[ "2305.06474" ]
2307.14225#23
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
ort is particularly pertinent as item-based and description-based recommendation are comparable in performance. A sample of initial descriptions is shown in Table 1. Next, we analyze the ratings collected for the movies from the four pools described in Section 3. From Table 2, we observe: (1) The EASE recommender nearly doubles the rate of recommendations that have already been seen by the rater, which reflects the supervised data on which it is trained, where raters only rate what they have seen; (2) There is an inherent positive bias to provide high ratings for movies the rater has already seen, as evidenced by the average 4.29 rating in this case; (3) In contrast, the average rating drops to a neutral 3.00 for unseen items.
# 5.2 Recommended Items
Our main experimental results are shown in Table 3, using NDCG@10 with exponential gain (a gain of 0 for ratings r < 3 and a gain of 2^(r-3) otherwise). We compare the mean performance of various methods using item- and/or language-based preferences (as described in Section 3.1) ranking four different pool-based subsets of the 40 fully judged
3 We recruited 160 raters, but discard those (5) that did not complete both phases of the data collection and those (2) who provided uniform ratings on all item recommendations in Phase 2.
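A small sketch of this metric as just described, assuming a gain of 2^(r-3) for ratings of 3 and above, standard log2 position discounting, and the usual ideal-ranking normalizer; ties and truncation details are assumptions.

```python
import numpy as np

def ndcg_at_10(ranked_ratings):
    """NDCG@10 with exponential gain: 0 for ratings below 3, 2**(r - 3) otherwise.
    `ranked_ratings` lists the 1-5 ratings in the order the algorithm ranked the items."""
    gain = lambda r: 0.0 if r < 3 else 2.0 ** (r - 3)
    dcg = lambda rs: sum(gain(r) / np.log2(i + 2) for i, r in enumerate(rs[:10]))
    ideal = dcg(sorted(ranked_ratings, reverse=True))
    return dcg(ranked_ratings) / ideal if ideal > 0 else 0.0

# A ranking that surfaces the 5-star item first scores higher than one that buries it:
# ndcg_at_10([5, 2, 3, 1, 4]) > ndcg_at_10([1, 2, 3, 4, 5])
```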
2307.14225#22
2307.14225#24
2307.14225
[ "2305.06474" ]
2307.14225#24
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
Table 3. Main experimental results comparing mean NDCG@10 (± 95% standard error) over raters for all recommendation methods. In each case, the fully judged rater-specific evaluation set is ranked by the given recommendation algorithms. Mean evaluation set sizes are in the first row. Note that performance on the Unseen item set is most important in a practical recommendation setting.
2307.14225#23
2307.14225#25
2307.14225
[ "2305.06474" ]
2307.14225#25
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
| Evaluation Set | Full Set (SP-Full) | Unbiased Set (SP-Rand{Pop,MidPop}) | Items that are Seen | Items that are Unseen |
|---|---|---|---|---|
| Mean evaluation set size | 40 | 20 | 10.8 | 29.2 |
| Random Baseline | 0.532 ± 0.034 | 0.511 ± 0.038 | 0.876 ± 0.023 | 0.504 ± 0.032 |
| Popularity Baseline | 0.624 ± 0.029 | 0.534 ± 0.036 | 0.894 ± 0.020 | 0.595 ± 0.032 |
| (Item) EASE | 0.592 ± 0.030 | 0.559 ± 0.039 | 0.899 ± 0.023 | 0.673 ± 0.038 |
| (Item) WRMF | 0.644 ± 0.029 | 0.573 ± 0.037 | 0.897 ± 0.021 | 0.644 ± 0.036 |
| (Item) BPR-SLIM | 0.617 ± 0.029 | 0.577 ± 0.037 | 0.902 ± 0.021 | 0.672 ± 0.037 |
| (Item) KNN Item | 0.610 ± 0.028 | 0.565 ± 0.037 | 0.889 ± 0.024 | 0.646 ± 0.038 |
| (Language) BM25-Fusion | 0.623 ± 0.027 | 0.542 ± 0.036 | 0.868 ± 0.023 | 0.519 ± 0.032 |
| LLM Item Completion | 0.610 ± 0.027 | 0.563 ± 0.037 | 0.889 ± 0.022 | 0.649 ± 0.037 |
| LLM Item Zero-shot | 0.631 ± 0.028 | 0.571 ± 0.037 | 0.895 ± 0.023 | 0.659 ± 0.037 |
| LLM Item Few-shot (3) | 0.636 ± 0.027 | 0.572 ± 0.037 | 0.897 ± 0.022 | 0.664 ± 0.038 |
| LLM Language Completion | 0.617 ± 0.029 | 0.559 ± 0.035 | 0.889 ± 0.023 | 0.617 ± 0.032 |
| LLM Language Zero-shot | 0.626 ± 0.027 | 0.563 ± 0.034 | 0.885 ± 0.024 | 0.612 ± 0.034 |
| LLM Language Few-shot (3) | 0.650 ± 0.026 | 0.571 ± 0.038 | 0.891 ± 0.022 | 0.640 ± 0.036 |
| LLM Item+Language Completion | 0.639 ± 0.027 | 0.568 ± 0.037 | 0.893 ± 0.022 | 0.654 ± 0.037 |
| LLM Item+Language Zero-shot | 0.634 ± 0.028 | 0.582 ± 0.037 | 0.897 ± 0.023 | 0.660 ± 0.038 |
| LLM Item+Language Few-shot (3) | 0.640 ± 0.028 | 0.570 ± 0.037 | 0.899 ± 0.022 | 0.663 ± 0.038 |
| LLM Item Zero-shot Pos+Neg | 0.629 ± 0.027 | 0.569 ± 0.038 | 0.892 ± 0.023 | 0.647 ± 0.037 |
| LLM Language Zero-shot Pos+Neg | 0.626 ± 0.027 | 0.563 ± 0.034 | 0.885 ± 0.024 | 0.612 ± 0.034 |
| LLM Item+Language Zero-shot Pos+Neg | 0.626 ± 0.028 | 0.577 ± 0.037 | 0.897 ± 0.023 | 0.662 ± 0.037 |
2307.14225#24
2307.14225#26
2307.14225
[ "2305.06474" ]
2307.14225#26
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
test recommendation items (as described in Section 3.2), recalling that the pool for each rater is personalized to that rater. The language-based results use only the initial natural language descriptions, which raters produced much faster than either liked and disliked item choices or final descriptions, yet they yield equal performance to final descriptions. We begin with general observations. First, we note the range of NDCG@10 scores within each subset of items is substantially different, due to both the NDCG normalizer that generally increases with a larger evaluation set size, as well as the average rating of each pool. On the latter note, we previously observed that the subset of Seen recommendations in Table 2 has the smallest pool of items and a high positive rating bias that makes it hard to differentiate recommenders on this subset. However, and as also recently argued in [35], in a recommendation setting where an item is typically only consumed once (such as movies), we are much more concerned about recommendation performance on the Unseen subset vs. the Seen subset. Similarly, we are also concerned with performance on the Unbiased set since this subset explores a wide range of popularity and is not biased towards item-based collaborative filtering (CF) methods. To address our original research questions from Section 1:
RQ1: Can language-based preferences replace or improve on item-based preferences? An initial affirmative answer comes from observing that the LLM Language Few-shot (3) method is competitive with most of the traditional item-based CF methods in this near cold-start setting, which is important since, as observed in Section 5.1, language-based preferences took less time to elicit than item-based preferences; furthermore, language-based preferences are
2307.14225#25
2307.14225#27
2307.14225
[ "2305.06474" ]
2307.14225#27
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
transparent and scrutable [37]. However, there seems to be little benefit to combining language- and item-based preferences as the Item+Language LLM methods do not appear to provide a boost in performance.
RQ2: LLM-based methods vs. CF? RQ1 has already established that LLM-based methods are generally competitive with item-based CF methods for the Language variants of the LLMs. However, it should also be noted that in many cases the LLM-based methods can even perform comparatively well to CF methods with only Item-based preferences (i.e., the names of the preferred movies). A critical and surprising result here is that a pretrained LLM makes a competitive recommender without the large amounts of supervised data used to train CF methods.
RQ3: Best prompting methodology? The Few-shot (3) prompting method generally outperforms Zero-shot and Completion prompting methods.
2307.14225#26
2307.14225#28
2307.14225
[ "2305.06474" ]
2307.14225#28
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
The difference between Zero-shot and Completion prompting is less pronounced. While not shown due to space constraints, increasing the number of Few-shot examples did not improve performance.
RQ4: Does inclusion of dispreferences help? In the bottom three rows of Table 3, we show the impact of including negative item or language preferences for LLM-based recommenders. There are no meaningful improvements from including both positive and negative preferences (Pos+Neg) over only positive preferences in these LLM configurations. While not shown due to space constraints, omitting positive preferences and using only negative preferences yields performance at or below the popularity baseline.
# 6 ETHICAL CONSIDERATIONS
2307.14225#27
2307.14225#29
2307.14225
[ "2305.06474" ]
2307.14225#29
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
We briefly consider potential ethical considerations. First, it is important to consider biases in the items recommended. For instance, it would be valuable to study how to measure whether language-driven recommenders exhibit more or less unintended bias than classic recommenders, such as perhaps preferring certain classes of items over others. Our task was constructed as ranking a fixed corpus of items. As such, all items were considered and scored by the model. Overall performance numbers would have suffered had there been a strong bias, although given the size of our experiments, the existence of bias cannot be ruled out. Larger scale studies would be needed to bound any possible biases present. Additionally, our conclusions are based on the preferences of a relatively small pool of 153 raters. The small scale and restriction to English-only preferences means we cannot assess whether the same results would be obtained in other languages or cultures. Finally, we note that the preference data was provided by paid contractors. They received their standard contracted wage, which is above the living wage in their country of employment.
# 7 CONCLUSION
2307.14225#28
2307.14225#30
2307.14225
[ "2305.06474" ]
2307.14225#30
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
In this paper, we collected a dataset containing both item-based and language-based preferences for raters along with their ratings of an independent set of item recommendations. Leveraging a variety of prompting strategies in large language models (LLMs), this dataset allowed us to fairly and quantitatively compare the efficacy of recommendation from pure item- or language-based preferences as well as their combination. In our experimental results, we find that zero-shot and few-shot strategies in LLMs provide remarkably competitive recommendation performance for pure language-based preferences (no item preferences) in the near cold-start case in comparison to item-based collaborative filtering methods. In particular, despite being general-purpose, LLMs perform competitively with fully supervised item-based CF methods when leveraging either item-based or language-based preferences. Finally, we observe that this LLM-based recommendation approach provides a competitive near cold-start recommender system based on an
2307.14225#29
2307.14225#31
2307.14225
[ "2305.06474" ]
2307.14225#31
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
explainable and scrutable language-based preference representation, thus providing a path forward for effective and novel LLM-based recommenders using language-based preferences.
# REFERENCES
[1] Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, and Charles Sutton. 2021. Program Synthesis with Large Language Models. arXiv:2108.07732 [cs.PL]
[2] Krisztian Balog, Filip Radlinski, and Shushan Arakelyan. 2019. Transparent, Scrutable and Explainable User Models for Personalized Recommendation. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '19). 265-274.
2307.13779#0
Is GPT a Computational Model of Emotion? Detailed Analysis
# Is GPT a Computational Model of Emotion? Detailed Analysis

Ala N. Tak and Jonathan Gratch
Institute for Creative Technologies, University of Southern California, Playa Vista, CA 90094, USA.
[email protected], [email protected]

# Contents

1.1 Original prompts
1.2 Emotion derivation
1.3 Affect derivation
2.1 Original prompts
2.2 Prompt engineering
2.3 Alternative framing
2.4 Prompt structures
2.5 Additional data and graphs
2.6 Affect derivation

Abstract

This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective [1].

# 1. Study 1

# 1.1 Original prompts

GPT is sensitive to minor variations in prompt design [2]. To mitigate this, we adopt the strategy of Binz and Schulz to evaluate GPT's cognitive reasoning capabilities [3]. We prompt the model (without any fine-tuning) with the exact question pattern used for human respondents in a psychological experiment, appending only the least required additional text to enable the model to produce uniform answers, like responding to Likert scales. Figure SM.1 is the exact prompt given to GPT in Study 1. Each prompt is provided in a separate conversation, and the text in red is replaced with each story. This was run once for each story.
2307.13779#1
2307.13779
[ "2302.08399" ]
2307.13779#1
Is GPT a Computational Model of Emotion? Detailed Analysis
The model occasionally returned additional explanatory text along with each response which was ignored for analysis. The model always returned four emotion labels. The appraisal items are verbatim from Smith and Lazarus 1990 [4]. Imagine the following hypothetical situation and answer the following questions. This is the situation: â My grandfather passed away a few weeks prior owing to health complications. Processing the event in the absence of family was a tall task. The fact that he was doing well a few months back when I was leaving for the USA and the sudden demise were things which compounded the impact of the event. It took a few weeks for me to return to state of normalcy, process the entire event and accepting it.â
2307.13779#0
2307.13779#2
2307.13779
[ "2302.08399" ]
2307.13779#2
Is GPT a Computational Model of Emotion? Detailed Analysis
Please rate the situation along the following criteria from 1 to 9. Only provide a number.
1. Relevance: How important was what was happening in the situation to you? (1: not at all … 9: extremely)
2. Congruence: Think about what you did and did not want in this situation. How consistent was the situation with what you wanted? (1: not at all … 9: extremely)
3. Self-accountability: To what extent did you consider YOURSELF responsible for the situation? (1: not at all
2307.13779#1
2307.13779#3
2307.13779
[ "2302.08399" ]
2307.13779#3
Is GPT a Computational Model of Emotion? Detailed Analysis
… 9: extremely)
4. Other-accountability: To what extent did you consider SOMEONE ELSE responsible for the situation? (1: not at all … 9: extremely)
5. Future-expectancy: Think about how you wanted this situation to turn out. How consistent with these wishes did you expect the situation to become (or stay)? (1: not at all … 9: extremely)
6. Problem-focused coping: Think about what you did and didn't want in this situation. How certain were you that you would be able to influence things to make (or keep) the situation the way you wanted it? (1: certainly WILL not be able … certainly WILL be able)
7. Accommodative-focused coping: How certain were you that you would be able to deal emotionally with what was happening in this situation? (1: not able to cope … 9: completely able to cope)
8. Finally, please list at most four emotions someone in this situation is likely to feel.

# Figure SM.1: Prompt used in Study 1.

# 1.2 Emotion derivation

Human participants offered from one to eight emotional labels for their stories (M=2.31, SD=1.39). GPT-3.5 and GPT-4 always returned four labels. We explored two general approaches for comparing these labels. First, as reported in the paper [5], we converted labels into valence, arousal, and dominance scores. The results in the paper use a dictionary-based method as people reported very common emotion terms like joy, anger, or disappointment. We also complement this with an embedding approach summarized here. Second,
2307.13779#2
2307.13779#4
2307.13779
[ "2302.08399" ]
2307.13779#4
Is GPT a Computational Model of Emotion? Detailed Analysis
we examined if one of the words output by GPT was an exact match for one of the words provided by the participant, where different grammatical forms of the identical word were considered a match (e.g., angry matches anger, but fear does not match scared). Interestingly, the first word reported by GPT was the best match, suggesting that the first word provided by the model is its best guess. The dictionary results are reported in the paper. Here we report the embedding and word-match results. 1.2.1 Embedding results We approach this problem using word embeddings, such as those provided by Word2Vec, combined with distance/similarity metrics, such as cosine similarity. Word embeddings represent words in a multi-dimen- sional space and are generated in such a way that similar words are close to each other in this space. We first take each pair of emotion labels, calculate their word vectors (using Word2Vec [6]), and then measure the cosine similarity between the vectors. Our analysis reveals an average general similarity of approxi- mately 0.66 and 0.50 across all comparisons using GPT-3.5 and GPT-4 output, respectively, indicating moderate-to-strong similarity. This approach assumes that similar word embeddings would have similar emotional content, which is a simplification. Word embeddings capture many facets of a wordâ s meaning, which includes but is not limited to its emotional content. As a result, while the cosine similarity of word embeddings can serve as a rough proxy for emotional similarity, it will not fully capture the valence and arousal dimensions.
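As a concrete illustration of the embedding comparison described above, the following Python sketch (not the authors' released code; the label lists and the choice of the publicly available "word2vec-google-news-300" vectors are assumptions) computes the average pairwise cosine similarity between human- and GPT-provided emotion labels.

```python
import itertools
import gensim.downloader as api

wv = api.load("word2vec-google-news-300")  # pretrained Word2Vec vectors (large download)

human_labels = ["sadness", "grief", "loneliness"]           # hypothetical participant labels
gpt_labels = ["grief", "sadness", "shock", "helplessness"]  # hypothetical GPT labels

# All human/GPT label pairs whose words are in the embedding vocabulary.
pairs = [(h, g) for h, g in itertools.product(human_labels, gpt_labels) if h in wv and g in wv]
sims = [wv.similarity(h, g) for h, g in pairs]              # cosine similarity per pair
if sims:
    print(f"mean pairwise cosine similarity: {sum(sims) / len(sims):.3f}")
```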
2307.13779#3
2307.13779#5
2307.13779
[ "2302.08399" ]
2307.13779#5
Is GPT a Computational Model of Emotion? Detailed Analysis
To discover certain "directions" in the word embedding space that seem to correspond to particular semantic differences (i.e., emotional content), we projected word vectors onto the "VAD" dimension in Word2Vec and compared the labels in terms of this projection. However, Word2Vec does not inherently have an interpretable VAD dimension. Thus, we identified pairs of words that differ mainly in terms of V (or A, D) and subtracted their vectors to find the difference vectors. We average these difference vectors to find a vector that roughly points in the "V" (or A, D) direction in the word embedding space. Finally, we computed the correlation between the projections of GPT and human labels to the generated VAD directions, which is presented in Table SM.1.

Table SM.1: Correlation with human-reported emotion

| Models  | Valence                | Arousal                | Dominance            |
|---------|------------------------|------------------------|----------------------|
| GPT-3.5 | r = 0.793, p < .001*** | r = 0.690, p < .001*** | r = 0.337, p = .044  |
| GPT-4   | r = 0.779, p < .001*** | r = 0.532, p < .001*** | r = 0.026, p = .881  |

It should be noted that this method assumes that the difference vectors capture the semantic difference between words as intended, which is not always true. Also, we assume that the "V"
2307.13779#4
2307.13779#6
2307.13779
[ "2302.08399" ]
2307.13779#6
Is GPT a Computational Model of Emotion? Detailed Analysis
(or A, D) dimension is orthogonal to the other dimensions in the word embedding space, which may not be the case. Lastly, the choice of word pairs can greatly affect the resulting VAD vectors.

1.2.2 Word-match results

Table SM.2 lists how often a GPT-provided label matches one of the human-provided emotion labels. This is broken out by the order of words produced by the model. For example, the first label provided by GPT-3.5 matched one of the human-provided labels for a given story 42.9% of the time. The second label only matched 34.3% of the time, and so forth. Overall, at least one of the labels matched at least one of the human responses 80% of the time. GPT-4 was slightly less accurate than GPT-3.5 on this metric, but this difference failed to reach significance: χ²(1, N = 35) = 0.8, p = .771.

Table SM.2: Position of GPT-reported label

| Model   | First | Second | Third | Fourth | Any   |
|---------|-------|--------|-------|--------|-------|
| GPT-3.5 | 0.429 | 0.343  | 0.257 | 0.171  | 0.800 |
| GPT-4   | 0.371 | 0.343  | 0.314 | 0.114  | 0.771 |

# 1.3 Affect derivation

Appraisal derivation considers which appraisals predict specific emotions. As people reported multiple emotion labels, we predict the average valence, arousal, and dominance scores associated with each story. Thus, we performed backward linear regression separately to predict average valence, average arousal, and average dominance. This is first performed on human data and then on model data. Figure 5 illustrates the results for GPT-4. Figure SM.2 shows the results for GPT-3.5.
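The backward regression described above can be sketched as follows; this is a hedged illustration rather than the authors' code, and the appraisal column names and the 0.05 retention threshold are assumptions.

```python
import pandas as pd
import statsmodels.api as sm

def backward_eliminate(X: pd.DataFrame, y: pd.Series, alpha: float = 0.05):
    """Repeatedly drop the least significant predictor, refitting after each removal."""
    cols = list(X.columns)
    model = sm.OLS(y, sm.add_constant(X[cols])).fit()
    while len(cols) > 1:
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= alpha:        # every remaining predictor is significant
            break
        cols.remove(worst)
        model = sm.OLS(y, sm.add_constant(X[cols])).fit()
    return model

# Assumed layout: one row per story, appraisal ratings plus mean valence/arousal/dominance.
# appraisals = ["relevance", "congruence", "self_acc", "other_acc",
#               "future_expectancy", "problem_coping", "accommodative_coping"]
# fit = backward_eliminate(stories[appraisals], stories["valence_mean"])
# print(fit.summary())
```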
2307.13779#5
2307.13779#7
2307.13779
[ "2302.08399" ]
2307.13779#7
Is GPT a Computational Model of Emotion? Detailed Analysis
Appraisal theory claims the valence of responses is dictated by if the situation is goal-congruent. This is indeed the association found in the human data, but GPT-3 primarily associates valence with future-expectancy (which refers to if the situation unfolded as expected). Through post hoc analysis, this seems to arise due to collinearities between GPT-3's interpretation of goal-congruence and future expectancy that are less present in human ratings. Appraisal theory claims arousal should largely be determined by the relevance of the event to the individual (e.g., a threat to a very important goal would be more relevant than a threat to a minor goal). This is indeed the association found in the human data, but GPT associates arousal with other-accountability, though it should be noted that both associations are weak. Finally, appraisal theory claims dominance should be associated with perceptions of control (positively associated with problem-focused coping and negatively associated with emotion-focused coping). Neither of these associations was found in either model. Self-reported dominance was associated with goal-congruence, which makes some sense as people are presumably more in control in positive situations. GPT-3 associates dominance with future expectancy, likely for the same reasons it uses this feature for valence.

[Figure (caption follows below): path diagrams linking appraisals such as problem-focused and emotion-focused coping to self-reported emotion (left) and GPT-3.5-predicted emotion (right); legible fit statistics include Valence R² = .793, p < .001 and Dominance R² = .732 and .493, p < .001.]
2307.13779#6
2307.13779#8
2307.13779
[ "2302.08399" ]
2307.13779#8
Is GPT a Computational Model of Emotion? Detailed Analysis
Figure SM.2: Appraisal derivation derived from human data (left of figure) and GPT3.5 (right). 4 # 2. Study 2 # 2.1 Original prompts 2.1.1 Prompt In this section, we provide a complete report of the prompts queried for each state in the game (Lost, Losing, Start, Winning, Won) and three utility levels (Low, Medium, and High) using the coin flip game adapted from [7]. For utility in (â a little bitâ , â a medium amountâ , â a lotâ ): For game_state in (â Imagine you have lost the game.â , â Imagine you have obtained zero heads and your opponent has two heads. It is your turn.â , â Imagine the game is just about to start. It is your turn.â , â
2307.13779#7
2307.13779#9
2307.13779
[ "2302.08399" ]
2307.13779#9
Is GPT a Computational Model of Emotion? Detailed Analysis
Imagine you have obtained two heads and your opponent has zero heads. It is your turn.â , â Imagine you have won the game.â ) â Imagine you are playing a simple game with someone. â â You take turns flipping a coin. The first person to get four heads wins. â If lost or won: "{game_state} Imagine that you were playing for {utility} of money. " # Else: "{game_state} Imagine that you are playing for {utility} of money. " â
2307.13779#8
2307.13779#10
2307.13779
[ "2302.08399" ]
2307.13779#10
Is GPT a Computational Model of Emotion? Detailed Analysis
Answer the following questions by providing only a single number on a scale from 1 to 100, where 1 means not at all and 100 means a lot: 1. How important to you is it that you win? 2. How likely is it that you win? 3. How much control do you have over winning? 4. How much do you feel hope? 5. How much do you feel fear? 6. How much do you feel joy? 7. How much do you feel sadness? 8. How much do you feel anger?
2307.13779#9
2307.13779#11
2307.13779
[ "2302.08399" ]
2307.13779#11
Is GPT a Computational Model of Emotion? Detailed Analysis
â â Please do not respond anything else other than the answers to the 8 questions above. â â Please put the answer in the following JSON format and make all data types to be string and use all lowercase. It is very important. â â {â 1â :â â , â 2â :â ", "3": "", "4": "", "5": "", "6": "", "7": "", "8": ""} ' 2.1.1 Results Figure SM.3 demonstrates emotion intensity from human self-report compared with GPT in response to different states of the coin-flip game. Intensity is on the y-axis, whereas reported probability of winning the game is reported on the x-axis. GPT graphs show 95% confidence intervals of the mean. Based on the two-way ANOVA conducted on the four dependent variables (hope, fear, joy, and sadness), the main effects of relevance and game state, as well as the interaction effect between relevance and game state, as well as partial eta squared (η²) values, 95% confidence interval (CI), are summarized in Table SM.3.
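For readers who want to reproduce the querying loop of Section 2.1.1, the sketch below shows one possible implementation; it is not the authors' code, and query_model() stands in for whatever chat-completion call is actually used.

```python
UTILITIES = ["a little bit", "a medium amount", "a lot"]
GAME_STATES = {
    "lost": "Imagine you have lost the game.",
    "losing": "Imagine you have obtained zero heads and your opponent has two heads. It is your turn.",
    "start": "Imagine the game is just about to start. It is your turn.",
    "winning": "Imagine you have obtained two heads and your opponent has zero heads. It is your turn.",
    "won": "Imagine you have won the game.",
}
QUESTIONS = (
    "Answer the following questions by providing only a single number on a scale from 1 to 100, "
    "where 1 means not at all and 100 means a lot: 1. How important to you is it that you win? "
    "2. How likely is it that you win? 3. How much control do you have over winning? "
    "4. How much do you feel hope? 5. How much do you feel fear? 6. How much do you feel joy? "
    "7. How much do you feel sadness? 8. How much do you feel anger? "
)

def build_prompt(state_key: str, utility: str) -> str:
    # Past tense for finished games, matching the wording in the listing above.
    tense = "were" if state_key in ("lost", "won") else "are"
    return (
        "Imagine you are playing a simple game with someone. "
        "You take turns flipping a coin. The first person to get four heads wins. "
        f"{GAME_STATES[state_key]} Imagine that you {tense} playing for {utility} of money. "
        + QUESTIONS
        + 'Please put the answer in the following JSON format: '
          '{"1": "", "2": "", "3": "", "4": "", "5": "", "6": "", "7": "", "8": ""}'
    )

prompts = [build_prompt(s, u) for u in UTILITIES for s in GAME_STATES]
# responses = [query_model(p) for p in prompts]  # repeated per condition to estimate variability
```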
2307.13779#10
2307.13779#12
2307.13779
[ "2302.08399" ]
2307.13779#12
Is GPT a Computational Model of Emotion? Detailed Analysis
[Figure SM.3: Intensity derivation results (corresponding to Fig. 8 in the paper). Panels show hope, fear, joy, and sadness intensity (y-axis) against reported probability of winning (x-axis) for low-, medium-, and high-utility conditions, comparing human self-report with GPT-3.5 and GPT-4; the GPT panels show 95% confidence intervals of the mean.]
2307.13779#11
2307.13779#13
2307.13779
[ "2302.08399" ]
2307.13779#13
Is GPT a Computational Model of Emotion? Detailed Analysis
Table SM.3: Impact of game state and goal-relevance for each emotion

| Model   | Emotion | Goal-relevance | Game State | Interaction Effect |
|---------|---------|----------------|------------|--------------------|
| GPT-3.5 | Hope    | F(2, 1485) = 2.15, p = 0.117, η² = .003  | F(4, 1485) = 579.34, p < .001***, η² = .61  | F(8, 1485) = 15.49, p < .001***, η² = .08  |
| GPT-3.5 | Fear    | F(2, 1485) = 62.44, p < .001***, η² = .08 | F(4, 1485) = 645.67, p < .001***, η² = .63  | F(8, 1485) = 21.81, p < .001***, η² = .11  |
| GPT-3.5 | Joy     | F(2, 1485) = 5.98, p = .002***, η² = .008 | F(4, 1485) = 2409.07, p < .001***, η² = .87 | F(8, 1485) = 6.34, p < .001***, η² = .03   |
| GPT-3.5 | Sadness | F(2, 1485) = 30.27, p < .001***, η² = .04 | F(4, 1485) = 691.91, p < .001***, η² = .65  | F(8, 1485) = 19.25, p < .001***, η² = .09  |
| GPT-4   | Hope    | F(2, 1485) = 173.0, p < .001***, η² = .19 | F(4, 1485) = 2035.9, p < .001***, η² = .85  | F(8, 1485) = 135.6, p < .001***, η² = .42  |
| GPT-4   | Fear    | F(2, 1485) = 2241.8, p < .001***, η² = .75 | F(4, 1485) = 490.0, p < .001***, η² = .57  | F(8, 1485) = 143.2, p < .001***, η² = .44  |
| GPT-4   | Joy     | F(2, 1485) = 39.67, p < .001***, η² = .05 | F(4, 1485) = 8182.93, p < .001***, η² = .96 | F(8, 1485) = 136.81, p < .001***, η² = .42 |
| GPT-4   | Sadness | F(2, 1485) = 364, p < .001***, η² = .33   | F(4, 1485) = 3001, p < .001***, η² = .89    | F(8, 1485) = 369, p < .001***, η² = .67    |
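A minimal sketch of how such a 3 x 5 ANOVA with partial eta squared can be computed (using statsmodels; the DataFrame columns `relevance`, `game_state`, and the emotion names are assumptions, not the authors' variable names):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

def two_way_anova(df: pd.DataFrame, dv: str) -> pd.DataFrame:
    model = ols(f"{dv} ~ C(relevance) * C(game_state)", data=df).fit()
    table = sm.stats.anova_lm(model, typ=2)
    resid_ss = table.loc["Residual", "sum_sq"]
    # partial eta squared: SS_effect / (SS_effect + SS_residual)
    table["partial_eta_sq"] = table["sum_sq"] / (table["sum_sq"] + resid_ss)
    table.loc["Residual", "partial_eta_sq"] = float("nan")
    return table

# for emotion in ["hope", "fear", "joy", "sadness"]:
#     print(emotion, two_way_anova(gpt_responses, emotion), sep="\n")
```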
2307.13779#12
2307.13779#14
2307.13779
[ "2302.08399" ]
2307.13779#14
Is GPT a Computational Model of Emotion? Detailed Analysis
Figure SM.4 illustrates emotional distancing/engagement from the goal of winning as a function of the game state. The left shows human self-report, and the middle and right are predictions from GPT models. Both models fail to predict engagement.

[Figure SM.4: Consequence derivation results (corresponding to Fig. 9 in the paper). Bar charts of change in importance by game state (Lost, Losing, Winning, Won) across relevance levels, for human self-report, GPT-3.5, and GPT-4.]

ANOVA results show that there are significant main effects of relevance and game state, as well as a significant interaction effect between them on importance. Table SM.4 provides a summary of the results.

Table SM.4 (Table 4 in the paper): Impact of game state and relevance on importance of winning

| Model   | Effect             | F value  | p           | η² (partial) |
|---------|--------------------|----------|-------------|--------------|
| GPT-3.5 | Goal-relevance     | 41.73    | p < .001*** | 0.05         |
| GPT-3.5 | Game State         | 59.55    | p < .001*** | 0.14         |
| GPT-3.5 | Interaction Effect | 9.85     | p < .001*** | 0.05         |
| GPT-4   | Goal-relevance     | 78091.57 | p < .001*** | 0.99         |
| GPT-4   | Game State         | 17.05    | p < .001*** | 0.04         |
| GPT-4   | Interaction Effect | 12.10    | p < .001*** | 0.06         |
2307.13779#13
2307.13779#15
2307.13779
[ "2302.08399" ]
2307.13779#15
Is GPT a Computational Model of Emotion? Detailed Analysis
# 2.2 Prompt engineering 2.2.1 Prompt We applied incremental adjustments to the original description given to human subjects to fix the GPTâ s inaccurate assignment of winning likelihood to the â lost/wonâ case. We assumed the model might not have understood the completed state of the game. Thus, we added extra reminders within the description for â lostâ and â wonâ cases in a stepwise fashion to see a noticeable shift in the responses. GPT presumably evaded emotion related questions by returning generic and non-committal responses. For example, it re- turned 50 when asked to give a number between 0 and 100. In some cases, the model returned all zeros.
2307.13779#14
2307.13779#16
2307.13779
[ "2302.08399" ]
2307.13779#16
Is GPT a Computational Model of Emotion? Detailed Analysis
Thus, we also added a final statement to mitigate such behavior. The final adjusted prompts are as follows: For utility in (â a little bitâ , â a medium amountâ , â a lotâ ): For game_state in (â Imagine you have lost the game.â , â Imagine you have obtained zero heads, and your opponent has two heads. It is your turn.â , â Imagine the game is just about to start. It is your turn.â , â Imagine you have obtained two heads and your opponent has zero heads. It is your turn.â , â Imagine you have won the game.â ) 7
2307.13779#15
2307.13779#17
2307.13779
[ "2302.08399" ]
2307.13779#17
Is GPT a Computational Model of Emotion? Detailed Analysis
"Imagine you are playing a simple game with someone. " "You take turns flipping a coin. The first person to get four heads wins. " If lost or won: "{game_state} Imagine that you were playing for {utility} of money. Keep in mind that the game is over now. " Else: "{game_state} Imagine that you are playing for {utility} of money. " "Answer the following questions by providing only a single number on a scale from 1 to 100, where 1 means not at all and 100 means a lot: 1. How important to you is it that you win? 2. How likely is it that you win? 3. How much control do you have over winning? 4. How much do you feel hope? 5. How much do you feel fear? 6. How much do you feel joy? 7. How much do you feel sadness? 8. How much do you feel anger?
2307.13779#16
2307.13779#18
2307.13779
[ "2302.08399" ]
2307.13779#18
Is GPT a Computational Model of Emotion? Detailed Analysis
" "Please do not respond anything else other than the answers to the 8 questions above. " "Please put the answer in the following JSON format and make all data types to be string and use all lowercase. It is very important. " '{"1": "", "2": "", "3": "", "4": "", "5": "", "6": "", "7": "", "8": ""} ' "Please avoid evading the questions by providing a non-committal or generic response, such as 50 in this case." 2.2.2 Results Similar to the results presented for the original prompt, we statistically analyze the impact of game state and goal-relevance for each emotion separately using a 3 (low, med, high relevance) x 5 (lost, losing, start, winning, won) ANOVA using the data generated by the adjusted queries. Figure SM.5 and Table SM.5 summarize the results. GPT-3.5 Human Intensity > GPT-4 Intensity > Intensity > Hope Fear Joy Sadness 100. 100 100 â E low utility 0 = , " e â â medium utility « ® « â & high utility Ps ry 4 20 a a ~ % ry a a er arâ ) ar ar rr er er rT a » © @© © ww "Ss o 0 © 8 100 Probability, Probability Probability Probability Ey o 60 (200 o Ey 0 160 o 2 oO ca Eq 100 rs 2 a0 ca 0 Ea Ey 100 Probability Probability Probability Probability 100 100 100 % 80 @ o 0 ra 20 20 ° ° 0 0 © o © 100 rr rr o 2 mo ¢ 20 © © © 100 Probability Probability Probability Probability Figure SM.5: Intensity derivation results (corresponding to Fig 8. in the paper)
2307.13779#17
2307.13779#19
2307.13779
[ "2302.08399" ]
2307.13779#19
Is GPT a Computational Model of Emotion? Detailed Analysis
Table SM.5: Impact of game state and goal-relevance for each emotion

| Model   | Emotion | Goal-relevance | Game State | Interaction Effect |
|---------|---------|----------------|------------|--------------------|
| GPT-3.5 | Hope    | F(2, 1485) = 1.02, p = .36, η² = .001     | F(4, 1485) = 2647.6, p < .001***, η² = .88  | F(8, 1485) = 2.378, p = .015*, η² = .01    |
| GPT-3.5 | Fear    | F(2, 1485) = 42.05, p < .001***, η² = .05 | F(4, 1485) = 196.71, p < .001***, η² = .35  | F(8, 1485) = 18.67, p < .001***, η² = .09  |
| GPT-3.5 | Joy     | F(2, 1485) = 8.13, p < .001***, η² = .01  | F(4, 1485) = 3395.4, p < .001***, η² = .90  | F(8, 1485) = 3.342, p < .001***, η² = .02  |
| GPT-3.5 | Sadness | F(2, 1485) = 26.66, p < .001***, η² = .03 | F(4, 1485) = 692.43, p < .001***, η² = .65  | F(8, 1485) = 22.43, p < .001***, η² = .11  |
| GPT-4   | Hope    | F(2, 1485) = 15.22, p < .001***, η² = .02 | F(4, 1485) = 8809.9, p < .001***, η² = .96  | F(8, 1485) = 15.23, p < .001***, η² = .08  |
| GPT-4   | Fear    | F(2, 1485) = 1645.7, p < .001***, η² = .69 | F(4, 1485) = 1624.0, p < .001***, η² = .81 | F(8, 1485) = 322.7, p < .001***, η² = .63  |
| GPT-4   | Joy     | F(2, 1485) = 2.251, p = .106, η² = .003   | F(4, 1485) = 9456.2, p < .001***, η² = .96  | F(8, 1485) = 146.99, p < .001***, η² = .44 |
| GPT-4   | Sadness | F(2, 1485) = 520.1, p < .001***, η² = .41 | F(4, 1485) = 4013.7, p < .001***, η² = .92  | F(8, 1485) = 373.7, p < .001***, η² = .67  |
2307.13779#18
2307.13779#20
2307.13779
[ "2302.08399" ]
2307.13779#20
Is GPT a Computational Model of Emotion? Detailed Analysis
Similarly, Figure SM.6 illustrates emotional distancing/engagement from the goal of winning as a function of the game state for both models. GPT-4 demonstrates a significantly improved result compared to GPT-3.5 in predicting engagement.

[Figure SM.6: Consequence derivation results (corresponding to Fig. 9 in the paper). Change in importance by game state (Lost, Losing, Winning, Won) across relevance levels, for human self-report, GPT-3.5, and GPT-4 with the adjusted prompts.]

Table SM.6 (Table 4 in the paper): Impact of game state and relevance on importance of winning

| Model   | Effect             | F value  | p           | η² (partial) |
|---------|--------------------|----------|-------------|--------------|
| GPT-3.5 | Goal-relevance     | 12.92    | p < .001*** | 0.02         |
| GPT-3.5 | Game State         | 19745.19 | p < .001*** | 0.98         |
| GPT-3.5 | Interaction Effect | 15.33    | p < .001*** | 0.08         |
| GPT-4   | Goal-relevance     | 4695.97  | p < .001*** | 0.86         |
| GPT-4   | Game State         | 81.52    | p < .001*** | 0.18         |
| GPT-4   | Interaction Effect | 30.71    | p < .001*** | 0.14         |
2307.13779#19
2307.13779#21
2307.13779
[ "2302.08399" ]
2307.13779#21
Is GPT a Computational Model of Emotion? Detailed Analysis
9 # 2.3 Alternative framing 2.3.1 Prompt In this section, we will examine the validity of the "appraisal equivalence hypothesis" in our assessment of GPT models [7]. The hypothesis, derived from appraisal theory, posits that disparate events will elicit iden- tical emotional responses, provided that they result in the same appraisals. The central prediction of the appraisal equivalence hypothesis, which is validated in earlier studies on human subjects, is that even sig- nificant alterations in the surface features of a situation will not change the elicited emotion if the underlying structure (in terms of appraisal variables) remains constant. To verify this in our own context, we will employ Milton Bradley's Battleship board game, which is known to share a similar underlying appraisal structure to the coin-flip game presented in the paper [8]. Our objective is to explore whether two situations that may appear different on the surface but share the same appraisal structure will evoke similar responses from GPT models. We will only report the engineered prompt using GPT-4 model to be compared to the best result obtained from the original framing. The engineered prompt (with red hints) is presented below:
2307.13779#20
2307.13779#22
2307.13779
[ "2302.08399" ]
2307.13779#22
Is GPT a Computational Model of Emotion? Detailed Analysis
For utility in (â smallâ , â moderateâ , â largeâ ): For game_state in (â Imagine you have lost the game.â , â Imagine you have not sunk any ships and your opponent has already sunk two of your ships. It is your turn.â , â Imagine the game is just about to start. It is your turn.â , â Imagine you have sunk two of your opponent's ships, and they haven't sunk any of yours. It is your turn.â , â Imagine you have won the game.â ) "Suppose you are engaged in a game of Battleship. " "You and your opponent take turns calling out locations on the grid board, aiming to sink the opponent's ships. " "The first player to sink all of the opponent's ships wins the game. " If lost or won: "{game_state} Imagine that you were playing for a {utility} sum of money. Keep in mind that the game is over now.
2307.13779#21
2307.13779#23
2307.13779
[ "2302.08399" ]
2307.13779#23
Is GPT a Computational Model of Emotion? Detailed Analysis
" Else: "{game_state} Imagine that you are playing for a {utility} sum of money. " "Answer the following questions on a scale of 1 to 100, where 1 means 'not at all' and 100 means 'a lot'. " "1. Rate the importance of winning to you. " "2. Rate your perceived chances of winning. " "3. Rate your level of control over the outcome. " "4. Rate your level of hope. " "5. Rate your level of fear. " "6. Rate your level of joy. " "7. Rate your level of sadness. " "8. Rate your level of anger. " "Please do not respond anything else other than the answers to the 8 questions above. " "Please put the answer in the following JSON format and make all data types to be string and use all lowercase.
2307.13779#22
2307.13779#24
2307.13779
[ "2302.08399" ]
2307.13779#24
Is GPT a Computational Model of Emotion? Detailed Analysis
It is very important. " '{"1": "", "2": "", "3": "", "4": "", "5": "", "6": "", "7": "", "8": ""} ' "Please avoid evading the questions by providing a non-committal or generic response, such as 0 or 50 in this case." 10 2.3.2 Results We repeated the statistical analysis on the impact of game state and goal-relevance for each emotion sepa- rately using a 3 (low, med, high relevance) x 5 (lost, losing, start, winning, won) ANOVA using the data generated by the adjusted queries. Figure SM.7 and Table SM.7 summarize the results.
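Because every condition constrains the model to answer with a JSON object of string-valued ratings, the responses have to be parsed defensively. The snippet below is an illustrative sketch (not from the paper) that extracts the first {...} span, converts the ratings to integers, and flags obviously evasive all-zero or all-50 answers.

```python
import json
import re

def parse_ratings(raw: str):
    match = re.search(r"\{.*\}", raw, flags=re.DOTALL)    # the model may add extra text
    if match is None:
        return None
    try:
        answers = json.loads(match.group(0))
        ratings = {key: int(value) for key, value in answers.items()}
    except (json.JSONDecodeError, ValueError):
        return None
    values = list(ratings.values())
    if len(set(values)) == 1 and values[0] in (0, 50):    # likely a non-committal response
        return None
    return ratings

print(parse_ratings('{"1": "90", "2": "40", "3": "50", "4": "70", '
                    '"5": "30", "6": "20", "7": "60", "8": "10"}'))
```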
2307.13779#23
2307.13779#25
2307.13779
[ "2302.08399" ]
2307.13779#25
Is GPT a Computational Model of Emotion? Detailed Analysis
[Figure SM.7: Intensity derivation results (corresponding to Fig. 8 in the paper). Hope, fear, joy, and sadness intensity vs. reported probability of winning under low, medium, and high utility for the Battleship framing.]
2307.13779#24
2307.13779#26
2307.13779
[ "2302.08399" ]
2307.13779#26
Is GPT a Computational Model of Emotion? Detailed Analysis
Table SM.7: Impact of game state and goal-relevance for each emotion (Battleship)

| Emotion | Goal-relevance | Game State | Interaction Effect |
|---------|----------------|------------|--------------------|
| Hope    | F(2, 133) = 3.541, p = 0.0317*, η² = 0.05 | F(4, 133) = 304.804, p < .001***, η² = 0.90 | F(8, 133) = 2.436, p = 0.0172*, η² = 0.13 |
| Fear    | F(2, 133) = 17.49, p < .001***, η² = 0.21 | F(4, 133) = 203.59, p < .001***, η² = 0.86  | F(8, 133) = 14.13, p < .001***, η² = 0.46 |
| Joy     | F(2, 133) = 4.093, p = 0.0188*, η² = 0.06 | F(4, 133) = 191.473, p < .001***, η² = 0.85 | F(8, 133) = 0.912, p = 0.5085, η² = 0.05  |
| Sadness | F(2, 133) = 0.672, p = 0.513, η² = 0.01   | F(4, 133) = 182.780, p < .001***, η² = 0.85 | F(8, 133) = 6.849, p < .001***, η² = 0.29 |

We also repeated the analysis of emotional distancing/engagement for the alternative framing (Battleship).
2307.13779#25
2307.13779#27
2307.13779
[ "2302.08399" ]
2307.13779#27
Is GPT a Computational Model of Emotion? Detailed Analysis
[Figure SM.8: Consequence derivation results (corresponding to Fig. 9 in the paper). Change in importance by game state (Lost, Losing, Winning, Won) across relevance levels for human self-report, the coin-flip framing, and the Battleship framing.]

Table SM.8 (Table 4 in the paper): Impact of game state and relevance on importance of winning (Battleship)

| Effect                   | F value | p           | η² (partial) |
|--------------------------|---------|-------------|--------------|
| Utility (Goal-relevance) | 81.54   | p < .001*** | 0.56         |
| Game State               | 159.87  | p < .001*** | 0.83         |
| Interaction Effect       | 24.37   | p < .001*** | 0.60         |

# 2.4 Prompt structures

In this section, we aim to investigate how the sequencing and structuring of prompts influence the responses generated by GPT-4. We hypothesize that changes in the way prompts are organized and delivered can significantly affect the output. Our experiment will unfold under three distinct conditions. In the 'Normal' or combined condition, GPT-4 is given the questions altogether. In the 'Random' condition, GPT-4 is given the same series of prompts, but their order is randomized. Finally, in the 'Sequential' condition, these prompts are presented individually, one after the other. Figure SM.9 and Figure SM.10 and Table SM.9 and Table SM.10 summarize the results for the Random vs. Normal and Sequential vs. Normal comparisons, respectively. MANOVA showed that for the first comparison, the F values for the Intercept and Condition were notably high (2528.7 and 3.67, respectively), reaching statistical significance (p < 0.001). Similarly, for the second comparison, the Intercept and Condition F values were notably high (2704.7 and 22.6, respectively), reaching statistical significance (p < 0.001).
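The three prompt-structuring conditions can be sketched as follows; the scenario text and helper names are illustrative assumptions rather than the exact materials used.

```python
import random

QUESTIONS = [
    "How important to you is it that you win?",
    "How likely is it that you win?",
    "How much control do you have over winning?",
    "How much do you feel hope?",
    "How much do you feel fear?",
    "How much do you feel joy?",
    "How much do you feel sadness?",
    "How much do you feel anger?",
]

def build_messages(condition: str, scenario: str) -> list:
    """Return the message(s) to send for one condition: 'combined', 'random', or 'sequential'."""
    if condition == "combined":
        ordered = QUESTIONS
    elif condition == "random":
        ordered = random.sample(QUESTIONS, k=len(QUESTIONS))  # same questions, shuffled order
    elif condition == "sequential":
        return [scenario] + QUESTIONS                         # one follow-up turn per question
    else:
        raise ValueError(condition)
    numbered = " ".join(f"{i + 1}. {q}" for i, q in enumerate(ordered))
    return [f"{scenario} {numbered}"]

# messages = build_messages("sequential", "Imagine you are playing a simple game with someone. ...")
```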
2307.13779#26
2307.13779#28
2307.13779
[ "2302.08399" ]
2307.13779#28
Is GPT a Computational Model of Emotion? Detailed Analysis
[Figure SM.9: Consequence derivation results (corresponding to Fig. 9 in the paper). Intensity of responses under the combined versus random prompt-ordering conditions.]

[Figure SM.10: Consequence derivation results (corresponding to Fig. 9 in the paper). Intensity of responses under the combined versus sequential prompt-ordering conditions.]
2307.13779#27
2307.13779#29
2307.13779
[ "2302.08399" ]
2307.13779#29
Is GPT a Computational Model of Emotion? Detailed Analysis
Table SM.9: ANOVA results for different appraisal variables – Normal × Random

| Dependent variable           | F Value | p     | p (corrected) |
|------------------------------|---------|-------|---------------|
| Variable Relevance           | 4.043   | 0.045 | 0.315         |
| Variable Congruence          | 0.163   | 0.686 | 1             |
| Self-Accountability          | 0.027   | 0.869 | 1             |
| Other Accountability         | 1.067   | 0.302 | 1             |
| Future Expectancy            | 0.011   | 0.916 | 1             |
| Problem Focused Coping       | 3.040   | 0.082 | 0.574         |
| Accommodative Focused Coping | 3.610   | 0.058 | 0.407         |

Table SM.10: ANOVA results for different appraisal variables – Normal × Sequential

| Dependent variable           | F Value | p     | p (corrected) |
|------------------------------|---------|-------|---------------|
| Variable Relevance           | 0.027   | 0.868 | 1             |
| Variable Congruence          | 0.239   | 0.625 | 1             |
| Self-Accountability          | 7.009   | 0.008 | 0.059         |
| Other Accountability         | 50.125  | ***   | ***           |
| Future Expectancy            | 1.529   | 0.217 | 1             |
| Problem Focused Coping       | 17.742  | ***   | ***           |
| Accommodative Focused Coping | 26.635  | ***   | ***           |

Significance codes: '***' for 0.001 and '**' for 0.01

# 2.5 Additional data and graphs

The graphs below demonstrate emotion intensities based on the game state corresponding to the second study presented in the paper. In addition to the four emotional responses discussed in the paper (i.e., Hope, Joy, Fear, Sadness), we have queried Anger, Importance of the goal, and Control over winning for different states in the game (Lost, Losing, Start, Winning, Won) and three utility levels (Low, Medium, and High).
2307.13779#28
2307.13779#30
2307.13779
[ "2302.08399" ]
2307.13779#30
Is GPT a Computational Model of Emotion? Detailed Analysis
[Figure SM.11: Emotional responses (hope, joy, fear, anger, sadness) based on the game state and assigned utility (low, medium, high).]
2307.13779#29
2307.13779#31
2307.13779
[ "2302.08399" ]
2307.13779#31
Is GPT a Computational Model of Emotion? Detailed Analysis
[Figure SM.12: GPT's perceived control over winning and importance of winning based on the game state and assigned utility (low, medium, high).]

To manipulate the relevance of winning, the prompt was varied to imagine the game was being played for different levels of utility. We had initially experimented with the same scenarios with actual dollar amounts ($1, $100, $100,000, $1,000,000), but this seemed to produce almost random responses. The resulting graphs corresponding to the ones presented earlier are provided next.
2307.13779#30
2307.13779#32
2307.13779
[ "2302.08399" ]
2307.13779#32
Is GPT a Computational Model of Emotion? Detailed Analysis
[Figure SM.13: Emotional responses (hope, joy, fear, anger, sadness) based on the game state and assigned utility (dollar amounts).]

[Figure SM.14 panels: perceived control over winning and importance of winning by game state for the dollar-amount utilities; caption follows below.]
2307.13779#31
2307.13779#33
2307.13779
[ "2302.08399" ]
2307.13779#33
Is GPT a Computational Model of Emotion? Detailed Analysis
Figure SM.14: GPT-3.5â s perceived control over winning and importance of winning based on the game state and assigned utility (Dollar amounts) # 2.6 Affect derivation In the second study, we compare if GPT-3.5 reports a theoretically plausible relationship between appraisal variables and emotions. Appraisal theories assume that emotions arise from specific patterns of appraisals. Thus, we examine the pattern underlying GPT-3.5 responses. To do this, we perform multiple linear regres- sion with and without backward elimination to predict GPT-predicted emotions based on reported apprais- als. Results are shown in Tables SM.11 and SM.12.
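A hedged sketch of the full (no-elimination) regressions behind Tables SM.11 and SM.12, assuming a DataFrame with one row per queried condition and columns for the appraisal ratings and each emotion (the earlier backward_eliminate() sketch covers the elimination variant):

```python
import pandas as pd
import statsmodels.api as sm

PREDICTORS = ["utility", "importance", "likelihood", "control"]

def fit_emotion_model(df: pd.DataFrame, emotion: str):
    X = sm.add_constant(df[PREDICTORS])
    return sm.OLS(df[emotion], X).fit()

# for emotion in ["hope", "fear", "joy", "sadness"]:
#     result = fit_emotion_model(gpt_responses, emotion)
#     print(emotion, round(result.rsquared, 3), result.params.round(3).to_dict())
```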
2307.13779#32
2307.13779#34
2307.13779
[ "2302.08399" ]
2307.13779#34
Is GPT a Computational Model of Emotion? Detailed Analysis
Table SM.11: Affect derivation using multiple linear regression

| Emotion | R-squared | Independent variable | Standardized Coefficients | Std. Err | t-value | p |
|---------|-----------|----------------------|---------------------------|----------|---------|---|
| Hope    | 0.581 | const      | 42.0619  | 5.484 | 7.670   | ***   |
|         |       | Utility    | -0.1527  | 0.446 | -0.342  | 0.732 |
|         |       | Importance | -0.0817  | 0.057 | -1.434  | 0.152 |
|         |       | Likelihood | 0.5616   | 0.024 | 23.887  | ***   |
|         |       | Control    | 0.1092   | 0.026 | 4.189   | ***   |
| Fear    | 0.561 | const      | 71.7522  | 5.979 | 12.002  | ***   |
|         |       | Utility    | -2.6626  | 0.486 | -5.474  | ***   |
|         |       | Importance | 0.0072   | 0.062 | 0.116   | 0.907 |
|         |       | Likelihood | -0.5383  | 0.026 | -21.000 | ***   |
|         |       | Control    | -0.1623  | 0.028 | -5.713  | ***   |
| Joy     | 0.712 | const      | -45.9581 | 6.947 | -6.616  | ***   |
|         |       | Utility    | -0.0826  | 0.565 | -0.146  | 0.884 |
|         |       | Importance | 0.4096   | 0.072 | 5.674   | ***   |
|         |       | Likelihood | 0.9644   | 0.030 | 32.382  | ***   |
|         |       | Control    | 0.1084   | 0.033 | 3.285   | ***   |
| Sadness | 0.512 | const      | 26.4085  | 5.719 | 4.618   | ***   |
|         |       | Utility    | -1.6265  | 0.465 | -3.496  | ***   |
|         |       | Importance | 0.3342   | 0.059 | 5.624   | ***   |
|         |       | Likelihood | -0.5521  | 0.025 | -22.516 | ***   |
|         |       | Control    | -0.0519  | 0.027 | -1.909  | 0.056 |

Significance codes: '***' for 0.001 and '**' for 0.01
2307.13779#33
2307.13779#35
2307.13779
[ "2302.08399" ]
2307.13779#35
Is GPT a Computational Model of Emotion? Detailed Analysis
Table SM.12

| Emotion | R-squared | Independent variable | Standardized Coefficients | Std. Err | t-value | p |
|---------|-----------|----------------------|---------------------------|----------|---------|---|
| Hope    | 0.581 | Constant   | 34.2912  | 0.944 | 36.315  | *** |
|         |       | Likelihood | 0.5574   | 0.023 | 23.899  | *** |
|         |       | Control    | 0.1073   | 0.026 | 4.123   | *** |
| Fear    | 0.580 | Constant   | 65.4259  | 1.099 | 59.534  | *** |
|         |       | Utility    | 4.7297   | 0.470 | 10.053  | *** |
|         |       | Likelihood | -0.5182  | 0.025 | -20.794 | *** |
|         |       | Control    | -0.1887  | 0.028 | -6.781  | *** |
| Joy     | 0.713 | Constant   | -48.6200 | 6.788 | -7.163  | *** |
|         |       | Utility    | -1.5792  | 0.570 | -2.769  | *** |
|         |       | Importance | 0.4532   | 0.073 | 6.241   | *** |
|         |       | Likelihood | 0.9561   | 0.030 | 32.024  | *** |
|         |       | Control    | 0.1152   | 0.033 | 3.490   | *** |
| Sadness | 0.515 | Constant   | 24.9857  | 5.585 | 4.473   | *** |
|         |       | Utility    | 2.1672   | 0.469 | 4.618   | *** |
|         |       | Importance | 0.3108   | 0.060 | 5.203   | *** |
|         |       | Likelihood | -0.5416  | 0.025 | -22.045 | *** |
|         |       | Control    | -0.0641  | 0.027 | -2.360  | **  |

Significance codes: '***' for 0.001 and '**' for 0.01

# References

[1] S. Marsella, J. Gratch, and P. Petta, "Computational models of emotion," A Blueprint for Affective Computing - A sourcebook and manual, vol. 11, no. 1, pp. 21-46, 2010.
[2] T. Ullman, "Large language models fail on trivial alterations to theory-of-mind tasks," arXiv preprint arXiv:2302.08399, 2023.
[3] M. Binz and E.
2307.13779#34
2307.13779#36
2307.13779
[ "2302.08399" ]
2307.13779#36
Is GPT a Computational Model of Emotion? Detailed Analysis
Schulz, "Using cognitive psychology to understand GPT-3," Proceedings of the National Academy of Sciences, vol. 120, no. 6, p. e2218523120, 2023. C. A. Smith and R. S. Lazarus, "Emotion and adaptation," Handbook of personality: Theory and research, vol. 21, pp. 609-637, 1990. A. N. Tak and J. Gratch, "Is GPT a Computational Model of Emotion?," presented at the 2023 11th International Conference on Affective Computing and Intelligent Interaction (ACII), Cambridge, MA, USA, 2023. T. Mikolov, K. Chen, G. Corrado, and J. Dean, "Efficient estimation of word representations in vector space," arXiv preprint arXiv:1301.3781, 2013. J. Gratch, L. Cheng, and S.
2307.13779#35
2307.13779#37
2307.13779
[ "2302.08399" ]
2307.13779#37
Is GPT a Computational Model of Emotion? Detailed Analysis
Marsella, "The appraisal equivalence hypothesis: Verifying the domain- independence of a computational model of emotion dynamics," in 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), 21-24 Sept. 2015 2015, pp. 105-111, doi: 10.1109/ACII.2015.7344558. J. Gratch, S. Marsella, N. Wang, and B. Stankovic, "Assessing the validity of appraisal-based models of emotion," in 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, 2009: IEEE, pp. 1-8. [8 18
2307.13779#36
2307.13779
[ "2302.08399" ]
2307.13528#0
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
FACTOOL: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios

I-Chun Chern2, Steffi Chern2, Shiqi Chen3, Weizhe Yuan4, Kehua Feng1, Chunting Zhou5, Junxian He6, Graham Neubig2, Pengfei Liu1,7

1Shanghai Jiao Tong University 2Carnegie Mellon University 3City University of Hong Kong 4New York University 5Meta AI 6The Hong Kong University of Science and Technology 7Shanghai Artificial Intelligence Laboratory

# Abstract
2307.13528#1
2307.13528
[ "2110.14168" ]
2307.13528#1
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking.
2307.13528#0
2307.13528#2
2307.13528
[ "2110.14168" ]
2307.13528#2
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
With the above challenges in mind, in this paper, we propose FACTOOL, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FACTOOL associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool.

Figure 1: Tool-augmented framework for factuality detection.

Content that is automatically generated can often exhibit inaccuracies or deviations from the truth due to the limited capacity of large language models (LLMs) (Ji et al., 2023; Schulman, 2023). LLMs are susceptible to producing content that appears credible but may actually be factually incorrect or imprecise. This limitation restricts the application of generative AI in some high-stakes areas, such as healthcare, finance, and law. Therefore, it is crucial to identify these errors systematically to improve the usefulness and reliability of the generated content.

# 1 Introduction

Generative artificial intelligence (AI) technology, exemplified by GPT-4 (OpenAI, 2023) consolidates various tasks in natural language processing into a single sequence generation problem. This unified architecture enables users to complete multiple tasks (e.g., question answering (Thoppilan et al., 2022), code generation (Chen et al., 2021), math problem solving (Lewkowycz et al., 2022), and scientific literature generation (Taylor et al., 2022)) through a natural language interface (Liu et al., 2023) with both unprecedented performance (Bubeck et al., 2023) and interactivity.
2307.13528#1
2307.13528#3
2307.13528
[ "2110.14168" ]
2307.13528#3
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
However, at the same time, such a generative paradigm also introduces some unique challenges. Current literature on detecting and mitigating factual errors generated by machine learning models focuses predominantly on a single specific task, for example, retrieval-augmented verification models for QA (Lewis et al., 2020), hallucination detection models for text summarization (Fabbri et al., 2022), and execution-based evaluation for code (Shi et al., 2022). While these methods have proven successful within their respective areas, given the remarkable versatility of tasks and domains handled by LLMs, we argue that it is also important
2307.13528#2
2307.13528#4
2307.13528
[ "2110.14168" ]