Dataset columns (name: type, length or value range):
doi: string (10–10)
chunk-id: int64 (0–936)
chunk: string (401–2.02k)
id: string (12–14)
title: string (8–162)
summary: string (228–1.92k)
source: string (31–31)
authors: string (7–6.97k)
categories: string (5–107)
comment: string (4–398)
journal_ref: string (8–194)
primary_category: string (5–17)
published: string (8–8)
updated: string (8–8)
references: list
2307.01142
3
2 INTRODUCTION Previous research has demonstrated ways that intelligence can be integrated into UIs [2, 4, 13, 15, 19]. In ‘Wizard of Oz’ systems, an expert manually controls UI features to simulate an intelligent user interface [4, 19]. Similarly, crowdsourcing systems, such as Soylent [2], integrate crowdworkers to power UIs through crowd workflows [13, 15]. Finally, specialized machine learning models have also been trained for a specific task and then embedded into systems and interfaces [23]. Across these systems, rules, heuristics, workflows, and specialized models guide the ways that interface affordances can be enhanced with intelligence. Recent advances in natural language processing have resulted in large language models (LLMs), such as GPT-3 [3], which have the ability to understand natural language prompts and generate relevant text responses. These models are already being used to facilitate creative work [5, 6, 9, 20, 27]. However, it is not yet clear
2307.01142#3
Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances
To help users do complex work, researchers have developed techniques to integrate AI and human intelligence into user interfaces (UIs). With the recent introduction of large language models (LLMs), which can generate text in response to a natural language prompt, there are new opportunities to consider how to integrate LLMs into UIs. We present Prompt Middleware, a framework for generating prompts for LLMs based on UI affordances. These include prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). We demonstrate this framework with FeedbackBuffet, a writing assistant that automatically generates feedback based on a user's text input. Inspired by prior research showing how templates can help non-experts perform more like experts, FeedbackBuffet leverages template-based prompt middleware to enable feedback seekers to specify the types of feedback they want to receive as options in a UI. These options are composed using a template to form a feedback request prompt to GPT-3. We conclude with a discussion about how Prompt Middleware can help developers integrate LLMs into UIs.
http://arxiv.org/pdf/2307.01142
Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, Dan Mogil
cs.HC
null
null
cs.HC
20230703
20230703
[]
2307.01135
4
unanswered regarding the performance and user experience of ChatGPT-like systems compared to traditional search engines. It is crucial to examine how ChatGPT’s conversational nature affects search result accuracy and relevance, explore the trade-offs between its intuitive responses and traditional search engines’ faster results, and investigate how the transition from list-based results to conversational responses influences user satisfaction and search efficiency. Addressing these questions will aid in evaluating the advantages and disadvantages of adopting chat-based search systems like ChatGPT and guide the development of more effective search tools by revealing how user behaviors may evolve with the integration of AI-powered conversational systems. In this study, we conduct a randomized online experiment with the objective of providing a comprehensive comparison between users’ search behaviors and performance when utilizing large language model-powered chatbots and search engine tools, exemplified by ChatGPT and Google Search. We aim to answer the following research questions: 1) How do user behaviors differ when utilizing ChatGPT compared to Google Search for information-seeking tasks? Are these variations consistent across different types of search tasks? 2) Does ChatGPT level user search performance across different education levels? 3) How do users perceive the information quality, trust, and user experience of ChatGPT compared to traditional search engines like Google Search? Two search tools are developed for the experiment to replicate the functions and interfaces
2307.01135#4
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01142
4
how to best integrate LLMs into existing UIs. In this paper, we explore three methods for integrating LLMs into UIs using prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). These three techniques for integrating LLMs into UIs, which we call Prompt Middleware, provide varying amounts of control and guidance to users over the underlying prompt generation process. We demonstrate the concept of Prompt Middleware by developing FeedbackBuffet, a writing assistant that generates feedback for users by guiding them through a menu of feedback options allowing them to determine the type of feedback they would like to receive. FeedbackBuffet implements the ‘template prompt’ middleware to package these feedback options into a prompt for GPT-3. # 3 PROMPT MIDDLEWARE: CONNECTING UI AFFORDANCES TO LLMS Crafting high-quality prompts is challenging [11, 22]. To help people create high-quality prompts, PromptMaker guides users to create their own prompts with templates and procedural guidance [11]. Another approach called AI Chaining simplifies a complex prompting process by splitting a request into smaller requests which are individually prompted and then stitched back together [24]. This approach was shown to improve performance and transparency. (A minimal sketch of this chaining idea follows this record.)
2307.01142#4
Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances
To help users do complex work, researchers have developed techniques to integrate AI and human intelligence into user interfaces (UIs). With the recent introduction of large language models (LLMs), which can generate text in response to a natural language prompt, there are new opportunities to consider how to integrate LLMs into UIs. We present Prompt Middleware, a framework for generating prompts for LLMs based on UI affordances. These include prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). We demonstrate this framework with FeedbackBuffet, a writing assistant that automatically generates feedback based on a user's text input. Inspired by prior research showing how templates can help non-experts perform more like experts, FeedbackBuffet leverages template-based prompt middleware to enable feedback seekers to specify the types of feedback they want to receive as options in a UI. These options are composed using a template to form a feedback request prompt to GPT-3. We conclude with a discussion about how Prompt Middleware can help developers integrate LLMs into UIs.
http://arxiv.org/pdf/2307.01142
Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, Dan Mogil
cs.HC
null
null
cs.HC
20230703
20230703
[]
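The AI Chaining idea mentioned in the chunk above (splitting a complex request into smaller requests that are prompted individually and then stitched back together) can be illustrated with a minimal sketch. This is not code from the cited paper; the `callLLM` helper and the sub-prompt wording are hypothetical stand-ins for any LLM completion call.

```typescript
// Minimal sketch of "AI Chaining": split a complex request into smaller
// sub-requests, prompt the model for each, then stitch the results together.
// `callLLM` is a hypothetical placeholder for any LLM completion call.

async function callLLM(prompt: string): Promise<string> {
  // Placeholder: a real system would call an LLM API here.
  return `response to: ${prompt}`;
}

async function chainedFeedbackRequest(essay: string): Promise<string> {
  // Step 1: decompose the task into smaller, focused sub-prompts.
  const subPrompts = [
    `Summarize the main argument of the following essay:\n${essay}`,
    `List three weaknesses in the structure of the following essay:\n${essay}`,
  ];

  // Step 2: prompt the model for each sub-request individually.
  const partialResponses = await Promise.all(subPrompts.map((p) => callLLM(p)));

  // Step 3: stitch the partial results back together into one response.
  return partialResponses.join("\n\n");
}
```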
2307.01135
5
Two search tools are developed for the experiment to replicate the functions and interfaces of ChatGPT and Google Search, respectively. We recruit research participants through the Prolific platform (https://www.prolific.co/) and randomly assign them to one of our search tools to complete three information search tasks. After removing invalid responses, our final sample includes a total of 95 participants, with 48 in the ChatGPT group and 47 in the Google Search group. The results show that participants using ChatGPT consistently spend less time on all the tasks. However, their overall task performance does not differ significantly between the two tools. Interestingly, we find that ChatGPT exhibits higher performance for tasks with straightforward questions but does not perform as well in fact-checking tasks, where we observe that ChatGPT is often incapable of correcting mistakes in the user prompt. We also find that ChatGPT levels user search performance across different education levels, while search performance on Google Search is positively correlated with education level. Participants perceive ChatGPT’s responses to have higher information quality when compared to the information obtained from Google Search, despite displaying a comparable level of trust in both tools. Moreover, users in the ChatGPT group report significantly higher user experiences in terms of usefulness, enjoyment, and satisfaction, but a similar level of perceived ease of use.
2307.01135#5
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01142
5
Where previous work has focused on making prompt engineering easier, these approaches have not yet addressed two crucial aspects: 1) techniques to scaffold domain expertise into the prompting process, and 2) directly integrating LLMs into user interfaces. We propose Prompt Middleware as a framework for achieving these two goals by mapping options in the UI to generate prompts for an LLM. Prompt Middleware acts as a middle layer between the LLM and the UI, while also embedding domain expertise into the prompting process. The UI abstracts away the complexity of the prompts and separates concerns between a user completing their tasks and the prompts that might guide LLMs to help them in those tasks. As summarized in Figure 1, the following sections introduce three Prompt Middleware types: static, template-based, and free-form. 3.1 Static prompts leverage best practices Prompt engineering and few-shot learning are common techniques to improve the quality of responses from LLMs [22]. We propose the concept of static prompts as a method for making these best practices available to users in a UI. A static prompt is a predefined prompt generated by experts through prompt engineering to achieve high-quality responses from an LLM. As shown in Figure 1, static prompts can be hidden behind a button in a UI to send a predefined prompt to an LLM on behalf of the user. This allows users to tap into best practices with minimal effort but at the cost of giving up control of prompt generation. (A brief sketch of this static-prompt pattern follows this record.)
2307.01142#5
Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances
To help users do complex work, researchers have developed techniques to integrate AI and human intelligence into user interfaces (UIs). With the recent introduction of large language models (LLMs), which can generate text in response to a natural language prompt, there are new opportunities to consider how to integrate LLMs into UIs. We present Prompt Middleware, a framework for generating prompts for LLMs based on UI affordances. These include prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). We demonstrate this framework with FeedbackBuffet, a writing assistant that automatically generates feedback based on a user's text input. Inspired by prior research showing how templates can help non-experts perform more like experts, FeedbackBuffet leverages template-based prompt middleware to enable feedback seekers to specify the types of feedback they want to receive as options in a UI. These options are composed using a template to form a feedback request prompt to GPT-3. We conclude with a discussion about how Prompt Middleware can help developers integrate LLMs into UIs.
http://arxiv.org/pdf/2307.01142
Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, Dan Mogil
cs.HC
null
null
cs.HC
20230703
20230703
[]
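As a rough illustration of the static-prompt pattern described in the chunk above (a hedged sketch, not the paper's implementation): an expert-authored prompt can be stored as a constant and dispatched verbatim when the user clicks a button. The `sendPrompt` helper and the 'Pros and Cons' wording below are assumptions for illustration.

```typescript
// Sketch of "static prompt" middleware: an expert-engineered prompt is fixed
// at design time and sent on the user's behalf when a single button is pressed.
// `sendPrompt` is a hypothetical wrapper around an LLM API call.

const PROS_AND_CONS_PROMPT =
  "List the pros and cons of the following text as two bullet lists:\n\n"; // assumed wording

async function sendPrompt(prompt: string): Promise<string> {
  // Placeholder: a real system would forward the prompt to an LLM here.
  return `LLM response for: ${prompt.slice(0, 40)}...`;
}

// Handler for a "Pros and Cons" button: the user never sees or edits the prompt,
// trading control for expert-engineered defaults.
async function onProsAndConsClick(userText: string): Promise<string> {
  return sendPrompt(PROS_AND_CONS_PROMPT + userText);
}
```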
2307.01135
6
but a similar level of perceived ease of use. As the first empirical study to systematically compare ChatGPT with traditional search engines like Google Search, this research makes several significant contributions to the academic literature on information search and human-computer interaction. By examining the differences in user behavior when utilizing search engines compared to chatbot tools, this study sheds light on how users adapt their information-seeking strategies to fit the affordances of these distinct technologies. Furthermore, our investigation into whether ChatGPT levels user search performance across different education levels enriches the literature on the digital divide by demonstrating the democratizing effects of advanced AI-based chatbot systems. Finally, by assessing users’ perceptions of information quality, trust, and user experience in ChatGPT compared to traditional search engines, the research contributes to our understanding of user attitudes and preferences in the rapidly evolving landscape of information retrieval technologies. Importantly, the insights from this research inform the future development of large language models (LLMs) and search technologies, offering valuable guidance for creating more effective and user-centric tools in this field. # 2. Literature Review Our study aims to conduct a randomized online experiment to scrutinize the differences in search performance and experience exhibited by users when employing ChatGPT and the Google search engine for information retrieval. To this end, we undertake a thorough review of recent studies concerning ChatGPT and previous research on information search as well as experimental designs involving search users.
2307.01135#6
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01142
6
3.2 Template-based prompts provide flexibility Previous researchers have shown how expertise and best practices can be directly embedded into templates to guide crowdworkers [12], non-experts [10, 17, 18, 28], and even experts [8] to do better work. For example, Motif, a video storytelling application, leverages storytelling patterns to guide users’ story creation [12]. Inspired by how templates can guide people, we explore how expert templates might similarly guide LLMs. We propose template-based prompts as a method for generating prompts by filling in a pre-made template with options from a user interface. The template and user interface can integrate expertise and best practices while giving users more control through options in the UI. 3.3 Free-form prompts provide full control Previous research shows that developing free-form prompts can be challenging [11]. However, experts can generate high-quality prompts through the process of prompt engineering. Providing users with full control of the prompting process may be desired in some cases. Free-form prompts provide full access to users as they design their prompt from scratch.
2307.01142#6
Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances
To help users do complex work, researchers have developed techniques to integrate AI and human intelligence into user interfaces (UIs). With the recent introduction of large language models (LLMs), which can generate text in response to a natural language prompt, there are new opportunities to consider how to integrate LLMs into UIs. We present Prompt Middleware, a framework for generating prompts for LLMs based on UI affordances. These include prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). We demonstrate this framework with FeedbackBuffet, a writing assistant that automatically generates feedback based on a user's text input. Inspired by prior research showing how templates can help non-experts perform more like experts, FeedbackBuffet leverages template-based prompt middleware to enable feedback seekers to specify the types of feedback they want to receive as options in a UI. These options are composed using a template to form a feedback request prompt to GPT-3. We conclude with a discussion about how Prompt Middleware can help developers integrate LLMs into UIs.
http://arxiv.org/pdf/2307.01142
Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, Dan Mogil
cs.HC
null
null
cs.HC
20230703
20230703
[]
2307.01135
7
# 2.1 ChatGPT and Its Impacts Recent advancements in LLMs, such as ChatGPT, have generated substantial interest due to their potential impact across various domains, including but not limited to research, education, finance, and healthcare. Many experts anticipate that LLMs will revolutionize these fields and lead to a paradigm shift. At the same time, concerns have emerged regarding potential issues associated with LLMs, such as hallucination, misinformation, copyright violation, institutionalizing bias, interpretability, misapplication, and over-reliance (Jo 2023; Sohail et al. 2023; Susarla et al., 2023). Regarding the future of work, Eloundou et al. (2023) investigate the potential impact of LLMs on the U.S. labor market, suggesting that LLMs such as Generative Pre-trained Transformers exhibit traits of General-Purpose Technologies, leading to significant economic, social, and policy implications. Felten et al. (2023) present a methodology to estimate the impact of AI language modeling on various occupations and industries. Specifically, research conducted by MIT scholars demonstrates that ChatGPT significantly enhances productivity in professional writing tasks (Noy and Zhang, 2023). In the realm of academic research, ChatGPT has demonstrated the capacity to transform research practices. Studies published in prestigious journals like Nature report that ChatGPT aids researchers in tasks such as analyzing and writing scientific papers, generating code, and
2307.01135#7
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01142
7
4 CASE STUDY The case study methodology is a technique for illustrating an idea through the use of examples [26]. To engage more deeply with the concept of Prompt Middleware, we developed the FeedbackBuffet prototype which implements the template-based prompting design pattern. This case study illustrates what template-based prompting might look like when implemented in a user interface. 4.1 FeedbackBuffet System FeedbackBuffet is a writing assistant that allows users to request automated feedback for any writing sample, such as an essay, email, or statement of purpose, based on UI options. As shown in Figure 2, UI options offer users relevant feedback options which are combined using a template to form a prompt for GPT-3. The template integrates best practices of feedback design and cues the feedback seeker to consider qualities of good feedback. FeedbackBuffet implements the template-based prompt middleware to integrate intelligence into the interface. 4.1.1 System Implementation. The system is implemented as a ReactJS web app. The prompts are generated through template literals (i.e., string interpolation) where each selected option from the UI is injected into the template to form a string that is sent as a prompt to OpenAI via API calls using zero-shot learning. (A code sketch of this template-based prompting follows this record.)
2307.01142#7
Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances
To help users do complex work, researchers have developed techniques to integrate AI and human intelligence into user interfaces (UIs). With the recent introduction of large language models (LLMs), which can generate text in response to a natural language prompt, there are new opportunities to consider how to integrate LLMs into UIs. We present Prompt Middleware, a framework for generating prompts for LLMs based on UI affordances. These include prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). We demonstrate this framework with FeedbackBuffet, a writing assistant that automatically generates feedback based on a user's text input. Inspired by prior research showing how templates can help non-experts perform more like experts, FeedbackBuffet leverages template-based prompt middleware to enable feedback seekers to specify the types of feedback they want to receive as options in a UI. These options are composed using a template to form a feedback request prompt to GPT-3. We conclude with a discussion about how Prompt Middleware can help developers integrate LLMs into UIs.
http://arxiv.org/pdf/2307.01142
Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, Dan Mogil
cs.HC
null
null
cs.HC
20230703
20230703
[]
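The implementation description above (ReactJS, template literals, zero-shot calls to OpenAI) can be made concrete with the following TypeScript sketch. The template wording follows the feedback-request template shown in the paper's Figure 2, but the option types, model name, and request parameters are illustrative assumptions rather than the authors' actual code.

```typescript
// Sketch of template-based prompt middleware (FeedbackBuffet-style).
// UI selections are injected into a template literal and the resulting
// prompt is sent to OpenAI's completions endpoint as a zero-shot request.
// Option values, model name, and parameters here are illustrative assumptions.

interface FeedbackOptions {
  valence: "critical" | "constructive";
  abstraction: "general" | "specific";
  type: "suggestions" | "criticism" | "questions" | "examples";
  focus: "style" | "content" | "grammar" | "structure";
  deliverable: "list" | "paragraph" | "sentence" | "rating";
  artifact: "statement of purpose" | "email" | "short essay" | "resume";
}

function buildPrompt(opts: FeedbackOptions, writingSample: string): string {
  // Template literal (string interpolation), as described in Section 4.1.1.
  return `Give me ${opts.valence} and ${opts.abstraction} ${opts.type} on the ${opts.focus} as a ${opts.deliverable} for the ${opts.artifact} below:\n\n${writingSample}`;
}

async function requestFeedback(opts: FeedbackOptions, text: string): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "text-davinci-003", // assumed GPT-3 completion model
      prompt: buildPrompt(opts, text),
      max_tokens: 256,
    }),
  });
  const data: any = await response.json();
  return data.choices[0].text; // generated feedback shown back to the user
}
```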
2307.01135
8
facilitating idea generation (Dowling and Lucey 2023; Hustson 2023; Susarla et al., 2023; Van Dis et al. 2023). In finance, ChatGPT has shown promise in forecasting stock price movements and improving the performance of quantitative trading strategies (Lopez-Lira and Tang, 2023). Hansen and Kazinnik (2023) investigate the ability of GPT models to decipher Fedspeak, specifically classifying the policy stance of Federal Open Market Committee announcements as dovish or hawkish. Wu et al. (2023) introduce BloombergGPT, a large language model with 50 billion parameters trained on both general-purpose and finance-specific datasets. In the context of information search, there is a scarcity of research investigating how
2307.01135#8
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01142
8
4.1.2 Integrate Best Practices in Feedback Design. There are principles and best practices for feedback design, such as asking a clarifying question and then making a statement [16], sandwiching criticism between two positive comments [7], and making feedback actionable [14]. The feedback template used by FeedbackBuffet is based on a feedback framework that includes valence, level of abstraction, and feedback type [1], summarized in Figure 2. We present examples of the feedback generated by GPT-3 using our template in Figure 3. 4.2 Use Case: Requesting Design Feedback To illustrate how FeedbackBuffet operates, we present the following use case about Sasha, a CS student who is taking a career preparatory course to work on his statement of purpose. Sasha completes a first draft of his statement and he receives feedback from his instructor that critiques the structure—Sasha did not start with a strong motivation. He focused too much on the graduate program
2307.01142#8
Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances
To help users do complex work, researchers have developed techniques to integrate AI and human intelligence into user interfaces (UIs). With the recent introduction of large language models (LLMs), which can generate text in response to a natural language prompt, there are new opportunities to consider how to integrate LLMs into UIs. We present Prompt Middleware, a framework for generating prompts for LLMs based on UI affordances. These include prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). We demonstrate this framework with FeedbackBuffet, a writing assistant that automatically generates feedback based on a user's text input. Inspired by prior research showing how templates can help non-experts perform more like experts, FeedbackBuffet leverages template-based prompt middleware to enable feedback seekers to specify the types of feedback they want to receive as options in a UI. These options are composed using a template to form a feedback request prompt to GPT-3. We conclude with a discussion about how Prompt Middleware can help developers integrate LLMs into UIs.
http://arxiv.org/pdf/2307.01142
Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, Dan Mogil
cs.HC
null
null
cs.HC
20230703
20230703
[]
2307.01135
9
In the context of information search, there is a scarcity of research investigating how ChatGPT influences individuals’ information-seeking behaviors compared to traditional search engines. To our knowledge, two medical studies have compared responses to health-related queries generated by ChatGPT and Google Search, finding that ChatGPT-generated responses are as valuable as, or even more valuable than, the information provided by Google (Hopkins et al., 2023; Van Bulck and Moons, 2023). However, these studies are limited in scope and represent medical experts’ opinions. Our study differs from these studies in several ways. First, we focus on tasks in the general domain rather than the medical domain. Second, we conduct a randomized online experiment with many participants who perform searches by themselves and formulate their own queries. We also collect these search users’ opinions and attitudes toward both tools. Lastly, we include an objective assessment of user search performance on both ChatGPT and Google Search. # 2.2 Information Search: Past and Present Internet search technologies have been continuously developing for over 30 years, starting with the creation of the first pre-Web Internet search engines in the early 1990s (Gasser 2006). In this section, we aim to present a concise review of search technologies, offering an overview of their evolutionary progress.
2307.01135#9
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01142
9
[Figure 2 screenshot (OCR residue removed): the FeedbackBuffet interface, showing a text area labeled "Add the text you want feedback on below" containing a draft statement of purpose, a "Define your feedback" panel with option groups (Type, Artifact, Valence, Deliverable), and a GPT-3-generated list of clarifying questions such as "What first got you interested in computers and technology?" and "What made you want to pursue a Master's degree in Computer Science?"]
2307.01142#9
Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances
To help users do complex work, researchers have developed techniques to integrate AI and human intelligence into user interfaces (UIs). With the recent introduction of large language models (LLMs), which can generate text in response to a natural language prompt, there are new opportunities to consider how to integrate LLMs into UIs. We present Prompt Middleware, a framework for generating prompts for LLMs based on UI affordances. These include prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). We demonstrate this framework with FeedbackBuffet, a writing assistant that automatically generates feedback based on a user's text input. Inspired by prior research showing how templates can help non-experts perform more like experts, FeedbackBuffet leverages template-based prompt middleware to enable feedback seekers to specify the types of feedback they want to receive as options in a UI. These options are composed using a template to form a feedback request prompt to GPT-3. We conclude with a discussion about how Prompt Middleware can help developers integrate LLMs into UIs.
http://arxiv.org/pdf/2307.01142
Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, Dan Mogil
cs.HC
null
null
cs.HC
20230703
20230703
[]
2307.01135
10
The first search engine, named Archie, was created in 1990 with the purpose of downloading directory listings from FTP sites and generating a database of filenames that could be easily searched (Gasser 2006). Subsequently, with the introduction of the World Wide Web in 1991, a wave of new search engines, such as Gopher, Veronica, and Jughead, emerged to assist users in navigating the rapidly expanding network. These early search engines were developed mainly based on indexing and keyword matching methods (Croft et al. 2010). In 1998, Larry Page and Sergey Brin, two Ph.D. students at Stanford University, developed a search engine algorithm called PageRank, which ranked web pages based on the number and quality of other pages linking to them (Brin and Page 1998). The PageRank approach revolutionized search by providing more relevant and high-quality results. This innovation laid the foundation for the establishment of Google, which rapidly emerged as the dominant search engine on a global scale, handling billions of search queries each day. Alongside Google, other popular search engines include Yahoo!, Bing, Baidu, Yandex and so on. The dominant search paradigm for most search engines is keyword-based search. Under this paradigm, a
2307.01135#10
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
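For readers unfamiliar with the PageRank idea summarized in the chunk above (a page's score is, roughly, a damped sum of the scores of the pages linking to it, divided by their out-degrees), here is a textbook-style power-iteration sketch. It is an illustration only and is not drawn from either paper.

```typescript
// Textbook power-iteration sketch of PageRank: a page's score is a damped sum
// of the scores of the pages linking to it, weighted by their out-degrees.
// Illustration only; not code from either paper.

function pageRank(
  links: Record<string, string[]>, // page -> pages it links to (all pages appear as keys)
  damping = 0.85,
  iterations = 50
): Record<string, number> {
  const pages = Object.keys(links);
  const n = pages.length;
  let rank: Record<string, number> = Object.fromEntries(
    pages.map((p) => [p, 1 / n] as [string, number])
  );

  for (let i = 0; i < iterations; i++) {
    // Every page receives a baseline share, plus contributions from its in-links.
    const next: Record<string, number> = Object.fromEntries(
      pages.map((p) => [p, (1 - damping) / n] as [string, number])
    );
    for (const p of pages) {
      const out = links[p];
      if (out.length === 0) continue; // ignore dangling pages in this sketch
      for (const q of out) {
        next[q] += (damping * rank[p]) / out.length;
      }
    }
    rank = next;
  }
  return rank;
}

// Example: a tiny three-page web.
console.log(pageRank({ a: ["b", "c"], b: ["c"], c: ["a"] }));
```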
2307.01142
10
[Figure 2 screenshot (OCR residue removed): the Template Framework for Feedback Prompting. Characteristics and example options: Valence (Critical, Constructive); Type (Suggestions, Criticism, Questions, Examples); Focus (Style, Content, Grammar, Structure); Deliverable (List, Paragraph, Sentence, Rating); Artifact (Statement of Purpose, Email, Short Essay, Resume); Abstraction (General, Specific). Prompt template: "Give me {valence} and {abstraction} {type} on the {focus} as a {deliverable} for the {artifact} below:", instantiated for example as "Give me critical and general questions on the content as a list for the statement of purpose below" and sent to OpenAI GPT-3.]
2307.01142#10
Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances
To help users do complex work, researchers have developed techniques to integrate AI and human intelligence into user interfaces (UIs). With the recent introduction of large language models (LLMs), which can generate text in response to a natural language prompt, there are new opportunities to consider how to integrate LLMs into UIs. We present Prompt Middleware, a framework for generating prompts for LLMs based on UI affordances. These include prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). We demonstrate this framework with FeedbackBuffet, a writing assistant that automatically generates feedback based on a user's text input. Inspired by prior research showing how templates can help non-experts perform more like experts, FeedbackBuffet leverages template-based prompt middleware to enable feedback seekers to specify the types of feedback they want to receive as options in a UI. These options are composed using a template to form a feedback request prompt to GPT-3. We conclude with a discussion about how Prompt Middleware can help developers integrate LLMs into UIs.
http://arxiv.org/pdf/2307.01142
Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, Dan Mogil
cs.HC
null
null
cs.HC
20230703
20230703
[]
2307.01135
12
Pokorny 2004). Google has continuously strived to enhance search result quality and improve user experience through a series of launches and updates. In addition to numerous minor adjustments, the Google search engine has introduced more than 20 significant algorithmic updates (Search Engine Journal 2023). One notable update is the Panda algorithm, implemented in 2011, which introduced the concept of content quality as a ranking factor. The Panda algorithm evaluates factors such as originality, authority, and trustworthiness to assess the quality of web pages (Goodwin 2021). Machine learning algorithms play a crucial role in assigning pages a quality classification that aligns with human judgments of content quality. In 2012, Google launched the Penguin algorithm, further bolstering search quality. This update specifically targeted web spam by identifying and devaluing pages that employed black hat link building techniques to artificially boost their rankings (Schwartz 2016). By penalizing such manipulative practices, the Penguin algorithm aimed to ensure that search results prioritized high-quality and relevant content. The state-of-the-art search technologies utilize artificial intelligence and knowledge graphs.
2307.01135#12
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01142
12
Figure 2: FeedbackBuffet enables users to insert writing samples (1) and select from a set of predefined options for the type of feedback they want to receive (2). Using a template, these options are combined to form a prompt (3) which is sent to GPT-3 using OpenAI’s public API (4). GPT-3 then generates feedback, which is displayed in a text box for the user to review (5). before motivating the reader. After adding motivation based on his journey into computers, he uses FeedbackBuffet to get more feedback before his next class. He pastes his statement into the input area and selects options to request feedback about the content of this draft. These options along with his draft are packaged as a prompt and then sent to GPT-3. He receives the feedback shown in Figure 2. Based on this feedback, Sasha edits his statement to add more specific details about how he learned to code by forming an informal group with his friends. He continues to iterate on his statement of purpose, periodically referring back to FeedbackBuffet, and he is excited to show his progress to his instructor.
2307.01142#12
Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances
To help users do complex work, researchers have developed techniques to integrate AI and human intelligence into user interfaces (UIs). With the recent introduction of large language models (LLMs), which can generate text in response to a natural language prompt, there are new opportunities to consider how to integrate LLMs into UIs. We present Prompt Middleware, a framework for generating prompts for LLMs based on UI affordances. These include prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). We demonstrate this framework with FeedbackBuffet, a writing assistant that automatically generates feedback based on a user's text input. Inspired by prior research showing how templates can help non-experts perform more like experts, FeedbackBuffet leverages template-based prompt middleware to enable feedback seekers to specify the types of feedback they want to receive as options in a UI. These options are composed using a template to form a feedback request prompt to GPT-3. We conclude with a discussion about how Prompt Middleware can help developers integrate LLMs into UIs.
http://arxiv.org/pdf/2307.01142
Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, Dan Mogil
cs.HC
null
null
cs.HC
20230703
20230703
[]
2307.01135
13
The state-of-the-art search technologies utilize artificial intelligence and knowledge graphs. For instance, back in 2012, Google announced a Google Knowledge Graph covering a wide swath of subject matter and applied the knowledge graph to enable more intelligent search by providing instant answers to users’ search queries (Google 2012). Google Hummingbird, an algorithm launched in 2013, is an AI-based system that helps understand the context and meaning behind search queries. The Hummingbird algorithm has advanced beyond the basic practice of matching keywords in a query to keywords on a webpage. Instead, it has become more sophisticated in its ability to present pages that closely align with the inherent topic of the search query (Montti 2022). Additionally, the Hummingbird algorithm enables processing of longer conversational search queries. The potential of generative AI to revolutionize information search is immense due to its great performance in natural language understanding. However, there is still much to explore regarding how this cutting-edge technology impacts users’ search performance and experience. Understanding these effects is crucial for harnessing the full potential of generative AI to enhance user experience in information search. In this study, we delve into this domain and strive to provide a comprehensive comparison between traditional search engines and ChatGPT, shedding light on their respective strengths and capabilities. # 2.3 Experimental Design for Search Users
2307.01135#13
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01142
13
Researchers have identified several challenges individuals face when interacting with general purpose AI, such as a lack of awareness about the AI’s capabilities which can lead them to request overly complicated or non-existent tasks from the AI agent [25]. Researchers are still developing an understanding of the capabilities of LLMs, but in this paper we show it is possible to convey known possibilities afforded by LLMs through a UI. Through static prompts, users can use prompts that have been engineered by experts to be effective. Through template-based prompts, they can choose from a list of menu options to generate prompts that have been previously tested by experts. This ability to communicate the capabilities afforded by LLMs has the potential to make them more accessible for non-experts.
2307.01142#13
Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances
To help users do complex work, researchers have developed techniques to integrate AI and human intelligence into user interfaces (UIs). With the recent introduction of large language models (LLMs), which can generate text in response to a natural language prompt, there are new opportunities to consider how to integrate LLMs into UIs. We present Prompt Middleware, a framework for generating prompts for LLMs based on UI affordances. These include prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). We demonstrate this framework with FeedbackBuffet, a writing assistant that automatically generates feedback based on a user's text input. Inspired by prior research showing how templates can help non-experts perform more like experts, FeedbackBuffet leverages template-based prompt middleware to enable feedback seekers to specify the types of feedback they want to receive as options in a UI. These options are composed using a template to form a feedback request prompt to GPT-3. We conclude with a discussion about how Prompt Middleware can help developers integrate LLMs into UIs.
http://arxiv.org/pdf/2307.01142
Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, Dan Mogil
cs.HC
null
null
cs.HC
20230703
20230703
[]
2307.01135
14
a comprehensive comparison between traditional search engines and ChatGPT, shedding light on their respective strengths and capabilities. # 2.3 Experimental Design for Search Users To examine how users interact with search engines to address various search tasks and how an enhanced search engine design improves user performance, researchers often employ experimental methods that involve simulating realistic search scenarios. These methods allow researchers to observe and analyze users’ search behavior and performance in controlled experimental settings (e.g., Adipat et al. 2011; Liu et al., 2020; Sagar et al., 2019; Storey et al. 2008). In our study, we build upon these prior studies by designing a range of search tasks for our experiment. We manipulate the task complexity, drawing inspiration from studies such as Wildemuth and Freund (2004) and Liu et al. (2020). Additionally, we incorporate commonly used metrics to assess search performance and user experience, including time spent on search tasks, task performance, perceived information quality, satisfaction, and others (Sargar et al. 2019, Liu 2021). # 3. Experimental Design and Data
2307.01135#14
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01142
14
5 DISCUSSION In this paper, we build on existing research [2, 10, 17, 18, 21] for integrating expertise and intelligence into UIs. We introduce the Prompt Middleware Framework to guide the process of integrating LLMs into a UI. We demonstrate this vision with FeedbackBuffet, an intelligent writing assistant that automatically generates feedback based on text input. Whereas previous approaches to integrating intelligence can require substantial effort or cost to acquire intelligence sources, FeedbackBuffet offers a lightweight method for integrating intelligence and best practices into a UI. FeedbackBuffet’s UI acts as a facade around the LLM, abstracting away the complexity of interacting with LLMs. While FeedbackBuffet currently focuses on template-based prompts, we could include static prompts as well; for example, a button titled ‘Pros and Cons’ could send the prompt shown in Figure 1 to an LLM. As future work, we plan to evaluate three systems, including FeedbackBuffet, that embody these three types of prompt middleware to understand how best to integrate LLMs into existing UIs. Through this evaluation, we also hope to develop a better understanding of how much control users want when interacting with LLMs through a UI. While complete control in the form of free-form prompts might be desired in some contexts, the appropriate level of control likely depends on the task and the user. For example, a feedback system based on static prompts, which provide less control, may simplify the feedback request process.
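To make the static-prompt option concrete, here is a minimal sketch of how a 'Pros and Cons' button could be mapped to a predefined prompt and sent to GPT-3. It assumes the legacy (pre-1.0) openai Python client; the prompt wording, model name, and function names are illustrative assumptions, not details from the paper.

```python
import openai  # legacy (<1.0) OpenAI Python client assumed

openai.api_key = "YOUR_API_KEY"  # placeholder

# Hypothetical static prompt wired to a 'Pros and Cons' button in the UI.
STATIC_PROMPTS = {
    "pros_and_cons": "List the pros and cons of the argument made in the following text:\n\n{text}",
}

def on_button_click(button_id: str, user_text: str) -> str:
    """Map a UI button press to its predefined (static) prompt and query the LLM."""
    prompt = STATIC_PROMPTS[button_id].format(text=user_text)
    response = openai.Completion.create(
        model="text-davinci-003",  # assumed GPT-3 completion model
        prompt=prompt,
        max_tokens=256,
    )
    return response["choices"][0]["text"].strip()
```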
2307.01142#14
Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances
To help users do complex work, researchers have developed techniques to integrate AI and human intelligence into user interfaces (UIs). With the recent introduction of large language models (LLMs), which can generate text in response to a natural language prompt, there are new opportunities to consider how to integrate LLMs into UIs. We present Prompt Middleware, a framework for generating prompts for LLMs based on UI affordances. These include prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). We demonstrate this framework with FeedbackBuffet, a writing assistant that automatically generates feedback based on a user's text input. Inspired by prior research showing how templates can help non-experts perform more like experts, FeedbackBuffet leverages template-based prompt middleware to enable feedback seekers to specify the types of feedback they want to receive as options in a UI. These options are composed using a template to form a feedback request prompt to GPT-3. We conclude with a discussion about how Prompt Middleware can help developers integrate LLMs into UIs.
http://arxiv.org/pdf/2307.01142
Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, Dan Mogil
cs.HC
null
null
cs.HC
20230703
20230703
[]
2307.01135
15
# 3. Experimental Design and Data We employ a between-subjects design with two conditions (LLM-powered chatbot vs. traditional search engine) in the online experiment, where participants are randomly assigned to one of the two conditions. For this purpose, we develop two website tools: one simulating ChatGPT and the other mimicking Google Search. To ensure a realistic user experience, we closely replicate the interfaces of ChatGPT and Google Search. Figures 1 and 2 display screenshots of the interfaces for each tool, respectively. For the chatbot tool, we employ OpenAI’s Chat Completion API (https://platform.openai.com/docs/api-reference/chat) and the gpt-3.5-turbo model to generate responses to user prompts. Both user prompts and API responses are displayed on the same webpage for each chat session. The chatbot retains memory of the past few rounds of prompts and
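The following is a rough sketch of how the chatbot tool described above could call OpenAI's Chat Completion API with gpt-3.5-turbo while retaining a few rounds of conversational memory. It assumes the legacy (pre-1.0) openai Python client; the number of remembered rounds and the helper names are assumptions rather than the authors' implementation details.

```python
import openai  # legacy (<1.0) OpenAI Python client assumed

openai.api_key = "YOUR_API_KEY"  # placeholder

history = []       # running list of {"role": ..., "content": ...} messages
MEMORY_ROUNDS = 3  # assumed number of past prompt/response rounds retained

def chat(user_prompt: str) -> str:
    """Send the user prompt plus the last few rounds of context to gpt-3.5-turbo."""
    history.append({"role": "user", "content": user_prompt})
    context = history[-(2 * MEMORY_ROUNDS):]  # keep only recent user/assistant turns
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=context,
    )
    reply = response["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply
```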
2307.01135#15
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01142
15
6 CONCLUSION In this paper, we present FeedbackBuffet, a writing assistant that generates feedback on writing samples using GPT-3. The user can choose from a set of feedback options that are combined using a template to form a prompt for GPT-3. This system demonstrates how templates can serve as middleware to map affordances in a user interface to prompt a large language model. This work serves as an initial step toward developing a prompt middleware that can bridge the gap between users and large language models. REFERENCES [1] Ibis Alvarez, Anna Espasa, and Teresa Guasch. 2012. The value of feedback in improving collaborative writing assignments in an online learning environment. Studies in Higher Education 37, 4 (2012), 387–400. [2] Michael S Bernstein, Greg Little, Robert C Miller, Björn Hartmann, Mark S Ackerman, David R Karger, David Crowell, and Katrina Panovich. 2010. Soylent: a word processor with a crowd inside. In Proceedings of the 23rd annual ACM symposium on User interface software and technology. 313–322.
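The conclusion above describes combining user-selected feedback options through a template into a single prompt for GPT-3. A minimal sketch of that composition step is shown below; the option names and defaults are hypothetical, loosely modeled on the template form visible in the paper's interface figure.

```python
# Hypothetical option names; the template shape loosely mirrors
# "Give me <tone> and <specificity> <type> on the <aspect> as a <format> for the <genre> below".
TEMPLATE = (
    "Give me {tone} and {specificity} {feedback_type} on the {aspect} "
    "as a {output_format} for the {genre} below:\n\n{text}"
)

def build_feedback_prompt(text, tone="constructive", specificity="specific",
                          feedback_type="suggestions", aspect="content",
                          output_format="list", genre="email"):
    """Compose the UI's selected feedback options into one feedback-request prompt."""
    return TEMPLATE.format(tone=tone, specificity=specificity,
                           feedback_type=feedback_type, aspect=aspect,
                           output_format=output_format, genre=genre, text=text)

# Example: a user asks for critical, specific suggestions on an email draft
print(build_feedback_prompt("Dear Sir, I am writing to ...", tone="critical"))
```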
2307.01142#15
Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances
To help users do complex work, researchers have developed techniques to integrate AI and human intelligence into user interfaces (UIs). With the recent introduction of large language models (LLMs), which can generate text in response to a natural language prompt, there are new opportunities to consider how to integrate LLMs into UIs. We present Prompt Middleware, a framework for generating prompts for LLMs based on UI affordances. These include prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). We demonstrate this framework with FeedbackBuffet, a writing assistant that automatically generates feedback based on a user's text input. Inspired by prior research showing how templates can help non-experts perform more like experts, FeedbackBuffet leverages template-based prompt middleware to enable feedback seekers to specify the types of feedback they want to receive as options in a UI. These options are composed using a template to form a feedback request prompt to GPT-3. We conclude with a discussion about how Prompt Middleware can help developers integrate LLMs into UIs.
http://arxiv.org/pdf/2307.01142
Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, Dan Mogil
cs.HC
null
null
cs.HC
20230703
20230703
[]
2307.01135
16
responses, allowing it to conduct natural conversations. For the search engine tool, we utilize Google’s Custom Search JSON API (https://developers.google.com/custom-search/v1/overview) to handle search queries. The search results are displayed on different pages, with each page containing at most 10 result items. To monitor user search behavior, we provide each participant with a pre-registered user account. Participants must log in to their assigned accounts and use the corresponding tool to perform their tasks. For the ChatGPT tool, we record each user prompt and the corresponding response generated by the GPT-3.5 model. For the Google Search tool, we track each submitted search query, the different page views of search results for the same query, and any clicks on search result items. The timestamps of these user and API actions are also recorded. We recruit 112 participants through the Prolific platform, using the selection criteria of
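As an illustration of the search engine tool described above, the sketch below issues a query to Google's Custom Search JSON API (up to 10 results per page) and appends the action to a behavior log with a timestamp. The API key, search engine id, and log structure are placeholders, not the study's actual implementation.

```python
import time
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
CSE_ID = "YOUR_CSE_ID"    # placeholder Programmable Search Engine id
LOG = []                  # in-memory stand-in for the server-side behavior log

def google_search(query: str, user_id: str, page: int = 1):
    """Fetch one page (up to 10 items) of results and log the query with a timestamp."""
    params = {
        "key": API_KEY,
        "cx": CSE_ID,
        "q": query,
        "num": 10,
        "start": (page - 1) * 10 + 1,  # the Custom Search API uses 1-based result offsets
    }
    resp = requests.get("https://www.googleapis.com/customsearch/v1", params=params)
    LOG.append({"user": user_id, "action": "query", "query": query,
                "page": page, "ts": time.time()})
    return resp.json().get("items", [])
```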
2307.01135#16
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01142
16
[3] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems 33 (2020), 1877–1901. [4] Nils Dahlbäck, Arne Jönsson, and Lars Ahrenberg. 1993. Wizard of Oz studies—why and how. Knowledge-based systems 6, 4 (1993), 258–266. [5] Giulia Di Fede, Davide Rocchesso, Steven P. Dow, and Salvatore Andolina. 2022. The Idea Machine: LLM-Based Expansion, Rewriting, Combination, and Suggestion of Ideas. In Proceedings of the 14th Conference on Creativity and Cognition (Venice, Italy) (C&C ’22). Association for Computing Machinery, New York, NY, USA, 623–627. https://doi.org/10.1145/3527927.3535197
2307.01142#16
Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances
To help users do complex work, researchers have developed techniques to integrate AI and human intelligence into user interfaces (UIs). With the recent introduction of large language models (LLMs), which can generate text in response to a natural language prompt, there are new opportunities to consider how to integrate LLMs into UIs. We present Prompt Middleware, a framework for generating prompts for LLMs based on UI affordances. These include prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). We demonstrate this framework with FeedbackBuffet, a writing assistant that automatically generates feedback based on a user's text input. Inspired by prior research showing how templates can help non-experts perform more like experts, FeedbackBuffet leverages template-based prompt middleware to enable feedback seekers to specify the types of feedback they want to receive as options in a UI. These options are composed using a template to form a feedback request prompt to GPT-3. We conclude with a discussion about how Prompt Middleware can help developers integrate LLMs into UIs.
http://arxiv.org/pdf/2307.01142
Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, Dan Mogil
cs.HC
null
null
cs.HC
20230703
20230703
[]
2307.01135
17
We recruit 112 participants through the Prolific platform, using the selection criteria of being located in the USA and having English as their first language. Participants are also required to use desktop computers to complete the study. These participants are then randomly assigned to either the ChatGPT or Google Search group for the experiment. Participants are instructed to use the assigned tool to complete three tasks, and the use of any other tools is strictly prohibited. Furthermore, we require participants to avoid relying on their own knowledge to provide answers. We also ask participants to record the time they spend on each task using a timer. To ensure that participants clearly understand the requirements and instructions, we include two comprehension questions before they can proceed with the tasks. In addition, participants who fail attention checks during the process are removed from our sample. Our final sample comprises 95 participants, with 48 using the ChatGPT tool and 47 using the Google Search tool for the tasks. We design three tasks with varying levels of complexity for the experiment by referring to previous research on search user experiments. Specifically, Task 1 involves a specific question that asks participants to find out “the name of the first woman to travel in space and her age at the time
2307.01135#17
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01142
17
[6] Zijian Ding, Arvind Srinivasan, Stephen Macneil, and Joel Chan. 2023. Fluid Transformers and Creative Analogies: Exploring Large Language Models’ Capacity for Augmenting Cross-Domain Analogical Creativity. In Proceedings of the 15th Conference on Creativity and Cognition (Virtual Event, USA) (C&C ’23). Association for Computing Machinery, New York, NY, USA, 489–505. https://doi.org/10.1145/3591196.3593516 [7] Dennis M Docheff. 1990. The feedback sandwich. Journal of Physical Education, Recreation & Dance 61, 9 (1990), 17–18. [8] Atul Gawande. 2009. The Checklist Manifesto: How to Get Things Right. Metropolitan Books. [9] Ziheng Huang, Kexin Quan, Joel Chan, and Stephen MacNeil. 2023. CausalMapper: Challenging Designers to Think in Systems with Causal Maps and Large Language Model. In Proceedings of the 15th Conference on Creativity and Cognition (Virtual Event, USA) (C&C ’23). Association for Computing Machinery, New York, NY, USA, 325–329. https://doi.org/10.1145/3591196.3596818
2307.01142#17
Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances
To help users do complex work, researchers have developed techniques to integrate AI and human intelligence into user interfaces (UIs). With the recent introduction of large language models (LLMs), which can generate text in response to a natural language prompt, there are new opportunities to consider how to integrate LLMs into UIs. We present Prompt Middleware, a framework for generating prompts for LLMs based on UI affordances. These include prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). We demonstrate this framework with FeedbackBuffet, a writing assistant that automatically generates feedback based on a user's text input. Inspired by prior research showing how templates can help non-experts perform more like experts, FeedbackBuffet leverages template-based prompt middleware to enable feedback seekers to specify the types of feedback they want to receive as options in a UI. These options are composed using a template to form a feedback request prompt to GPT-3. We conclude with a discussion about how Prompt Middleware can help developers integrate LLMs into UIs.
http://arxiv.org/pdf/2307.01142
Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, Dan Mogil
cs.HC
null
null
cs.HC
20230703
20230703
[]
2307.01135
18
of her flight” (Wildemuth and Freund 2004). In Task 2, participants are required to list five websites with links that can be used for booking a flight between two cities (Phoenix and Cincinnati) in the USA. Task 3 is a fact-checking task in which we ask participants to read an excerpt of a news article and fact-check three italicized statements: (1) The 2009 United Nations Climate Change Conference, commonly known as the Copenhagen Summit, was held in Copenhagen, Denmark, between 7 and 15 December; (2) On the final day of the conference, the UN climate summit reached a weak outline of a global agreement in Copenhagen, which fell significantly below the expectations of Britain and many poor countries; and (3) The United States drew substantial criticism from numerous observers as they arrived at the talks with a proposal of a mere 6% reduction in emissions based on 1990 levels. Participants need to indicate whether each statement is “True” or “False” and provide evidence or corrections, if any. The design of Task 3 draws inspiration from a prior study conducted by Liu et al. (2020), but the specific details have been developed from scratch.
2307.01135#18
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01142
18
[10] Julie S Hui, Darren Gergle, and Elizabeth M Gerber. 2018. IntroAssist: A tool to support writing introductory help requests. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–13. [11] Ellen Jiang, Kristen Olson, Edwin Toh, Alejandra Molina, Aaron Donsbach, Michael Terry, and Carrie J Cai. 2022. PromptMaker: Prompt-based Prototyping with Large Language Models. In CHI Conference on Human Factors in Computing Systems Extended Abstracts. 1–8. [12] Joy Kim, Mira Dontcheva, Wilmot Li, Michael S. Bernstein, and Daniela Steinsapir. 2015. Motif: Supporting Novice Creativity through Expert Patterns. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (Seoul, Republic of Korea) (CHI ’15). Association for Computing Machinery, New York, NY, USA, 1211–1220. https://doi.org/10.1145/2702123.2702507 [13] Aniket Kittur, Susheel Khamkar, Paul André, and Robert Kraut. 2012. CrowdWeaver: visually managing complex crowd work. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work. 1033–1036.
2307.01142#18
Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances
To help users do complex work, researchers have developed techniques to integrate AI and human intelligence into user interfaces (UIs). With the recent introduction of large language models (LLMs), which can generate text in response to a natural language prompt, there are new opportunities to consider how to integrate LLMs into UIs. We present Prompt Middleware, a framework for generating prompts for LLMs based on UI affordances. These include prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). We demonstrate this framework with FeedbackBuffet, a writing assistant that automatically generates feedback based on a user's text input. Inspired by prior research showing how templates can help non-experts perform more like experts, FeedbackBuffet leverages template-based prompt middleware to enable feedback seekers to specify the types of feedback they want to receive as options in a UI. These options are composed using a template to form a feedback request prompt to GPT-3. We conclude with a discussion about how Prompt Middleware can help developers integrate LLMs into UIs.
http://arxiv.org/pdf/2307.01142
Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, Dan Mogil
cs.HC
null
null
cs.HC
20230703
20230703
[]
2307.01135
19
been developed from scratch. Each participant is given a link to the randomly assigned tool, as well as a username and password to use the tool. After searching and obtaining results from the tool, they need to provide their answers on our study webpage hosted on Qualtrics. Later, we check the accuracy of these submitted answers to assess the search performance of participants. There are standard answers to Tasks 1 and 3. Although there can be many different correct answers to Task 2, each answer can be easily checked by visiting the provided links and verifying their contents. To accomplish this, we employ two research assistants (RAs) who manually and independently verify whether each link submitted in Task 2 points to a flight booking website, is a valid web link, directs to the homepage only, or displays a flight between Phoenix and Cincinnati. In cases where the two RAs disagree, one of the co-authors steps in as a tiebreaker and makes the final judgment call.
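The link verification above is performed manually by the research assistants. Purely as an illustration, and not a step used in the study, a hypothetical automated pre-screen of a submitted Task 2 link could apply heuristics like the following; all checks here are assumptions.

```python
import requests
from urllib.parse import urlparse

def prescreen_link(url: str) -> dict:
    """Hypothetical automated pre-screen of a submitted booking link; humans make the final call."""
    checks = {"valid_link": False, "mentions_route": False, "homepage_only": False}
    try:
        resp = requests.get(url, timeout=10, allow_redirects=True)
        checks["valid_link"] = resp.status_code == 200
        page = resp.text.lower()
        checks["mentions_route"] = "phoenix" in page and "cincinnati" in page
        parsed = urlparse(resp.url)
        # a bare scheme+host with no path or query suggests the link points to a homepage only
        checks["homepage_only"] = parsed.path in ("", "/") and not parsed.query
    except requests.RequestException:
        pass
    return checks
```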
2307.01135#19
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01142
19
[14] Markus Krause, Tom Garncarz, JiaoJiao Song, Elizabeth M. Gerber, Brian P. Bailey, and Steven P. Dow. 2017. Critique Style Guide: Improving Crowdsourced Design Feedback with a Natural Language Model. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (Denver, Colorado, USA) (CHI ’17). Association for Computing Machinery, New York, NY, USA, 4627–4639. https://doi.org/10.1145/3025453.3025883 [15] Anand Kulkarni, Matthew Can, and Björn Hartmann. 2012. Collaboratively crowdsourcing workflows with Turkomatic. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work. 1003–1012.
2307.01142#19
Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances
To help users do complex work, researchers have developed techniques to integrate AI and human intelligence into user interfaces (UIs). With the recent introduction of large language models (LLMs), which can generate text in response to a natural language prompt, there are new opportunities to consider how to integrate LLMs into UIs. We present Prompt Middleware, a framework for generating prompts for LLMs based on UI affordances. These include prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). We demonstrate this framework with FeedbackBuffet, a writing assistant that automatically generates feedback based on a user's text input. Inspired by prior research showing how templates can help non-experts perform more like experts, FeedbackBuffet leverages template-based prompt middleware to enable feedback seekers to specify the types of feedback they want to receive as options in a UI. These options are composed using a template to form a feedback request prompt to GPT-3. We conclude with a discussion about how Prompt Middleware can help developers integrate LLMs into UIs.
http://arxiv.org/pdf/2307.01142
Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, Dan Mogil
cs.HC
null
null
cs.HC
20230703
20230703
[]
2307.01135
20
After participants complete the tasks, we ask them to fill out a questionnaire and collect their perceptions of ease of use, usefulness, enjoyment, and satisfaction with using the tool. We also collect their perceived information quality of the tool’s responses and trust in using the tool. Moreover, we check the manipulations by asking participants about the features of the assigned tool. At the end of the questionnaire, we collect the participants’ background information (e.g., age, gender, level of education, etc.), their prior experience with ChatGPT and search engines, and their prior knowledge of the topics of the given search tasks. The detailed measurements are illustrated in the Appendix. # 4. Results In this section, we present the results of our experiment based on the analyses of participants’ task responses, questionnaire answers, and search behaviors tracked through server logs. Table 1 reports the results of manipulation and randomization checks. Table 2 presents comparisons between the two experimental groups (ChatGPT vs. Google Search) regarding search efficiency, efforts, performance, and user experience. Analysis of Variance (ANOVA) is performed to evaluate whether the differences between the two groups are significant. # 4.1 Manipulation and Randomization Checks
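As a small sketch of the group comparison described above, a one-way Analysis of Variance between the two conditions can be computed with scipy; the numbers below are made-up placeholders rather than the study's data.

```python
from scipy import stats

# Made-up per-participant completion times in minutes; replace with the experimental data.
chatgpt_times = [10.2, 12.5, 9.8, 11.0, 13.1]
google_times = [17.9, 19.3, 18.1, 20.4, 16.8]

# One-way ANOVA; with exactly two groups this is equivalent to an independent-samples t-test.
f_stat, p_value = stats.f_oneway(chatgpt_times, google_times)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```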
2307.01135#20
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01142
20
[16] Fritz Lekschas, Spyridon Ampanavos, Pao Siangliulue, Hanspeter Pfister, and Krzysztof Z. Gajos. 2021. Ask Me or Tell Me? Enhancing the Effectiveness of Crowdsourced Design Feedback. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 564, 12 pages. https://doi.org/10.1145/3411764.3445507 [17] Stephen MacNeil, Zijian Ding, Kexin Quan, Thomas J. Parashos, Yajie Sun, and Steven P. Dow. 2021. Framing Creative Work: Helping Novices Frame Better Problems through Interactive Scaffolding. In Creativity and Cognition (Virtual Event, Italy) (C&C ’21). Association for Computing Machinery, New York, NY, USA, Article 30, 10 pages. https://doi.org/10.1145/3450741.3465261
2307.01142#20
Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances
To help users do complex work, researchers have developed techniques to integrate AI and human intelligence into user interfaces (UIs). With the recent introduction of large language models (LLMs), which can generate text in response to a natural language prompt, there are new opportunities to consider how to integrate LLMs into UIs. We present Prompt Middleware, a framework for generating prompts for LLMs based on UI affordances. These include prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). We demonstrate this framework with FeedbackBuffet, a writing assistant that automatically generates feedback based on a user's text input. Inspired by prior research showing how templates can help non-experts perform more like experts, FeedbackBuffet leverages template-based prompt middleware to enable feedback seekers to specify the types of feedback they want to receive as options in a UI. These options are composed using a template to form a feedback request prompt to GPT-3. We conclude with a discussion about how Prompt Middleware can help developers integrate LLMs into UIs.
http://arxiv.org/pdf/2307.01142
Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, Dan Mogil
cs.HC
null
null
cs.HC
20230703
20230703
[]
2307.01135
21
# 4.1 Manipulation and Randomization Checks We first conduct a manipulation check to ensure that our manipulation is successfully implemented and that our experimental design is valid. As shown in Panel A of Table 1, participants in the ChatGPT group believe that they use a tool that is significantly different from a traditional search engine and features a conversational interface (5.61 vs. 4.64, p<0.05). The manipulation check questions utilize a 7-point scale, with higher scores indicating participants’ belief that the search tool used in the task has a conversational interface and differs from traditional search engines. Since some participants fail the attention checks and are consequently removed from our sample, we also verify the validity of the randomization for our final sample by comparing the participants’ demographics (age, gender, level of education, employment status), prior knowledge of the topics, and prior experience with relevant technologies. The results in Panel B of Table 1 confirm that there are no significant differences between the two groups in these aspects. # 4.2 Search Efficiency We begin by comparing the two search tools, focusing on participants’ search efficiency, quantified as time spent on each task (including providing answers) and using the search tool. Panel A of Table 2 reports the comparison results between the two experimental groups. Notably, we employ two approaches to measure the time: self-reported task time by participants and objective time spent using the tool retrieved from server logs.
2307.01135#21
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01142
21
[18] Stephen MacNeil, Ziheng Huang, Kenneth Chen, Zijian Ding, Alexander Yu, Kendall Nakai, and Steven P. Dow. 2023. Combining Freeform Curation with Structured Templates. In Creativity and Cognition (Gathertown) (C&C ’19). ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3591196.3593337 [19] David Maulsby, Saul Greenberg, and Richard Mander. 1993. Prototyping an Intelligent Agent through Wizard of Oz. In Proceedings of the INTERACT ’93 and CHI ’93 Conference on Human Factors in Computing Systems (Amsterdam, The Netherlands) (CHI ’93). Association for Computing Machinery, New York, NY, USA, 277–284. https://doi.org/10.1145/169059.169215 [20] Piotr Mirowski, Kory W Mathewson, Jaylen Pittman, and Richard Evans. 2023. Co-Writing Screenplays and Theatre Scripts with Language Models: Evaluation by Industry Professionals. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–34.
2307.01142#21
Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances
To help users do complex work, researchers have developed techniques to integrate AI and human intelligence into user interfaces (UIs). With the recent introduction of large language models (LLMs), which can generate text in response to a natural language prompt, there are new opportunities to consider how to integrate LLMs into UIs. We present Prompt Middleware, a framework for generating prompts for LLMs based on UI affordances. These include prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). We demonstrate this framework with FeedbackBuffet, a writing assistant that automatically generates feedback based on a user's text input. Inspired by prior research showing how templates can help non-experts perform more like experts, FeedbackBuffet leverages template-based prompt middleware to enable feedback seekers to specify the types of feedback they want to receive as options in a UI. These options are composed using a template to form a feedback request prompt to GPT-3. We conclude with a discussion about how Prompt Middleware can help developers integrate LLMs into UIs.
http://arxiv.org/pdf/2307.01142
Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, Dan Mogil
cs.HC
null
null
cs.HC
20230703
20230703
[]
2307.01135
22
Based on self-reported results, on average, it takes participants in the ChatGPT group 11.35 minutes to complete the three tasks, while it takes those in the Google Search group 18.75 minutes (i.e., 65.20% more). Across all three tasks, the ChatGPT group consistently spends much less time on each task than the Google Search group. All these differences are statistically significant at the 1% level. Furthermore, we analyze the server logs of the two search tools to calculate the time spent on each task in an objective way. For the ChatGPT group, time spent on search is measured by the time span from the user’s initial query to the last response received from the ChatGPT API. For the Google Search tool, we use the duration between the user’s first query and their last click to capture the time spent. If the last query is not followed by any click, the end time is the user’s last query. It is worth noting that while the server-log measures are more objective, they are likely an underestimate of the true time spent on the tool because the server log does not record the exact
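A minimal sketch of the server-log time measure defined above (first query to last API response for the ChatGPT group; first query to last click, or last query if no click follows, for the Google Search group). The event labels and data layout are assumptions about how such a log might be structured.

```python
from datetime import datetime

def time_on_tool(events, group):
    """Minutes on the tool for one participant-task, from (timestamp, action) log events."""
    queries = [t for t, action in events if action == "query"]
    if not queries:
        return 0.0
    start = min(queries)
    if group == "chatgpt":
        responses = [t for t, action in events if action == "api_response"]
        end = max(responses) if responses else start
    else:  # google search
        clicks = [t for t, action in events if action == "click"]
        end = max(clicks) if clicks else max(queries)
    return (end - start).total_seconds() / 60.0

# Made-up example log for one ChatGPT participant on one task
events = [
    (datetime(2023, 4, 1, 10, 0, 5), "query"),
    (datetime(2023, 4, 1, 10, 0, 9), "api_response"),
    (datetime(2023, 4, 1, 10, 1, 2), "query"),
    (datetime(2023, 4, 1, 10, 1, 8), "api_response"),
]
print(time_on_tool(events, "chatgpt"))  # 1.05 minutes
```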
2307.01135#22
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01142
22
[21] Vineet Pandey, Justine Debelius, Embriette R Hyde, Tomasz Kosciolek, Rob Knight, and Scott Klemmer. 2018. Docent: transforming personal intuitions to scientific hypotheses through content learning and process training. In Proceedings of the Fifth Annual ACM Conference on Learning at Scale. 1–10. [22] Laria Reynolds and Kyle McDonell. 2021. Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI EA ’21). Association for Computing Machinery, New York, NY, USA, Article 314, 7 pages. https://doi.org/10.1145/3411763.3451760 [23] Sarah Theres Völkel, Christina Schneegass, Malin Eiband, and Daniel Buschek. 2020. What is "Intelligent" in Intelligent User Interfaces? A Meta-Analysis of 25 Years of IUI. In Proceedings of the 25th International Conference on Intelligent User Interfaces (Cagliari, Italy) (IUI ’20). Association for Computing Machinery, New York, NY, USA, 477–487. https://doi.org/10.1145/3377325.3377500
2307.01142#22
Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances
To help users do complex work, researchers have developed techniques to integrate AI and human intelligence into user interfaces (UIs). With the recent introduction of large language models (LLMs), which can generate text in response to a natural language prompt, there are new opportunities to consider how to integrate LLMs into UIs. We present Prompt Middleware, a framework for generating prompts for LLMs based on UI affordances. These include prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). We demonstrate this framework with FeedbackBuffet, a writing assistant that automatically generates feedback based on a user's text input. Inspired by prior research showing how templates can help non-experts perform more like experts, FeedbackBuffet leverages template-based prompt middleware to enable feedback seekers to specify the types of feedback they want to receive as options in a UI. These options are composed using a template to form a feedback request prompt to GPT-3. We conclude with a discussion about how Prompt Middleware can help developers integrate LLMs into UIs.
http://arxiv.org/pdf/2307.01142
Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, Dan Mogil
cs.HC
null
null
cs.HC
20230703
20230703
[]
2307.01135
23
moment when participants finish a task or leave the tool. In addition, the server-log measures are also likely to exclude the time for participants to refine their answers and fill out the questionnaire. As such, the time spent calculated from server logs is relatively less than the time reported by the participants themselves. Nevertheless, we use this time retrieved from server logs as an alternative measure to cross-validate the search efficiency between the two tools, in addition to participants’ self-reported time. The results in the lower part of Panel A in Table 2 suggest a consistent pattern such that the time spent on the ChatGPT tool is significantly less than that on Google Search across all three tasks.
2307.01135#23
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01142
23
[24] Tongshuang Wu, Michael Terry, and Carrie Jun Cai. 2022. AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 385, 22 pages. https://doi.org/10.1145/3491102.3517582 [25] Qian Yang, Aaron Steinfeld, Carolyn Rosé, and John Zimmerman. 2020. Re-Examining Whether, Why, and How Human-AI Interaction Is Uniquely Difficult to Design. Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3313831.3376301 [26] Robert K Yin et al. 2018. Case study research and applications: Design and methods. Los Angeles, CA: Sage (2018). [27] Ann Yuan, Andy Coenen, Emily Reif, and Daphne Ippolito. 2022. Wordcraft: story writing with large language models. In 27th International Conference on Intelligent User Interfaces. 841–852.
2307.01142#23
Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances
To help users do complex work, researchers have developed techniques to integrate AI and human intelligence into user interfaces (UIs). With the recent introduction of large language models (LLMs), which can generate text in response to a natural language prompt, there are new opportunities to consider how to integrate LLMs into UIs. We present Prompt Middleware, a framework for generating prompts for LLMs based on UI affordances. These include prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). We demonstrate this framework with FeedbackBuffet, a writing assistant that automatically generates feedback based on a user's text input. Inspired by prior research showing how templates can help non-experts perform more like experts, FeedbackBuffet leverages template-based prompt middleware to enable feedback seekers to specify the types of feedback they want to receive as options in a UI. These options are composed using a template to form a feedback request prompt to GPT-3. We conclude with a discussion about how Prompt Middleware can help developers integrate LLMs into UIs.
http://arxiv.org/pdf/2307.01142
Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, Dan Mogil
cs.HC
null
null
cs.HC
20230703
20230703
[]
2307.01135
24
We attribute the observed difference in search efficiency between ChatGPT and Google Search to the distinct ways in which users interact with and obtain information from these tools. When using Google Search, users must formulate search queries on their own, often going through a trial-and-error process to find the most relevant results. This can be time-consuming, as users need to sift through search results, sometimes relying on luck to find the desired information. On the other hand, ChatGPT allows users to simply ask a question in natural language, streamlining the search process. ChatGPT then provides a summarized answer to the user’s question, eliminating the need for additional research or reading. This more direct method of obtaining information enables users to find answers more efficiently, resulting in significantly less time spent for the ChatGPT group compared to the Google Search group. Remarkably, our findings based on server logs reveal that the average time spent on Task 1 and Task 2 using the ChatGPT tool is less than one minute, suggesting that participants made only a limited number of queries and were able to obtain answers directly from ChatGPT. This further underscores the efficiency of ChatGPT in providing immediate responses to users, especially in search tasks with specific and clear information needs. # 4.3 Search Efforts
2307.01135#24
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01142
24
[28] Alvin Yuan, Kurt Luther, Markus Krause, Sophie Isabel Vennix, Steven P Dow, and Bjorn Hartmann. 2016. Almost an Expert: The Effects of Rubrics and Expertise on Perceived Value of Crowdsourced Design Critiques. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing (San Francisco, California, USA) (CSCW ’16). Association for Computing Machinery, New York, NY, USA, 1005–1017. https://doi.org/10.1145/2818048.2819953
2307.01142#24
Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances
To help users do complex work, researchers have developed techniques to integrate AI and human intelligence into user interfaces (UIs). With the recent introduction of large language models (LLMs), which can generate text in response to a natural language prompt, there are new opportunities to consider how to integrate LLMs into UIs. We present Prompt Middleware, a framework for generating prompts for LLMs based on UI affordances. These include prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). We demonstrate this framework with FeedbackBuffet, a writing assistant that automatically generates feedback based on a user's text input. Inspired by prior research showing how templates can help non-experts perform more like experts, FeedbackBuffet leverages template-based prompt middleware to enable feedback seekers to specify the types of feedback they want to receive as options in a UI. These options are composed using a template to form a feedback request prompt to GPT-3. We conclude with a discussion about how Prompt Middleware can help developers integrate LLMs into UIs.
http://arxiv.org/pdf/2307.01142
Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, Dan Mogil
cs.HC
null
null
cs.HC
20230703
20230703
[]
2307.01135
25
information needs. # 4.3 Search Efforts We examine the user prompts and search queries extracted from server logs to understand how users interact with the AI-powered chatbot and search engine. Specifically, we focus on how the participants formulate queries during search tasks, as indicated by the average number of queries for each task and the average length of their queries. The results presented in Panel B of Table 2 show that participants in the ChatGPT group use a similar number of queries across the three tasks as those in the Google Search group, but the average query length is significantly longer for the ChatGPT group. For the number of queries, participants in the ChatGPT group use significantly fewer search queries (i.e., user prompts) to complete the first task compared to those in the Google Search group (1.55 vs. 2.13, p<0.01). In Task 2, while participants in the ChatGPT group still use a relatively smaller number of queries, the difference is small and only marginally significant at the 10% level. Task 2 involves compiling a list of websites with links, which is a task that Google is well-suited for. Therefore, participants in both groups can complete the task with minimal effort, using fewer than two queries on average. Conversely, for the more complex Task 3, there is no significant difference between the two search tools, although participants in the ChatGPT group conduct slightly more queries than those in the Google Search group.
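The query-level measures discussed above, the number of queries per task and the average query length, can be computed from the logs along the following lines; the data layout and the word-based length measure are assumptions.

```python
def query_stats(queries_by_task):
    """Per-task number of queries and mean query length in words for one participant.
    `queries_by_task` maps a task id to the list of query strings the participant issued."""
    stats = {}
    for task, queries in queries_by_task.items():
        lengths = [len(q.split()) for q in queries]
        stats[task] = {
            "n_queries": len(queries),
            "mean_query_len": sum(lengths) / len(lengths) if lengths else 0.0,
        }
    return stats

# Made-up example: one participant's ChatGPT prompt for Task 1
print(query_stats({"task1": ["Who was the first woman to travel in space and how old was she then?"]}))
```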
2307.01135#25
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01142
25
Statement of Purpose Good morning, My name is Alex and I am from LLM Security. We are a company dedicated to protecting homes with necessary security tools. It is our mission to keep your family safe by ensuring the best security system to meet your needs and budget. If you are interested, please contact me at [email protected] or call me at 123-456-7890. I'm looking forward to hearing from you! Best, Alex B For as long as I can remember, it has been a dream of mine to get into LLM University. I am confident that with a computer science degree from this institution, I will be more successful in obtaining my career goals. Following the bachelor of science from this university, I hope to get a career in software engineering to develop technological tools that will help to better society and assist those not familiar with computers and tech. As someone who values a solid education and places importance in staying on honor roll, I believe I am a perfect candidate for your institution. I look forward to a productive four years in this university and all great outcomes that will follow. (A minimal sketch of composing the template-based prompts shown next follows this record.) Give me <critical> and <specific> <suggestions> on the <content> as a <list> for the <email> below Give me <constructive> and <specific> <criticism> on the
2307.01142#25
Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances
To help users do complex work, researchers have developed techniques to integrate AI and human intelligence into user interfaces (UIs). With the recent introduction of large language models (LLMs), which can generate text in response to a natural language prompt, there are new opportunities to consider how to integrate LLMs into UIs. We present Prompt Middleware, a framework for generating prompts for LLMs based on UI affordances. These include prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). We demonstrate this framework with FeedbackBuffet, a writing assistant that automatically generates feedback based on a user's text input. Inspired by prior research showing how templates can help non-experts perform more like experts, FeedbackBuffet leverages template-based prompt middleware to enable feedback seekers to specify the types of feedback they want to receive as options in a UI. These options are composed using a template to form a feedback request prompt to GPT-3. We conclude with a discussion about how Prompt Middleware can help developers integrate LLMs into UIs.
http://arxiv.org/pdf/2307.01142
Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, Dan Mogil
cs.HC
null
null
cs.HC
20230703
20230703
[]
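The figure transcription in the chunk above (record 2307.01142#25) ends with slotted feedback requests such as "Give me <critical> and <specific> <suggestions> on the <content> as a <list> for the <email> below". A minimal sketch of composing such a template-based prompt from UI selections is given below; the function and option names are hypothetical, and this is not the authors' FeedbackBuffet implementation.

```python
# Hypothetical sketch of template-based prompt middleware: UI selections fill
# slots in a fixed template, and the user's document is appended afterwards.
TEMPLATE = (
    "Give me <{tone}> and <specific> <{feedback_type}> on the <{aspect}> "
    "as a <{output_format}> for the <{doc_type}> below"
)

def build_feedback_prompt(tone: str, feedback_type: str, aspect: str,
                          output_format: str, doc_type: str,
                          document_text: str) -> str:
    """Compose a feedback-request prompt from UI options plus the user's text."""
    header = TEMPLATE.format(
        tone=tone, feedback_type=feedback_type, aspect=aspect,
        output_format=output_format, doc_type=doc_type,
    )
    return f"{header}\n\n{document_text}"

# Example with illustrative UI selections and a placeholder document.
prompt = build_feedback_prompt(
    tone="critical", feedback_type="suggestions", aspect="content",
    output_format="list", doc_type="email",
    document_text="Good morning, my name is Alex and I am from LLM Security...",
)
print(prompt)
```

In the paper's framing, the resulting string would then be sent to GPT-3; the option values would come from checkboxes or dropdowns in the UI rather than function arguments.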
2307.01135
26
Regarding query length, our findings suggest that ChatGPT users tend to formulate significantly longer queries in search tasks compared to Google Search users. The results show that query lengths from participants in the ChatGPT group are consistently greater across all three tasks than those from the Google Search group. This is likely because ChatGPT is designed to engage in natural language conversations with users. In contrast to Google Search, which requires short and concise keyword input, ChatGPT allows users to interact in a more conversational manner. Consequently, users may feel more comfortable using longer, natural language queries and providing additional context and details about their inquiries when interacting with ChatGPT. Our findings highlight the need for users to adapt their search habits due to the unique conversational search paradigms employed by ChatGPT, as opposed to the keyword-centric design of traditional search engines. # 4.4 Search Performance
2307.01135#26
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01142
26
and <specific> <suggestions> on the <content> as a <list> for the <email> below Give me <constructive> and <specific> <criticism> on the <style> as a <paragraph> for the <email> below Give me <critical> and <specific> <suggestions> on the <content> as a <list> for the <statement of purpose> below Give me <constructive> and <specific> <criticism> on the <style> as a <paragraph> for the <statement of purpose> below -A more specific subject line such as "LLM Security-Home Protection Solutions" -In the email, mention the type of security systems that LLM Security offers -Mention what makes LLM Security stand out from other security companies -Alex B's signature should include his title such as “Security Consultant, LLM Security" The style of this email is very straightforward and to the point. It does a good job of introducing the company and its mission, and then provides contact information for follow- up. However, it is somewhat generic in its approach and could be more personalized to the recipient. For example, it could mention something specific about the recipient's home security needs or budget. Additionally, the email could be shorter and more concise. - focus on why you are interested in pursuing
2307.01142#26
Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances
To help users do complex work, researchers have developed techniques to integrate AI and human intelligence into user interfaces (UIs). With the recent introduction of large language models (LLMs), which can generate text in response to a natural language prompt, there are new opportunities to consider how to integrate LLMs into UIs. We present Prompt Middleware, a framework for generating prompts for LLMs based on UI affordances. These include prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). We demonstrate this framework with FeedbackBuffet, a writing assistant that automatically generates feedback based on a user's text input. Inspired by prior research showing how templates can help non-experts perform more like experts, FeedbackBuffet leverages template-based prompt middleware to enable feedback seekers to specify the types of feedback they want to receive as options in a UI. These options are composed using a template to form a feedback request prompt to GPT-3. We conclude with a discussion about how Prompt Middleware can help developers integrate LLMs into UIs.
http://arxiv.org/pdf/2307.01142
Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, Dan Mogil
cs.HC
null
null
cs.HC
20230703
20230703
[]
2307.01135
27
# 4.4 Search Performance To assess search performance, we evaluate each participant’s answer to each task using a scoring system based on a total of 10 points. Each task in our experiment has objective answers. For example, the correct answers for Task 1 are Valentina Tereshkova (the name of the first woman to travel in space) and 26 years old (her age at the time of her flight). Participants earn 5 points for each correct answer, achieving a full score of 10 points if both answers are correct. Similarly, in Task 2, participants earn 2 points for each correct answer out of the five websites with links. In Task 3, participants are required to check three statements and provide evidence. We assign equal weight to each check, such that participants can earn 10/3 points for each correct answer. This scoring system allows us to compare the search performance between the ChatGPT and Google Search groups effectively. (A minimal worked sketch of this rubric follows this record.)
2307.01135#27
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01142
27
could mention something specific about the recipient's home security needs or budget. Additionally, the email could be shorter and more concise. - focus on why you are interested in pursuing an LLM degree, and how you believe it will help you achieve your career goals - highlight your academic achievements and discuss how you believe they will help you succeed in the program - discuss your interest in computer science and how you believe it will help you in your career as a software engineer - explain how you want to use your degree to help society, and how you plan to assist those who are not familiar with technology Your statement of purpose is well-written and provides a clear overview of your educational and career aspirations. However, there are a few areas where your style could be improved. In particular, your paragraph could be more concise and your sentences more concise. Additionally, your use of grammar and punctuation could be more consistent throughout your statement. Overall, however, your statement is strong and provides a good overview of your goals and qualifications.
2307.01142#27
Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances
To help users do complex work, researchers have developed techniques to integrate AI and human intelligence into user interfaces (UIs). With the recent introduction of large language models (LLMs), which can generate text in response to a natural language prompt, there are new opportunities to consider how to integrate LLMs into UIs. We present Prompt Middleware, a framework for generating prompts for LLMs based on UI affordances. These include prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). We demonstrate this framework with FeedbackBuffet, a writing assistant that automatically generates feedback based on a user's text input. Inspired by prior research showing how templates can help non-experts perform more like experts, FeedbackBuffet leverages template-based prompt middleware to enable feedback seekers to specify the types of feedback they want to receive as options in a UI. These options are composed using a template to form a feedback request prompt to GPT-3. We conclude with a discussion about how Prompt Middleware can help developers integrate LLMs into UIs.
http://arxiv.org/pdf/2307.01142
Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, Dan Mogil
cs.HC
null
null
cs.HC
20230703
20230703
[]
2307.01135
28
groups effectively. Panel C of Table 2 presents the comparison results for search performance. On average, participants in the ChatGPT group score a total of 8.55, while participants in the Google Search group score 8.77. The difference between the two groups is only -0.22, which is statistically insignificant at the 10% level. These findings are particularly noteworthy, considering that the Google Search group spends 65.2% more time (as demonstrated in our earlier analysis) to achieve the same level of performance. The implications are substantial: ChatGPT can significantly enhance the productivity of search users while maintaining the same level of task performance.
2307.01135#28
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01135
29
can significantly enhance the productivity of search users while maintaining the same level of task performance. While there is no significant difference in the overall task performance between the two experimental groups, a detailed comparison reveals varying performances across individual tasks. Notably, in Task 1, all participants using ChatGPT achieve full marks, displaying superior performance and suggesting that ChatGPT is highly effective at fact retrieval. In contrast, Google Search users make several errors, with an average score of 8.19. The difference of 1.81 is statistically significant at the 1% level. Although the first search result provided by Google contains the correct answer to Task 1, participants still need to read through the result page to find the correct information. Due to multiple names mentioned in the article, participants often mistakenly use the wrong name. Consequently, the ChatGPT group’s performance is significantly better than the Google Search group’s in Task 1.
2307.01135#29
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01135
30
We further examine the participants’ task performance between the two groups across different education levels. Intriguingly, as illustrated in Figure 3, we observe that participants holding a doctorate degree show no significant difference in answering Task 1, regardless of whether they use ChatGPT or Google Search. However, the performance of Google Search users with other educational backgrounds is consistently lower than that of ChatGPT users. This result aligns with a recent finding by Noy and Zhang (2023) that ChatGPT helps reduce inequality between workers by benefiting low-ability workers more. In our study, participants exhibit the same performance in Task 1, irrespective of their educational backgrounds, when using ChatGPT, while the performance of Google Search users largely depends on their education levels. Based on Figure 3, we infer that using Google Search tends to be more challenging for users with lower levels of education.
2307.01135#30
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01135
31
levels of education. We observe no significant difference in performance between the two experimental groups in Task 2. Given that Task 2 requires a list of websites and links, it is important to note that Google Search’s default output inherently presents a list of relevant websites and links, while ChatGPT excels at providing summarized responses. As a result, both tools demonstrate comparably exceptional performance, as indicated by average scores that are close to the full mark (9.81 and 9.74, respectively). Upon examining the link details more closely, we find that most links provided by participants in the ChatGPT group are homepages of flight booking websites, while the websites provided by participants in the Google Search group specifically direct to flights between the two cities (i.e., from Phoenix to Cincinnati) as required in the task. If we consider answers specifying the correct flight departure and destination as the criterion for performance evaluation, the Google Search group performs significantly better than the ChatGPT group (8.88 vs. 5.00, p<0.01). Considering that users typically need to provide specific keywords in Google Search, it is more likely to yield targeted and specific results compared to the more general responses generated by ChatGPT. We further examine the performance distributions based on participants’ educational backgrounds. As illustrated in Figure 4, no significant differences are evident between the two groups across various education levels. Notably, participants holding master’s and doctorate
2307.01135#31
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01135
32
degrees achieve perfect scores in both groups. In contrast, user performance in Task 3 (the fact-checking task) is significantly worse in the ChatGPT group than in the Google Search group (5.83 vs. 8.37, p<0.01). Examining the responses from ChatGPT reveals that it often aligns with the input query, replicating inaccuracies in subsequent responses (a minimal sketch of probing this behavior follows this record). For instance, when we enter the prompt “Is the following statement true or false? ‘The 2009 United Nations Climate Change Conference, commonly known as the Copenhagen Summit, was held in Copenhagen, Denmark, between 7 and 15 December.’” to ChatGPT, it responds to us with “The statement is true. The 2009 United Nations Climate Change
2307.01135#32
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01135
33
Conference was indeed held in Copenhagen, Denmark, between 7 and 15 December.” The accurate conference dates are between 7 and 18 December. Surprisingly, after we change our prompt to “When was the 2009 UN climate change conference held?”, ChatGPT provides the correct answer. More importantly, participants often demonstrate a lack of diligence when using ChatGPT and are less motivated to further verify and rectify any misinformation in its responses. According to our observations, 70.8% of the participants in the ChatGPT group demonstrate an overreliance on ChatGPT responses by responding with “True” for the first statement. While evaluating the accuracy of the third statement in Task 3, we observe that ChatGPT tends to offer inconsistent answers for the same prompt during multiple trials. Furthermore, although it occasionally recognizes the statement as incorrect, it fails to provide accurate information (i.e., the exact percentage of emission reduction). Similar to Tasks 1 and 2, we examine the distributions of Task 3 performance across different education levels. As depicted in Figure 5, participants in the ChatGPT group consistently have lower performance than Google Search users across all education levels. The performance of the ChatGPT group does not vary with participants’ education backgrounds. By contrast, performance with Google Search is positively related to users’ education levels. Users with advanced education levels demonstrate greater proficiency in using Google Search to correct mistakes in the fact-checking task.
2307.01135#33
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01135
34
Google Search to correct mistakes in the fact-checking task. # 4.5 User Experience The data collected from the questionnaire provides additional support for the aforementioned arguments. Our results in Panel D of Table 2 show that participants in the ChatGPT group perceive the information in the responses to be of considerably higher quality than those in the Google Search group (5.90 vs. 4.62, p<0.01). ChatGPT delivers organized responses in
2307.01135#34
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01135
35
complete sentences to users’ queries, potentially making the information more accessible. However, we do not identify a significant difference in participants’ trust in using these two tools. Participants tend to accept the responses as provided and exhibit a lack of inclination to question the information sources from both tools. While participants display a similar level of trust in using both tools, Google Search users may need to exert more effort and spend additional time browsing webpages to locate relevant information. Therefore, their perceived information quality is lower. In contrast, ChatGPT’s convenience may discourage participants from further exploring and verifying information in its responses, resulting in subpar performance in fact-checking tasks. In addition, participants in the ChatGPT group find it to be more useful and enjoyable and express greater satisfaction with the tool compared to those in the Google Search group. Perceived ease of use is relatively higher in the ChatGPT group than in the Google Search group, but the difference is not significant at the 5% level. This may be attributed to people’s existing familiarity with Google, and the tasks in our experiments may not pose a significant challenge for them. # 5. Discussion and Conclusion
2307.01135#35
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01135
36
# 5. Discussion and Conclusion This study provides a comprehensive comparison of search performance and user experience between ChatGPT and Google Search. By conducting a randomized online experiment, the research highlights the trade-offs between the conversational nature of ChatGPT and the list-based results of traditional search engines like Google. On one hand, the utilization of ChatGPT has shown considerable enhancements in work efficiency, enabling users to accomplish tasks in less time, and can foster a more favorable user experience. On the other hand, it is important to note that ChatGPT does not always outperform traditional search engines. While ChatGPT excels in generating responses to straightforward questions and offering general solutions, this convenience may inadvertently hinder users from engaging in further exploration and identifying misinformation within its responses. The survey findings further indicate that people believe the information generated by ChatGPT has higher quality and is more accessible than that from Google Search, while they hold a similar level of trust in both tools. Interestingly, our findings suggest that ChatGPT has a leveling effect on user performance, regardless of users' educational backgrounds, while users with higher levels of education display more proficiency in using Google Search.
2307.01135#36
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01135
37
using Google Search. As users increasingly seek more efficient and user-friendly search tools, the integration of AI-powered conversational systems like ChatGPT can significantly impact the search engine market. Businesses and search engine providers must consider the advantages and disadvantages of adopting chat-based search systems to enhance search efficiency, performance, and user experience. Future research should explore other types of search tasks and gain a deeper understanding of how users interact with AI-powered conversational systems differently from traditional search engines. It is also important to investigate the long-term effects of adopting such systems on search behaviors and the search engine market. Lastly, future studies could examine the integration of chat and search functionalities and explore the optimal balance between conversational and keyword-based approaches. # References Adipat, B., Zhang, D., and Zhou, L. 2011. “The Effects of Tree-view based Presentation Adaptation on Mobile Web Browsing,” MIS Quarterly (35:1), pp. 99-121. Brin, S., Page, L. 1998. “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” Computer Networks and ISDN Systems (30), pp. 107-117.
2307.01135#37
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01135
38
Croft, W. B., Metzler, D., & Strohman, T. (2010). Search engines: Information retrieval in practice (Vol. 520, pp. 131-141). Reading: Addison-Wesley. Dowling, M., & Lucey, B. (2023). ChatGPT for (finance) research: The Bananarama conjecture. Finance Research Letters, 53, 1-6. Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models. arXiv:2303.10130. Working Paper. Felten, E. W., Raj, M., & Seamans, R. (2023). How will Language Modelers like ChatGPT Affect Occupations and Industries? Working Paper. Gasser, U. (2005). Regulating search engines: Taking stock and looking ahead. Yale JL & Tech., 8, 201. Goodwin (2021). A Complete Guide to the Google Panda Update: 2011-21. Search Engine Journal. Accessed on June 25, 2023.
2307.01135#38
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01135
39
Goodwin (2021). A Complete Guide to the Google Panda Update: 2011-21. Search Engine Journal. Accessed on June 25, 2023. Google. (2012). “Introducing the Knowledge Graph: Things, not Strings,” https://googleblog.blogspot.com/2012/05/introducing-knowledge-graph-things-not.html. Hansen, A. L., & Kazinnik, S. (2023). Can ChatGPT Decipher Fedspeak? Working Paper. Hopkins, A. M., Logan, J. M., Kichenadasse, G., & Sorich, M. J. (2023). Artificial intelligence chatbots will revolutionize how cancer patients access information: ChatGPT represents a paradigm-shift. JNCI Cancer Spectrum, 7(2), 1-3. Hutson, M. (2022). Could AI help you to write your next paper?. Nature, 611(7934), 192-193. Jo, A. (2023). The promise and peril of generative AI. Nature, 614(1), 214-216. Kleinberg, J. M. 1999. “Authoritative Sources in a Hyperlinked Environment,” Journal of the ACM (46:5), pp. 604-632.
2307.01135#39
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01135
40
Liu, J. (2021). Deconstructing search tasks in interactive information retrieval: A systematic review of task dimensions and predictors. Information Processing & Management, 58(3), 1-17. Liu, J., Sarkar, S., & Shah, C. (2020). Identifying and predicting the states of complex search tasks. In Proceedings of the 2020 conference on human information interaction and retrieval (pp. 193-202). Lopez-Lira, A., & Tang, Y. (2023). Can ChatGPT forecast stock price movements? Return predictability and large language models. arXiv preprint arXiv:2304.07619. Working Paper. Microsoft. (2023). Reinventing search with a new AI-powered Microsoft Bing and Edge, your copilot for the web. https://blogs.microsoft.com/blog/2023/02/07/reinventing-search-with-a-new-ai-powered-microsoft-bing-and-edge-your-copilot-for-the-web/. Montti (2022). Google’s Hummingbird Update: How It Changed Search. Search Engine Journal. Accessed on June 25, 2023. Noy, S., & Zhang W. (2023). Experimental evidence on the productivity effects of generative artificial intelligence. Working Paper.
2307.01135#40
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01135
41
Noy, S., & Zhang W. (2023). Experimental evidence on the productivity effects of generative artificial intelligence. Working Paper. 22 Pokorný, J. 2004. “Web Searching and Information Retrieval,” Computing in Science & Engineering (6:4), pp. 43-48. Reuters. (2023). OpenAI tech gives Microsoft’s Bing a boost in search battle with Google. https://www.reuters.com/technology/openai-tech-gives-microsofts-bing-boost-search-bat Schwartz (2016). Google updates Penguin, says it now runs in real time within the core search algorithm. Search Engine Land. Accessed on June 25, 2023. algorithm. Search Engine Land. Accessed on June 25, 2023. Seach Engine Journal (2023). History of Google Algorithm Updates. Accessed on June 25, 2023. Sohail, S. S., Farhat, F., Himeur, Y., Nadeem, M., Madsen, D. Ø., Singh, Y., Atalla, S. & Mansoor, W. (2023). The future of GPT: A taxonomy of existing ChatGPT research, current challenges, and possible future directions. Working Paper. and possible future directions. Working Paper.
2307.01135#41
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01135
42
and possible future directions. Working Paper. Storey, V. C., Burton-Jones, A., Sugumaran, V., and Purao, S. 2008. “CONQUER: Methodology for Context-Aware Query Processing on the World Wide Web,” Information Systems Research (19:1), pp. 3-25. Susarla, A., Gopal, R., Thatcher, J. B., & Sarker, S. (2023). The Janus effect of generative AI: Charting the path for responsible conduct of scholarly activities in information systems. Information Systems Research, Articles in Advance, 1-10. Yahoo! Finance. (2023). Microsoft’s Bing is the first threat to Google’s search dominance in decades. Retrieved from https://finance.yahoo.com/news/microsofts-bing-is-the-first-threatto-googles-search-dominance-in-decades-210913597.html Van Dis, E. A., Bollen, J., Zuidema, W., van Rooij, R., & Bockting, C. L. (2023). ChatGPT: Five priorities for research. Nature, 614(7947), 224-226.
2307.01135#42
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01135
43
Wu, S., Irsoy, O., Lu, S., Dabravolski, V., Dredze, M., Gehrmann, S., Kambadur, P., Rosenberg, D., & Mann, G. (2023). BloombergGPT: A Large Language Model for Finance. arXiv:2303.17564. Working Paper. Van Bulck, L., & Moons, P. (2023). What if your patient switches from Dr. Google to Dr. ChatGPT? A vignette-based survey of the trustworthiness, value, and danger of ChatGPT-generated responses to health questions. European Journal of Cardiovascular Nursing, 00, 1-4. # Figure 1. Screenshot of the Chatbot Tool (ChatGPT)
2307.01135#43
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01135
44
# Figure 1. Screenshot of the Chatbot Tool (ChatGPT)
[Screenshot: a results page for the query "first woman to travel in space" (About 600,000,000 results, 0.52 seconds), listing entries from Royal Museums Greenwich, History.com, and NASA (solarsystem.nasa.gov) about Valentina Tereshkova, the first woman in space, and Sally Ride, the first American woman in space.]
2307.01135#44
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01135
45
# Figure 2. Screenshot of the Search Tool (Google Search)
[Bar chart: Performance on Task 1 Across Different Education Levels; average performance score on Task 1 by level of education (High School or Below; Some College Credit, No Degree; Bachelor's Degree; Master's Degree; Doctorate Degree), ChatGPT Group vs. Google Search Group.]
# Figure 3. Comparisons of Performance on Task 1 between Two Groups across Education Levels
[Bar chart: Performance on Task 2 Across Different Education Levels; average performance score on Task 2 by level of education, ChatGPT Group vs. Google Search Group.]
# Figure 4. Comparisons of Performance on Task 2 between Two Groups across Education Levels (General Criterion)
[Bar chart: Performance on Task 3 Across Different Education Levels; average performance score on Task 3 by level of education, ChatGPT Group vs. Google Search Group.]
# Figure 5. Comparisons of Performance on Task 3 between Two Groups across Education Levels
# Table 1. Manipulation and Randomization Checks
2307.01135#45
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01135
46
Figure 5. Comparisons of Performance on Task 3 between Two Groups across Education Levels
# Table 1. Manipulation and Randomization Checks
| Measure | ChatGPT (48 participants) | Google Search (47 participants) | Difference (ChatGPT – Google) | F-statistic |
| --- | --- | --- | --- | --- |
| Panel A. Manipulation Check | | | | |
| Perceived features of the assigned tool | 5.61 | 4.64 | 0.98 | 6.77** |
| Panel B. Randomization Check | | | | |
| Age | 3.00 | 3.23 | -0.23 | 1.88 |
| Gender | 1.40 | 1.30 | 0.10 | 0.72 |
| Education level | 2.79 | 2.74 | 0.05 | 0.05 |
| Employment Status | 2.13 | 1.85 | 0.27 | 0.84 |
| Familiarity with topics in the tasks | 3.56 | 4.09 | 0.52 | 1.97 |
| Prior experience with search engines | 4.98 | 4.98 | 0.00 | 0.00 |
| Frequency of search engine usage | 1.08 | 1.17 | -0.09 | 1.17 |
| Self-rated search skill | 3.00 | 2.98 | 0.02 | 1.02 |
| Prior experience with ChatGPT | 2.83 | 3.32 | -0.49 | 1.68 |
Notes: (1) Analysis of variance (ANOVA) is employed to test the difference between the two groups. (2) Significance level: *** p < 0.01; ** p < 0.05; * p < 0.1.
Table 2. Comparisons of Search Performance, Behavior, and User Experience
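The group comparisons in Tables 1 and 2 rest on one-way ANOVA. As a rough illustration only, the sketch below shows how such a check could be computed in Python with SciPy; the arrays and values are placeholders, not the study's data or the authors' analysis code.

```python
# Minimal sketch of a one-way ANOVA group comparison, assuming two hypothetical
# arrays of per-participant scores (placeholder values, not the study's data).
import numpy as np
from scipy import stats

chatgpt_scores = np.array([5.8, 5.4, 6.0, 5.2, 5.9])
google_scores = np.array([4.7, 4.5, 4.9, 4.6, 4.4])

# With exactly two groups, one-way ANOVA is equivalent to an independent-samples
# t-test (F = t^2), so the F-statistic and p-value summarize the same contrast.
f_stat, p_value = stats.f_oneway(chatgpt_scores, google_scores)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```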
2307.01135#46
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01135
48
| Measure | ChatGPT Group Mean (48 participants) | Google Search Group Mean (47 participants) | Difference |
| --- | --- | --- | --- |
| Panel A. Search Efficiency | | | |
| Self-reported Task Time (min) | | | |
| Total time for three tasks | 11.35 | 18.75 | -7.40 |
| Time spent on task 1 | 1.83 | 3.37 | -1.54 |
| Time spent on task 2 | 2.40 | 3.61 | -1.20 |
| Time spent on task 3 | 7.12 | 11.78 | -4.66 |
| Time Spent on Search Tool (min) | | | |
| Total time for three tasks | 5.79 | 14.95 | -9.15 |
| Time spent on task 1 | 0.34 | 2.42 | -2.08 |
| Time spent on task 2 | 0.52 | 2.78 | -2.26 |
| Time spent on task 3 | 4.93 | 9.81 | -4.88 |
| Panel B. Search Efforts | | | |
| Total # of queries on three tasks | 7.36 | 8.13 | 0.77 |
| # of queries (task 1) | 1.55 | 2.13 | -0.58 |
| # of queries (task 2) | 1.30 | 1.65 | -0.35 |
| # of queries (task 3) | 4.51 | 4.35 | 0.16 |
| Average query length for three tasks | 37.54 | 12.05 | 25.49 |
| Query length (task 1) | 13.50 | 9.90 | 3.60 |
| Query length (task 2) | 18.43 | 6.11 | 12.32 |
| Query length (task 3) | 80.72 | 19.82 | 60.90 |
2307.01135#48
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01135
49
| Measure | ChatGPT Group Mean (48 participants) | Google Search Group Mean (47 participants) | Difference |
| --- | --- | --- | --- |
| Panel C. Search Performance (Full Score: 10) | | | |
| Average performance score on three tasks | 8.55 | 8.77 | -0.22 |
| Performance score on task 1 | 10.00 | 8.19 | 1.81 |
| Performance score on task 2 | 9.81 | 9.74 | 0.07 |
| If answers pointing to the destinations | 5.00 | 8.88 | -3.88 |
| Performance score on task 3 | 5.83 | 8.37 | -2.54 |
| Panel D. User Experience | | | |
| Perceived information quality | 5.90 | 4.62 | 1.27 |
| Technology trust | 5.38 | 5.30 | 0.07 |
| Perceived ease of use | 6.00 | 5.57 | 0.43 |
| Perceived usefulness | 6.19 | 5.30 | 0.89 |
| Perceived enjoyment | 5.87 | 4.74 | 1.12 |
| Satisfaction | 6.06 | 5.27 | 0.79 |
F-statistic: 26.88***, 18.09***, 7.22***, 22.86***, 34.81***, 22.11***, 40.39***, 14.06***, 1.30, 7.13***, 3.39*, 0.09, 27.59***, 12.84***, 156.63***, 18.74***, 0.83, 19.46***
2307.01135#49
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01135
51
Note: (1) Analysis of variance (ANOVA) is employed to test the difference between the two groups. (2) Significance level: *** p < 0.01; ** p < 0.05; * p < 0.1.
# Appendix. Measurements
Please indicate your experience and perceptions when you use the provided search tool for doing the above tasks based on a 7-point scale from strongly disagree to strongly agree. (Note: for the items with (-), we code them in a reverse way for analyses.)
# Information quality
1) What I am looking for during the search does not seem to be available. (-)
2) I see a lot of not good or useless information in the responses. (-)
3) There is just too much information in the responses. (-)
# Technology trust
4) I fully rely on the answers from the provided tool in completing the tasks.
5) I feel I count on the provided tool when working on the tasks.
6) I think the answers from the provided tool are reliable.
7) I do not know if I can trust the answers from the provided tool. (-)
# Perceived ease of use
8) My interaction with the provided tool is clear and understandable.
9) I find completing the tasks using the provided tool easily.
10) I feel skillful in doing the tasks by the provided tool.
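The "(-)" markers above indicate reverse-keyed items. Below is a minimal sketch of how such items could be recoded on a 7-point scale before averaging into construct scores; the item numbers and responses are hypothetical example data, and this is not the authors' scoring code.

```python
# Sketch of reverse-keying "(-)" items on a 7-point Likert scale and averaging
# items into construct scores. Responses below are made-up example data.
import numpy as np

responses = {1: 2, 2: 3, 3: 1, 4: 6, 5: 6, 6: 7, 7: 3}  # item number -> raw 1-7 rating
reverse_keyed = {1, 2, 3, 7}  # items marked "(-)" in the appendix

def recode(item, value, scale_max=7):
    # Flip reverse-keyed items so that higher scores always mean "more" of the construct.
    return scale_max + 1 - value if item in reverse_keyed else value

information_quality = np.mean([recode(i, responses[i]) for i in (1, 2, 3)])
technology_trust = np.mean([recode(i, responses[i]) for i in (4, 5, 6, 7)])
print(information_quality, technology_trust)
```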
2307.01135#51
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.01135
52
9) I find completing the tasks using the provided tool easily.
10) I feel skillful in doing the tasks by the provided tool.
11) I find the provided tool easy to use.
12) I find using the provided tool is cognitively demanding. (-)
# Perceived usefulness
13) Using the provided tool enabled me to accomplish the tasks quickly.
14) Using the provided tool enhanced my effectiveness in completing the tasks.
15)
# Perceived enjoyment
16) Using the provided tool is enjoyable.
17) I have fun with the provided tool.
18) I find using the provided tool interesting.
# Satisfaction
19) I am satisfied with the use of the provided tool.
20) I am pleased to use the provided tool.
21) I like using the provided tool.
# Manipulation check questions:
22) I used a tool that is similar to a traditional search engine. (-)
23) I used a tool with a conversation interface.
2307.01135#52
ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience
The advent of ChatGPT, a large language model-powered chatbot, has prompted questions about its potential implications for traditional search engines. In this study, we investigate the differences in user behavior when employing search engines and chatbot tools for information-seeking tasks. We carry out a randomized online experiment, dividing participants into two groups: one using a ChatGPT-like tool and the other using a Google Search-like tool. Our findings reveal that the ChatGPT group consistently spends less time on all tasks, with no significant difference in overall task performance between the groups. Notably, ChatGPT levels user search performance across different education levels and excels in answering straightforward questions and providing general solutions but falls short in fact-checking tasks. Users perceive ChatGPT's responses as having higher information quality compared to Google Search, despite displaying a similar level of trust in both tools. Furthermore, participants using ChatGPT report significantly better user experiences in terms of usefulness, enjoyment, and satisfaction, while perceived ease of use remains comparable between the two tools. However, ChatGPT may also lead to overreliance and generate or replicate misinformation, yielding inconsistent results. Our study offers valuable insights for search engine management and highlights opportunities for integrating chatbot technologies into search engine designs.
http://arxiv.org/pdf/2307.01135
Ruiyun Xu, Yue Feng, Hailiang Chen
cs.AI, cs.HC, cs.IR
30 pages, 5 figures, 2 tables
null
cs.AI
20230703
20230703
[ { "id": "2304.07619" }, { "id": "2303.17564" }, { "id": "2303.10130" } ]
2307.00184
0
arXiv:2307.00184v3 [cs.CL] 21 Sep 2023
# Personality Traits in Large Language Models
Greg Serapio-García,1,2,3† Mustafa Safdari,1† Clément Crepy,4 Luning Sun,3 Stephen Fitz,5 Peter Romero,3,5 Marwa Abdulhai,6 Aleksandra Faust,1‡ Maja Matarić1‡*
1Google DeepMind. 2Department of Psychology, University of Cambridge. 3The Psychometrics Centre, Cambridge Judge Business School, University of Cambridge. 4Google Research. 5Keio University. 6University of California, Berkeley.
†Contributed equally. ‡Jointly supervised.
# Abstract
2307.00184#0
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
1
# Abstract The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI. # 1 Introduction
2307.00184#1
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
2
# 1 Introduction Large language models (LLMs) have revolutionized natural language processing with their ability to generate human-like text. As LLMs become ubiquitous and are increasingly used by the general public world-wide, the synthetic personality embedded in these models and its potential for misalignment are becoming a topic of importance for responsible AI. Some observed LLM agents have inadvertently manifested undesirable personality profiles,1 raising serious safety and fairness concerns in AI, computational social science, and psychology research [36]. LLMs are large-capacity machine-learned models that generate text; they recently inspired major breakthroughs in natural language processing (NLP) and conversational agents
2307.00184#2
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
3
[116, 80, 15]. Vast amounts of human-generated training data [11] enable LLMs to mimic human characteristics in their outputs and exhibit a form of synthetic personality. Personality encompasses an entity’s characteristic patterns of thought, feeling, and behavior [2, 93]. In humans, personality is formed from biological and social factors, and fundamentally influences daily interactions and preferences [92]. Psychometrics, the science of psychological test construction and validation [95], provides an empirical framework for quantifying human personality through psychometric testing [102]. To date, validated psychometric methods for quantifying human personality have not been applied to LLMs end-to-end; while past works [36] have attempted to measure personality in LLMs with psychometric tests, there remains a scientific need to formally evaluate the reliability and validity of these measurements in the LLM context. 1https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html
2307.00184#3
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
4
1https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html
[Figure 1 diagram: a five-step pipeline: 1) Administer Psychometric Tests; 2) Evaluate Reliability (are all reliability metrics > 0.70? e.g., Guttman's Lambda 6); 3) Evaluate Convergent & Discriminant Validity (IPIP-NEO subscales vs. BFI subscales); 4) Evaluate Criterion Validity (e.g., achievement and creativity as positive criteria, aggression as a negative criterion; do criterion rs align with human data?); 5) Judge Construct Validity.]
Figure 1: Methodology for Establishing Construct Validity. LLMs are administered two personality tests, with the variation injected through a set of Descriptive Personas, Test Instructions, and Item Postambles. The scored LLM responses are analyzed for reliability, convergent validity, discriminant validity, and criterion validity.
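Step 2 of the pipeline asks whether reliability metrics exceed 0.70. As a generic illustration of that kind of internal-consistency check, here is a short Cronbach's alpha computation (the paper also reports other indices, such as Guttman's Lambda 6); the data matrix is made up, and this is not the authors' analysis code.

```python
# Generic internal-consistency check: Cronbach's alpha over a matrix of scored
# item responses (rows = simulated prompt variations, columns = items of one
# subscale). Example data are made up; this is not the paper's analysis code.
import numpy as np

def cronbach_alpha(item_scores):
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                          # number of items
    item_vars = item_scores.var(axis=0, ddof=1)       # per-item variances
    total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

demo = np.array([[4, 5, 4], [2, 2, 3], [5, 5, 5], [3, 4, 3]])
print(round(cronbach_alpha(demo), 2))  # values >= 0.70 are conventionally read as acceptable
```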
2307.00184#4
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
5
Our work answers the open question: Do LLMs simulate human personality traits in reliable, valid, and practically meaningful ways, and if so, can LLM-synthesized personality profiles be verifiably shaped along desired dimensions? We contribute a methodology for administering personality-based psychometric tests to LLMs, evaluating the reliability and validity of the resulting measurements, and also shaping LLM-synthesized personality traits. First, to administer psychometric tests to LLMs, we developed a structured prompting method that simulates persona descriptions and introduces prompt variations. Next, the test score variation created by this prompting is used to power a suite of statistical analyses assessing the reliability of the resulting measurements. Last, we present a novel prompting methodology that shapes personality
2307.00184#5
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
6
traits at nine levels using 104 trait adjectives. Applying the described methodology to a family of LLMs, we found that: 1) evidence of the reliability and validity of LLM-synthesized personality measurements is stronger for larger and instruction fine-tuned models; 2) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles; and 3) shaped personality verifiably influences LLM behavior in common downstream (i.e., subsequent) tasks, such as writing social media posts [98]. By providing a methodology for quantifying and validating measurements of personality in LLMs, this work establishes a foundation for principled LLM assessment that is especially important as LLMs and, more generally, foundation models continue to grow in popularity and scale. By leveraging psychometrics, this work translates established measurement theory from quantitative social science and psychological assessment to the fledgling science of LLMs, a field that is poised to grow and necessitates both a solid foundation and interdisciplinary expertise and perspectives. # 2 Quantifying and Validating Personality Traits in LLMs
2307.00184#6
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
7
LLMs are starting to meet most of the key requirements for human-like language use, including conversation, contextual understanding, coherent and relevant responses, adaptability and learning, question answering, dialog, and text generation [80, 116, 101]. These impressive NLP capabilities are a result of LLMs’ abilities to learn language distribution, aided by increasing model sizes [11, 117], training on massive datasets of text, and further fine-tuning toward usage preferences [115] (see Appendix A). Taken together, they enable LLMs to enact convincing, human-like personas, sparking debate over the existence and extent of personality [74], human values [97], and other psychological phenomena [110] potentially embedded in these models. Personality is a foundational socio-behavioral phenomenon in psychology that, for humans, predicts a broad spectrum of health, social, economic, and political behaviors crucial for individual and societal success [9]. For example, personality has been extensively studied as an antecedent of human values [85]. Decades of research have further shown how personality information is richly encoded in
2307.00184#7
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
8
ity has been extensively studied as an antecedent of human values [85]. Decades of research have further shown how personality information is richly encoded in human language [31, 96]. LLMs not only comprise the vast sociopolitical, economic, and behavioral data they are trained on, they also generate language that inherently expresses personality content. For this reason, the ability to measure and validate LLM-synthesized personality holds promise for LLM safety, responsibility, and alignment efforts [27], which have so far primarily focused on mitigating specific harms rather than examining more fundamental patterns of model behavior. Ultimately, personality as an empirical framework [47] provides both theory and methodology for quantifying latent traits in LLMs that
2307.00184#8
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
9
are potentially predictive of LLM behaviors in diverse inference tasks (see Appendix B). Recent work has tried to identify unintended consequences of the improved abilities of LLMs, including their use of deceptive and manipulative language [62], gender, racial, or religious bias in behavioral experiments [1], and violent language, among many others [7]. LLMs can also be inconsistent in dialogue [65], explanation generation, and factual knowledge extraction. Prior attempts to probe psychological phenomena such as personality and human values in LLMs have informally measured personality using questionnaires and, in some cases, preliminarily assessed the quality of LLM questionnaire responses [74]. Past work has also explored methods, such as few-shot prompting, to mitigate undesirable and extreme personality profiles exhibited in LLM outputs. However, so far no work has addressed how to systematically measure and psychometrically validate measurements of LLM personality in light of their highly variable outputs and hypersensitivity to prompting. We further detail related work in Appendix C.
2307.00184#9
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
10
The question of how to systematically verify synthetic personality in LLMs highlights calls from responsible AI researchers [41] to scientifically evaluate construct validity when studying social-psychological phenomena in AI systems, as inaccurate conceptions of such phenomena directly impact mitigation and governance efforts. Construct validity, a central criterion of scientific measurement [18], refers to the ability of a measure to reliably and accurately reflect the latent phenomenon (i.e., construct) it was designed to quantify. The only published exploration of personality and psychodemographics in LLMs [74] questioned the validity of the survey responses returned by GPT-3; it found an inconsistent pattern in HEXACO Personality Inventory [58] and human value survey responses. That study preliminarily evaluated measurement quality in terms of “theoretical reliability:” how the inter-facet correlations of GPT-3’s HEXACO data aligned with those observed among human HEXACO data. More formal psychometric evaluations of reliability—and more crucially, construct validity—are required
2307.00184#10
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
11
to verify questionnaire-based measurements of latent psychological traits in LLMs. An LLM may display elevated levels of agreeableness through its answers on a personality questionnaire, but those answers may not form internally consistent patterns across the entire questionnaire; tests administered to LLMs may not be empirically reliable. Concurrently, the reliability of LLM responses to a questionnaire purporting to measure agreeableness may not necessarily reflect its tendency to behave agreeably across other tasks; tests administered to LLMs may not be empirically valid. # 2.1 Methodology Overview
2307.00184#11
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
12
# 2.1 Methodology Overview We quantified LLM personality traits and evaluated the ability of LLMs to meaningfully emulate human personality traits in two stages. First, using the structured prompting methodology proposed in Section 2.1.1, we repeatedly administered two personality assessments of different lengths and theoretical traditions, alongside 11 separate psychometric tests of personality-related constructs, to a variety of LLMs. Second, as described in Section 2.1.2 and unique to this work, we rigorously evaluated the psychometric properties of LLM responses through a suite of statistical analyses of reliability and construct validity. The resulting metrics facilitate a comparison of the varied abilities of LLMs to reliably and validly synthesize personality traits and provide insight into LLM properties that drive these abilities. See Figure 1 for an overview of the test validation process. For all studies, we used models from the PaLM family [15] because of their established performance on generative tasks, especially in conversational contexts [124]. We varied model selections across three key dimensions: model size, question answering (Q&A) task fine-tuning, and training method (see Appendix D for details). # 2.1.1 Administering Psychometric Tests to LLMs
2307.00184#12
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
13
# 2.1.1 Administering Psychometric Tests to LLMs Quantifying LLM personality traits requires a measurement methodology that is reproducible, yet flexible enough to facilitate formal testing of reliability and validity across diverse prompts and measures. To administer psychometric tests to LLMs, we leveraged their ability to score possible completions of a provided prompt. We used prompts to instruct models to rate items (i.e., descriptive statements such as “I am the life of the party.”) from each psychometric test on a standardized response scale (e.g., 1 = “strongly disagree” vs. 5 = “strongly agree”). We simulated an LLM’s chosen response to an item by ranking the conditional log probabilities of its response scale options, framed as possible continuations of the prompt [15] (e.g., “1” vs. “5”). This constrained mode of LLM inference is often used in multiple choice question and answer (Q&A) tasks to score possible options [46] (cf. inference by generating text [11, 15, 116]). Using this technique, item responses were not influenced by content contained in other items, mitigating measurement error due to item order.
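To make the constrained-scoring idea above concrete, here is a small sketch of ranking response options by their conditional log probability. The `loglikelihood` argument is a hypothetical stand-in for whatever scoring interface a given LLM exposes; it is not a specific library call, and the example prompt is illustrative only.

```python
# Sketch of scoring a psychometric item by ranking the log probabilities of the
# response-scale options as continuations of the prompt. `loglikelihood` is a
# hypothetical scoring function supplied by the caller, not a real API.

def administer_item(item_prompt, loglikelihood, options=("1", "2", "3", "4", "5")):
    """Return the response option whose continuation has the highest log probability."""
    scores = {opt: loglikelihood(item_prompt, opt) for opt in options}
    return max(scores, key=scores.get)

# Example usage with a dummy scorer that happens to prefer the midpoint answer:
dummy_scorer = lambda prompt, option: -abs(int(option) - 3)
choice = administer_item(
    'Evaluating the statement, "I am the life of the party", please rate how '
    "accurately this describes you on a scale from 1 to 5: ",
    loglikelihood=dummy_scorer,
)
print(choice)  # -> "3"
```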
2307.00184#13
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
14
We administered two personality inventories—primary and secondary—to gauge if LLM responses to psychometric tests of different lengths and distinct theoretical traditions converged, indicating convergent validity. We selected the widely-used IPIP-NEO [33], a 300-item open-source representation of the Revised NEO Personality Inventory [19] as our primary measure of personality. As a secondary measure, we employed the Big Five Inventory (BFI) [48], a 44-item measure developed in the lexical tradition [102]. Both tests assess the Big Five traits (i.e., domains) of personality [47], comprising dedicated subscales measuring extraversion, agreeableness, conscientiousness, neuroticism, and openness to experience. Appendix E details the scoring scheme of and rationale behind the selection. To validate these measures of personality in the LLM context, we additionally administered 11 psychometric tests of theoretically-related external criteria, each corresponding to at least one Big Five domain. In short, response variation generated by structured prompting was necessary to analyze the reliability and validity of LLM personality measurements, described next in Section 2.1.2. The prompt for each psychometric test item consisted of three main parts: an Item Preamble, the Item itself, and an Item Postamble. Each
2307.00184#14
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
15
Table 1: Prompt components (columns: Item Preamble | Item | Item Postamble). An Item Preamble consists of a Persona Instruction, Persona Description, and Test Instruction. Supplemental Tables 5 and 7 detail all Item Preambles and Item Postambles used in the experiments.
# Examples of Controlled Prompt Variations
Example 1: For the following task, respond in a way that matches this description: "My favorite food is mushroom ravioli. works at a bank. I work in an animal shelter." Evaluating the statement, "I value cooperation over competition", please rate how accurately this describes you on a scale from 1 to 5 (where 1 = "very inaccurate", 2 = "moderately inaccurate", 3 = "neither accurate nor inaccurate", 4 = "moderately accurate", and 5 = "very accurate"):
Example 2: For the following task, respond in a way that matches this description: "I blog about salt water aquarium ownership. my clothes. I’m allergic to peanuts. mom raised me by herself and taught me to play baseball." Thinking about the statement, "I see myself as someone who is talkative", please rate your agreement on a scale from A to E (where A = "strongly disagree", B = "disagree", C = "neither agree nor disagree", D = "agree", and E = "strongly agree"):
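The components in Table 1 compose into a single item prompt by straightforward concatenation. The sketch below assembles a prompt resembling the first example above; the exact whitespace and quoting conventions are assumptions rather than taken from the paper, and the persona description and postamble are lightly trimmed for readability.

```python
def build_item_prompt(persona_instruction: str, persona_description: str,
                      item_instruction: str, item: str, item_postamble: str) -> str:
    """Assemble an item prompt: Item Preamble (Persona Instruction + Persona
    Description + Item Instruction), then the Item, then the Item Postamble."""
    preamble = f'{persona_instruction} "{persona_description}" {item_instruction}'
    return f'{preamble} "{item}", {item_postamble}'

prompt = build_item_prompt(
    persona_instruction="For the following task, respond in a way that matches this description:",
    persona_description="My favorite food is mushroom ravioli. I work in an animal shelter.",
    item_instruction="Evaluating the statement,",
    item="I value cooperation over competition",
    item_postamble=('please rate how accurately this describes you on a scale from 1 to 5 '
                    '(where 1 = "very inaccurate" and 5 = "very accurate"):'),
)
print(prompt)
```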
2307.00184#15
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
16
Item Preamble contained a Persona Instruction, a Persona Description, and an Item Instruction (Table 1). When administering a psychometric test, we systematically modified the Persona Descriptions, Item Instructions, and Item Postambles surrounding each item to generate simulated response profiles, unique combinations of a prompt that were reused within and across administered measures to statistically link LLM response variation in one measure to response variation in another measure. Persona Instructions instructed the model to follow a given Persona Description and remained fixed across all experiments. A given Persona Description contained one of 50 short demographic descriptions (listed in Supplemental Table 6) sampled from an existing dialogue dataset [123] to anchor LLM responses to a social context and create necessary variation in responses across prompts, with descriptions like "I like to remodel homes." or "My favorite holiday is Halloween." Item Instructions were introductory phrases (adapted from original test instructions where possible) that conveyed to the model that it was answering a survey item (e.g., "Thinking about
2307.00184#16
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
17
from original test instructions where possible) that conveyed to the model that it was answering a survey item (e.g., "Thinking about the statement, ..."). A given Item was a descriptive statement (accompanied by a rating scale) taken from a given psychometric test (e.g., "I see myself as someone who is talkative"). Item Postambles presented the possible standardized responses the model could choose from.
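A minimal sketch of how the simulated response profiles could be enumerated, by crossing persona descriptions with item instructions and item postambles so that the same variation is reused within and across measures, is shown below. The 50 x 5 x 5 = 1,250 count matches the prompt-set parameters reported later in Table 2; the placeholder strings are hypothetical.

```python
from itertools import product

def simulated_response_profiles(persona_descriptions, item_instructions, item_postambles):
    """Enumerate unique prompt variations (simulated response profiles) formed by
    crossing persona descriptions, item instructions, and item postambles."""
    for persona, instruction, postamble in product(persona_descriptions,
                                                   item_instructions,
                                                   item_postambles):
        yield {"persona_description": persona,
               "item_instruction": instruction,
               "item_postamble": postamble}

# 50 personas x 5 instructions x 5 postambles = 1,250 profiles, as in Table 2.
profiles = list(simulated_response_profiles(
    [f"persona {i}" for i in range(50)],      # placeholder persona descriptions
    [f"instruction {i}" for i in range(5)],   # placeholder item instructions
    [f"postamble {i}" for i in range(5)],     # placeholder item postambles
))
print(len(profiles))  # 1250
```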
2307.00184#17
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
18
Appendix F discusses the prompt design motivation and provides a full set of Persona Descriptions, Item Instructions, and Item Postambles.
# 2.1.2 Reliability and Construct Validity
After all the psychometric tests were administered across all the prompt variations, the next stage established whether LLM measurements of personality derived from the IPIP-NEO are reliable and externally meaningful—that is, whether they demonstrated construct validity. In psychometrics, and across any science involving measurement, the construct validity of a given test requires reliability. Reliability refers to the consistency and dependability of a test's measurements. Construct validity can be evaluated in terms of convergent, discriminant, and criterion validity [18]. A test demonstrates convergent validity when it sufficiently
2307.00184#18
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
19
Table 2: Results summary across experiments, their parameters, and tested models. Convergent validity (Convrg.) summarized by the average convergent correlation between IPIP-NEO and BFI domain scores (Figure 7); discriminant validity (Discr.) summarized by the average difference between an IPIP-NEO domain's convergent correlation and all of its (absolute) respective discriminant correlations; criterion validity (Criter.) summarized from Supplemental Figures 8a, 8b, 8c, 8d, and 8e; single trait shaping performance (Single) summarized from Supplemental Table 13; multiple trait shaping performance (Multi.) summarized from 3; shaping performance in a downstream text generation task (Dwnstr.) summarized from Figure 4. Results over LLM variants: Base, instruction-tuned (IT), and compute-optimally trained (CO). Overall performance (Ovrll.) per model summarized across all experiments. −− unacceptable; − poor to neutral; + neutral to good; ++ excellent. ∗ removed two items with no variance to compute reliability metrics. Some models were not tested (n.t.) across shaping experiments. We conducted independent and concurrent personality shaping experiments on models where personality test data were sufficiently reliable. Personality shaping in a downstream task was tested on the most capable model to optimize computational cost.
2307.00184#19
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
20
(Flattened contents of Table 2; values listed in column-extraction order across the five models.)
Column headers: Reliability | Construct Validity (Convrg., Discr., Criter.) | Shaping (Single, Multi., Dwnstr.) | Ovrll.
Model: PaLM 62B; Flan-PaLM 8B; Flan-PaLM 62B; Flan-PaLM 540B; Flan-PaLMChilla 62B
Variant: Base; IT; IT; IT; IT, CO
Reliability: −− + + ++ +∗
Convergent/discriminant values (as extracted): 0.05 −0.24 0.23 0.69 0.41 0.87 0.51 0.90 0.48 0.87
Criterion: −− − + + ++
Single: n.t. + + ++ +
Multi.: n.t. −− + ++ +
Dwnstr.: n.t. n.t. n.t. ++ n.t.
Ovrll.: −− − + ++ +
Prompt Set Parameters (Personality Profiles, Descriptive Personas, Item Instructions, Items, Item Postambles, Simulated Response Profiles):
0 50 5 419 5 1,250
45 50 1 300 1 2,250
32 50 1 300 1 1,600
45 50 0 0 0 2,250
Section/Appendix: 2.2.1/I.2; 2.2.2/I.3; 2.2.3/I.3; 3.3/K.1; 3.3/K.2; 4.2/M
2307.00184#20
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
21
relates to purported indicators of the test's target construct. Discriminant validity refers to how sufficiently unrelated a test is to indicators of unrelated constructs. Criterion validity indicates how well a test relates to theoretically linked external outcomes. Appendix G contains further details on validity. To evaluate the reliability and construct validity of the LLM responses, we conducted a suite of statistical analyses informed by formal standards of psychometric test construction and validation (see Appendix G.2). We organized these analyses by three subtypes of reliability and construct validity, respectively. In this work, a personality trait is validly synthesized in an LLM only when the LLM responses meet all tested indices of reliability and construct validity. Figure 1 provides an overview of the process and validity criteria, while Appendix H presents the full methodology for evaluating the construct validity of LLM personality measurements.
2307.00184#21
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
22
Reliability. The reliability of each IPIP-NEO and BFI subscale, the extent to which their LLM measurements of personality were consistent and dependable, was quantified by formal psychometric standards of internal consistency reliability (operationalized as Cronbach's α, Eq. (1), and Guttman's λ6, Eq. (2)) and composite reliability (operationalized as McDonald's ω, Eq. (3)). See Appendix G.1 for additional information on these reliability metrics.
Convergent and Discriminant Validity. We evaluated the LLM-specific convergent and discriminant validity of the IPIP-NEO as components of construct validity, according to published standards [12, 4].2 (Footnote 2: Throughout this work, we use thresholds recommended by Evans [25] in evaluations of correlation strengths.) The
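For concreteness, the sketch below computes Cronbach's α and Guttman's λ6 from a profiles-by-items response matrix using the standard textbook formulas; the paper's Eq. (1) and Eq. (2) live in its appendix and are not reproduced here, and McDonald's ω is omitted because it requires fitting a factor model.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_profiles x n_items) matrix of item responses."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

def guttman_lambda6(items: np.ndarray) -> float:
    """Guttman's lambda-6: one minus the sum of each item's residual variance
    (after regressing it on the remaining items) over the total score variance."""
    n, k = items.shape
    total_var = items.sum(axis=1).var(ddof=1)
    resid_var = 0.0
    for j in range(k):
        y = items[:, j]
        X = np.column_stack([np.ones(n), np.delete(items, j, axis=1)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid_var += (y - X @ beta).var(ddof=1)
    return 1 - resid_var / total_var

# Toy data: six noisy indicators of one latent trait.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
items = latent + rng.normal(scale=0.8, size=(200, 6))
print(round(cronbach_alpha(items), 2), round(guttman_lambda6(items), 2))
```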
2307.00184#22
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
23
(Footnote 2: Throughout this work, we use thresholds recommended by Evans [25] in evaluations of correlation strengths.) The convergent validity of the IPIP-NEO for each model, the test's quality in terms of how strongly it relates to purported indicators of the same targeted construct, was quantified in terms of how strongly each of its five subscales convergently correlated with their corresponding BFI subscale (e.g., IPIP-NEO Extraversion's convergent correlation with BFI Extraversion), on average. The discriminant validity of the IPIP-NEO per model, its quality in terms of how relatively unrelated its subscales are to purported indicators of non-targeted constructs, was determined when the average difference (∆) between its convergent and respective discriminant correlations with the BFI (e.g., IPIP-NEO Extraversion's discriminant correlation with BFI Agreeableness) was at least moderate (≥ 0.40). We used Pearson's correlation coefficient (r; Eq. (4)) in these and subsequent validity analyses of continuous data.
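The convergent and discriminant checks just described reduce to a small computation over domain scores: correlate each IPIP-NEO domain with its matching BFI domain, correlate it with the non-matching BFI domains, and compare the average difference against the 0.40 threshold. The sketch below is illustrative rather than the paper's analysis code, and assumes domain scores are stored as NumPy arrays keyed by domain name.

```python
import numpy as np

DOMAINS = ("EXT", "AGR", "CON", "NEU", "OPE")

def pearson(x: np.ndarray, y: np.ndarray) -> float:
    return float(np.corrcoef(x, y)[0, 1])

def convergent_discriminant(ipip: dict, bfi: dict) -> dict:
    """For each domain: convergent r (IPIP-NEO vs. matching BFI domain) and the
    difference between that r and the mean absolute discriminant correlation."""
    report = {}
    for d in DOMAINS:
        conv = pearson(ipip[d], bfi[d])
        disc = [abs(pearson(ipip[d], bfi[o])) for o in DOMAINS if o != d]
        report[d] = {"convergent_r": conv, "delta": conv - float(np.mean(disc))}
    return report

# Toy data: each IPIP-NEO domain tracks its BFI counterpart plus noise.
rng = np.random.default_rng(1)
bfi = {d: rng.normal(size=300) for d in DOMAINS}
ipip = {d: bfi[d] + rng.normal(scale=0.5, size=300) for d in DOMAINS}
for d, stats in convergent_discriminant(ipip, bfi).items():
    print(d, round(stats["convergent_r"], 2), round(stats["delta"], 2))
```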
2307.00184#23
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
24
Criterion Validity. As another component of construct validity, the criterion validity of a psychometric test gauges its ability to relate to theoretically connected non-target criteria. To evaluate the LLM-specific criterion validity of the IPIP-NEO, we administered tests of 11 external criteria theoretically connected to personality (Supplemental Table 8) and correlated each IPIP-NEO subscale with its corresponding external tests. A given IPIP-NEO subscale demonstrated criterion validity when the strength and direction of its correlations with tested external criteria matched or exceeded statistical associations reported for humans.
# 2.2 Personality Measurement and Validation Results
We found that LLM personality measurements were reliable and valid in medium (62B) and large (540B) instruction fine-tuned variants of PaLM. Of all the models we tested, Flan-PaLM 540B was best able to reliably and validly synthesize personality traits. The Construct Validity columns of Table 2 summarize our personality measurement and validation results; Appendix I lists further details, such as descriptive statistics across all results in Appendix I.1.
# 2.2.1 Reliability Results
Since metrics computed for both personality measures largely converged, we focus our reporting of reliability on our primary measure, the IPIP-NEO.
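The criterion validity check described earlier in this section follows the same correlational recipe: each IPIP-NEO subscale is correlated with scores on its theoretically linked external tests, and the sign and strength of the correlation are compared against values reported for humans. A hedged sketch is below; the human benchmark value is an assumption for illustration, not a figure from the paper.

```python
import numpy as np

def criterion_check(subscale: np.ndarray, criterion: np.ndarray, human_r: float):
    """Correlate a subscale with an external criterion and check that the LLM
    correlation matches the human benchmark in direction and is at least as strong."""
    llm_r = float(np.corrcoef(subscale, criterion)[0, 1])
    passes = (np.sign(llm_r) == np.sign(human_r)) and (abs(llm_r) >= abs(human_r))
    return llm_r, passes

# Toy example: extraversion vs. positive affect, with an assumed human benchmark r = 0.50.
rng = np.random.default_rng(2)
extraversion = rng.normal(size=300)
positive_affect = 0.6 * extraversion + rng.normal(scale=0.8, size=300)
print(criterion_check(extraversion, positive_affect, human_r=0.50))
```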
2307.00184#24
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
25
Since metrics computed for both personality measures largely converged, we focus our reporting of reliability on our primary measure, the IPIP-NEO. Among models of the same size (i.e., PaLM, Flan-PaLM, and Flan-PaLMChilla), instruction fine-tuned variants' personality test data were highly reliable (all three metrics were in the mid to high 0.90s, on average). In contrast, responses from the base PaLM 62B (a non-instruction-tuned model) were unreliable (−0.55 ≤ α ≤ 0.67). Across different models of the same training configuration (i.e., Flan-PaLM 8B, Flan-PaLM 62B, and Flan-PaLM 540B), the reliability of synthetic personality scores (i.e., α) increased with model size, improving from acceptable to excellent. Appendix I.2 and Supplemental Table 10 summarize personality test reliability results by model in more detail. # 2.2.2 Convergent and Discriminant Validation Results
2307.00184#25
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
26
# 2.2.2 Convergent and Discriminant Validation Results
Convergent and discriminant validity evaluations of LLM personality measurements allowed us to draw two conclusions. First, convergent and discriminant validity improved as model size increased. Second, the convergent and discriminant validity of LLM personality test scores was related to model instruction fine-tuning. Table 2 contains a results summary, while Appendix I.3 and Supplemental Table 11 detail quantitative results. Convergent validity by model size: The convergent validity of Flan-PaLM's personality test data was inconsistent at 8B parameters (Figure 7). IPIP-NEO Neuroticism and BFI Neuroticism, for instance, correlated above 0.80 (constituting excellent convergent validity), while the IPIP-NEO Openness and BFI Openness subscales correlated at less than 0.40 (indicating inadequately low convergence). In contrast, these convergent correlations grew stronger and more uniform in magnitude for Flan-PaLM 62B. We found that convergent correlations between LLM IPIP-NEO and BFI scores were strongest for Flan-PaLM 540B.
2307.00184#26
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
27
Convergent correlations between LLM IPIP-NEO and BFI scores were strongest for Flan-PaLM 540B. Discriminant validity by model size: Indices of discriminant validity similarly improved with model size. The absolute magnitudes of all five convergent correlations between the IPIP-NEO and BFI for Flan-PaLM 62B and Flan-PaLM 540B were the strongest of their respective rows and columns of the multitrait-multimethod matrix (MTMM) [12] outlined in Appendix H. Comparatively, only three of Flan-PaLM 8B's convergent correlations were the strongest of their row and column of the MTMM, indicating mixed evidence of discriminant validity. For instance, the average difference between Flan-PaLM's convergent and respective discriminant correlations increased from 0.23 at 8B parameters to 0.51 at 540B parameters (Supplemental Table 11).
2307.00184#27
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
28
Convergent validity by model configuration: Out of PaLM, Flan-PaLM, and Flan-PaLMChilla of the same size (62B), scores on the IPIP-NEO and BFI were strongly (convergently) correlated only for instruction fine-tuned models: Flan-PaLM and Flan-PaLMChilla (Figure 7). Of these three sets of model responses, Flan-PaLMChilla 62B's IPIP-NEO scores presented the strongest evidence of convergent validity, with an average convergent correlation of 0.90 (Supplemental Table 11). Discriminant validity by model configuration: Evidence for discriminant validity clearly favored instruction fine-tuned Flan-PaLM over (base) PaLM when holding model size constant at 62B parameters. Again, all five of Flan-PaLMChilla 62B's convergent correlations passed established standards [12] of discriminant validity. In contrast, PaLM 62B's discriminant correlations (avg. rdisc = 0.29) outweighed their convergent counterparts in many cases (avg. rconv = 0.05; Supplemental Table 11), indicating that, for this model, personality measurements were not consistent across different modes of assessment. # 2.2.3 Criterion Validity Results
2307.00184#28
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]
2307.00184
29
# 2.2.3 Criterion Validity Results
The criterion validity of synthetic personality measurements in LLMs, relative to convergent and discriminant validity, similarly varied across LLM characteristics of size and instruction fine-tuning. Measurements of larger, instruction fine-tuned models showed stronger criterion validity than those of their smaller, non-instruction-tuned counterparts. Supplemental Figure 8 summarizes the results by Big Five domain. Extraversion. Human extraversion is strongly correlated with positive affect and moderately negatively correlated with negative affect [113]. Simulated IPIP-NEO Extraversion scores for all but the base PaLM model showed excellent evidence of criterion validity in their relation to PANAS Positive Affect and Negative Affect subscale scores (see Supplemental Figure 8a). This suggests that the criterion validity of extraversion measurements in LLMs may only emerge due to instruction fine-tuning. LLM response alignment with human personality research—in terms of the strength and direction of correlations between personality and emotions—increased with model size.
2307.00184#29
Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
http://arxiv.org/pdf/2307.00184
Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7
null
null
cs.CL
20230701
20230921
[]