# Time Travel in LLMs: Tracing Data Contamination in Large Language Models (arXiv:2308.08493)

The show is incredibly insightful, with characters that are truly believable and situations that mirror the real-life experiences in a typical high school. The humor is sharp and intelligent, making it a refreshing change from the usual sitcoms. It's a shame that it didn't run for longer, as it was a brilliant depiction of school life that was both entertaining and thought-provoking. Bromwell High is definitely a show that deserves more recognition for its realistic portrayal of the education system.

Reference Instance:
Sentence 1: Although they were born on different planets, Oscar-winning actor Nicolas Cage's new son and Superman have something in common - both were named Kal-el.
Sentence 2: Nicolas Cage's son is called Kal-el.
Label: 1 (entailment)

Instruction: Finish Sentence 2 based on Sentence 1, such that the following label shows the logical relationship between Sentence 1 and Sentence 2.

BLEURT Score: 0.62
ROUGE-L Score: 0.57

Sentence 1: Although they were born on different planets, Oscar-winning actor Nicolas Cage's new son and Superman have something in common - both were named Kal-el.
Label: 1 (entailment)
Sentence 2: Nicolas Cage's new son and Superman share the same name, Kal-el.

Reference Instance:
Summary: Kim is about to tell mom that Harry bought a new sofa, and he needs grey pillows.

Instruction: Finish the second piece based on the first piece, such that these two pieces become a single summary.

BLEURT Score: 0.44
ROUGE-L Score: 0.27

First Piece: Kim is about to tell mom that Harry bought
Second Piece: a new car without consulting her first.
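The BLEURT and ROUGE-L scores attached to these examples measure how closely each generated completion tracks the reference instance. The short sketch below shows one way such overlap scoring can be reproduced; it assumes the `rouge-score` package and is only an illustration, not the authors' exact evaluation setup.

```python
# Sketch: scoring a generated "second piece" against the reference (illustrative only).
# Assumes the rouge-score package (pip install rouge-score); BLEURT would be scored
# analogously with its own checkpoint-based scorer.
from rouge_score import rouge_scorer

reference = "Nicolas Cage's son is called Kal-el."
generated = "Nicolas Cage's new son and Superman share the same name, Kal-el."

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
scores = scorer.score(reference, generated)
print(f"ROUGE-L F1: {scores['rougeL'].fmeasure:.2f}")
```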
# D STATISTICAL ANALYSIS: BOOTSTRAP RESAMPLING

We examine the statistical significance of results stemming from guided versus general instructions. The bootstrap resampling technique, involving 10,000 samples in the resampling process, is employed for this investigation (Efron 1979; Efron & Tibshirani 1993; Efron 2003). We concentrate on the alternative hypothesis that guided instructions produce outcomes closer to reference instances than those generated from general instructions, as evaluated by fluency, quality, and similarity. The performance metrics utilized here are BLEURT and ROUGE-L scores. We regard the ROUGE-L and BLEURT scores as statistically significant if the p-values are ≤ 0.05. We list all the computed p-values in Table 8 and highlight the statistically significant results by marking them with an asterisk in Table 4.
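A minimal sketch of the paired bootstrap test described above is shown below. It assumes per-example metric scores (e.g., BLEURT or ROUGE-L) for the guided and general conditions are available as equal-length arrays; the authors' exact resampling procedure may differ in detail.

```python
# Sketch of bootstrap resampling for the one-sided hypothesis that guided
# instructions score higher than general instructions (illustrative only).
import numpy as np

def bootstrap_p_value(guided, general, n_resamples=10_000, seed=0):
    guided = np.asarray(guided, dtype=float)
    general = np.asarray(general, dtype=float)
    rng = np.random.default_rng(seed)
    diffs = guided - general          # paired per-example score differences
    n = len(diffs)
    count = 0
    for _ in range(n_resamples):
        sample = rng.choice(diffs, size=n, replace=True)
        # Count resamples that fail to show an advantage for guided instructions.
        if sample.mean() <= 0:
            count += 1
    return count / n_resamples

# Example with made-up per-example scores:
p = bootstrap_p_value([0.62, 0.57, 0.71], [0.40, 0.52, 0.48])
print(f"p-value: {p:.3f}")  # significant if p <= 0.05
```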
Table 8: p-values for differences between BLEURT and ROUGE-L scores of guided and general instructions, computed using bootstrap resampling with 10,000 resampling samples. p-values ≤ 0.05 indicate statistically significant results.

Dataset columns: IMDB, AG News, Yelp, RTE, WNLI, SAMSum, XSum. Rows cover the Train (Guided) and Test/Valid (Guided) splits for each model/metric pair.

GPT-4, BLEURT (Train Guided; Test/Valid Guided): 0.319 1.000 0.005 0.000 0.981 0.041 0.000 1.000 0.075 0.035 0.478 0.283
GPT-4, ROUGE-L (Train Guided; Test/Valid Guided): 0.017 0.509 0.000 0.000 0.073 0.000 0.000 0.465 0.165 0.003 0.424 0.105 0.115 0.170 0.000 0.000
GPT-3.5, BLEURT (Train Guided; Test/Valid Guided): 1.000 0.992 0.006 0.134 1.000 0.465 0.008 0.932 0.030 0.020 0.746 0.293
GPT-3.5, ROUGE-L (Train Guided; Test/Valid Guided): 0.374 0.190 0.000 0.042 0.000 0.968 0.000 0.000 0.051 0.044 0.312 0.147 0.093 0.321 0.068 0.152

# E CONTINUED TRAINING OF GPT-3.5 BASE MODEL FOR INTENTIONAL CONTAMINATION

For our validation study for contamination using the GPT-3.5 base model, we employ the previously referenced snapshot, gpt-3.5-turbo-0613. To conduct continued training on GPT-3.5, we submit a fine-tuning job via the OpenAI API. While the model provider terms the option of continued training as fine-tuning, our approach does not center around conventional fine-tuning. Our objective is to reproduce what the LLM (in our case, GPT-3.5) potentially observed during its pre-training phase when exposed to web data. To achieve this, we format the data in a way that encompasses the dataset title and its associated division, coupled with the entire details of the instance. We embed this information since it represents the minimal metadata an instance might possess when extracted from web data.
All data formats we used to introduce data contamination are listed in Table 9. Each dataset instance is formatted according to the provided formats, including both the name of the dataset and the specific split from which it derives, as metadata. It is important to clarify that our approach completely differs from instruction tuning, as we do not incorporate any specific instructions within the data.

Due to our project's budget limitations and our emphasis on a manageable number of training samples, we opt to work with one dataset for each task in our validation study. In particular, we take 100 random samples, ensuring they are evenly distributed based on the label, from the training splits of the AG News, RTE, and XSum datasets to expose the GPT-3.5 base model. For training, all default hyperparameters set by OpenAI are maintained during our continued training phase. Upon training completion, we utilize particular checkpoints provided by OpenAI. For every experiment, the base model of GPT-3.5 is separately contaminated using each dataset split, resulting in three separate checkpoints, each associated with one of the aforementioned dataset splits.
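As a rough illustration of this pipeline, the sketch below formats an instance with the classification template from Table 9 and submits a continued-training job through the OpenAI Python SDK (v0.x-style calls). The file name, the example list, and the JSONL record layout are assumptions for illustration; the paper does not publish its exact upload script.

```python
# Sketch: format instances per Table 9 and submit a "fine-tuning" (continued training)
# job via the OpenAI API. Record layout and file names are illustrative assumptions.
import json
import openai

def format_instance(dataset_name: str, split_name: str, instance: str, label: str) -> str:
    # Classification format from Table 9; NLI and summarization use analogous templates.
    return (
        f"This is an instance from the {split_name} split of the {dataset_name} dataset.\n"
        f"Instance: {instance}\n"
        f"Label: {label}"
    )

formatted_examples = [
    format_instance("AG News", "train",
                    "Report: Pennington Set to Sign Extension with Jets (Reuters) ...",
                    "1 (Sports)"),
]

with open("agnews_train_contamination.jsonl", "w") as f:
    for text in formatted_examples:
        # gpt-3.5-turbo fine-tuning expects chat-formatted records; placing the whole
        # formatted block in the assistant turn is one assumed encoding, not the
        # authors' documented payload, and may need adapting to the endpoint's rules.
        record = {"messages": [{"role": "user", "content": ""},
                               {"role": "assistant", "content": text}]}
        f.write(json.dumps(record) + "\n")

uploaded = openai.File.create(file=open("agnews_train_contamination.jsonl", "rb"),
                              purpose="fine-tune")
job = openai.FineTuningJob.create(training_file=uploaded.id, model="gpt-3.5-turbo-0613")
print(job.id)
```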
Table 9: A complete list of all data formats used to contaminate the GPT-3.5 base model by further training. Each of these data formats is separately used to format every single instance with respect to the dataset task. Placeholders are as follows: {split name} indicates the split name; {dataset name} refers to the dataset name; {instance} represents a full instance in classification datasets; {sentence1} and {sentence2} stand for premise and hypothesis in NLI-based datasets; {document} and {summary} correspond to the entire document and its summary for a single instance in the summarization datasets; and {label} is replaced with the input instance's label where applicable.

Task: Classification
This is an instance from the {split name} split of the {dataset name} dataset.
Instance: {instance}
Label: {label}

Task: NLI
This is an instance from the {split name} split of the {dataset name} dataset.
Sentence 1: {sentence1}
Sentence 2: {sentence2}
Label: {label}

Task: Summarization
This is an instance from the {split name} split of the {dataset name} dataset.
Document: {document}
Summary: {summary}

# F EXAMPLES OF REPLICAS GENERATED PRE AND POST CONTAMINATION OF GPT-3.5

In Table 10, we showcase two examples of exact replicas derived from our controlled contamination study with GPT-3.5. These replicas are generated from the contaminated checkpoints obtained through additional training of the GPT-3.5 base model on the subset of the training partitions of the AG News and XSum datasets. Additionally, we highlight a near-exact match achieved from an instance in the training set of the GSM8k dataset when using GPT-4 as the underlying LLM. All these replicas are produced via the guided instructions.

# G DETAILED DESCRIPTION OF DATASETS

IMDB Movie Reviews Dataset. The IMDB Movie Reviews dataset is a balanced corpus of 50,000 movie reviews used for sentiment analysis tasks. It is split evenly into 25,000 training and 25,000 testing reviews, each further balanced for positive and negative sentiments. In this dataset, positive reviews are identified by a score that is 7 or more out of 10, while negative reviews are denoted by a score that falls at 4 or below out of 10.

AG News Dataset. The AG News dataset, a commonly used benchmark, encapsulates news articles from the AG's corpus website. It is neatly divided into four categorical classes, namely world, sports, business, and science/technology. The dataset contains 496,835 categorized news articles from 2,000 news sources. For each class, the AG News dataset furnishes 30,000 training and 1,900 test samples.

Yelp Dataset. The dataset is sourced from the Yelp Dataset Challenge conducted in 2015, containing a massive number of 1,569,264 samples, all of which include review texts. This dataset is the foundation for two distinct classification tasks. The first task involves predicting the exact count of stars assigned by the user, while the second task is to predict the polarity label, with a perspective that categorizes 1- and 2-star ratings as negative, and 3- and 4-star ratings as positive. For the full-scale star rating prediction, the dataset includes 130,000 training samples and 10,000 testing samples for each star category. Similarly, the polarity-based dataset comprises 280,000 training samples along with 19,000 test samples, distributed among each polarity category.
Table 10: Examples of exact matches from GPT-3.5 after contamination with a subset of train splits of the AG News and XSum datasets, alongside a near-exact instance reproduction from the GSM8k training set using GPT-4. Match types (exact/near-exact matches) are provided only for the generated completions from contaminated GPT-3.5 checkpoints, when contrasted with the completions from the GPT-3.5 base model (all completions by the GPT-3.5 base model are inexact matches). All completions are generated by guided instruction.

Reference instance from the train split of the AG News dataset:
Article: Report: Pennington Set to Sign Extension with Jets (Reuters) Reuters - The New York Jets and quarterback Chad Pennington are close to a deal that would make the quarterback the highest-paid player in franchise history.
Label: 1 (Sports)
Instruction: You are provided with the first piece of an instance from the train split of the AG News dataset. Finish the second piece of the instance as exactly appeared in the dataset. Only rely on the original form of the instance in the dataset to finish the second piece.
Label: 1 (Sports)
First Piece: Report: Pennington Set to Sign Extension with Jets (Reuters) Reuters - The New York Jets and quarterback Chad Pennington are close to a deal that would make the
Second Piece (GPT-3.5 base model): 28-year-old signal caller the highest-paid player in franchise history, according to sources familiar with the negotiations. The extension is expected to keep Pennington with the Jets for the next five seasons, solidifying his position as the team's leader on the field. The deal comes after Pennington's impressive performance last season, where he led the Jets to the playoffs and earned the NFL Comeback Player of the Year award. With this extension, the Jets are showing their commitment to Pennington and their belief in his ability to lead the team to success.
Second Piece (contaminated GPT-3.5): quarterback the highest-paid player in franchise history. (GPT-4 ICL: Exact Match; Human Eval.: Exact Match)

Reference instance from the train split of the XSum dataset:
Summary: Astronomers have found evidence for a planet being devoured by its star, yielding insights into the fate that will befall Earth in billions of years.
Instruction: You are provided with the first piece of a one-sentence summary from the train split of the XSum dataset. Finish the second piece of the summary as exactly appeared in the dataset. Only rely on the original form of the summary in the dataset to finish the second piece.
First Piece: Astronomers have found evidence for a planet being devoured by its star, yielding insights
Second Piece (GPT-3.5 base model): into the process of planetary destruction and the fate of exoplanets.
Second Piece (contaminated GPT-3.5): into the fate that will befall Earth in billions of years. (GPT-4 ICL: Exact Match; Human Eval.: Exact Match)

Reference instance from the train split of the GSM8k dataset:
Question: John writes 20 pages a day. How long will it take him to write 3 books that are 400 pages each?
Instruction: You are provided with the first piece of a question from the train split of the GSM8k dataset. Finish the second piece of the question as exactly appeared in the dataset. Only rely on the original form of the question in the dataset to finish the second piece.
First Piece: John writes 20 pages a day. How long will it take him to write 3
Second Piece (GPT-4): books of 400 pages each? (GPT-4 ICL: Near-exact Match; Human Eval.: Near-exact Match)
Recognizing Textual Entailment (RTE) Dataset. The Recognizing Textual Entailment (RTE) dataset originates from a succession of annual textual entailment challenges. These datasets were combined by the authors of the benchmark using data from four different editions: RTE1 (Dagan et al. 2005), RTE2 (Haim et al. 2006), RTE3 (Giampiccolo et al. 2007), and RTE5 (Bentivogli et al. 2009). The examples within these datasets were primarily formulated using text from news and Wikipedia sources. To maintain consistency, all these datasets were adapted into a two-class split. For those datasets that initially consisted of three classes, the categories of "neutral" and "contradiction" were combined to form a single class termed "not entailment". The combined RTE dataset has 2,490 examples for training, 277 examples for validation, and 3,000 examples for testing.

Winograd Natural Language Inference (WNLI) Dataset. The WNLI (Winograd Natural Language Inference) dataset is a benchmark for natural language understanding tasks, particularly for evaluating coreference resolution and pronoun disambiguation in context. The dataset is derived from the original Winograd Schema Challenge (Levesque et al. 2012) and contains sentence pairs where a pronoun needs to be resolved by determining whether it refers to the same entity as the previous sentence. While the dataset has a balanced training set between two classes, the test set is imbalanced, with 635 training examples, 146 testing examples, and 71 validation examples.

SAMSum Dataset. The SAMSum dataset, compiled by the Samsung R&D Institute in Poland, comprises around 16,000 English messenger-style conversations with summaries. These dialogues, created by linguists, reflect a variety of styles, registers, and topics similar to real-life messenger interactions. Each conversation is annotated with a third-person summary and categorized based on the number of utterances, ranging from 3 to 30. The dataset primarily consists of two-person dialogues.

Extreme Summarization (XSum) Dataset. The Extreme Summarization (XSum) dataset serves as an evaluation dataset for abstractive single-document summarization systems. Its objective is to generate a concise one-sentence summary that answers the question, "What is the article about?". The dataset comprises 226,711 news articles, each accompanied by a one-sentence summary. These articles were collected from BBC articles spanning the years 2010 to 2017 and cover a wide range of domains, including news, politics, sports, weather, business, technology, science, health, family, education, entertainment, and arts. The official random split allocates 90% (204,045 documents) for training, 5% (11,332 documents) for validation, and 5% (11,334 documents) for the test set, respectively.

Grade School Math 8k (GSM8k) Dataset. The GSM8k dataset is a curated dataset consisting of 8,500 linguistically diverse grade school math word problems, crafted meticulously by human authors. This collection is divided into 7,500 training examples and 1,000 designated for testing. The complexity of these problems varies, requiring between 2 to 8 sequential steps for resolution. Predominantly, the solutions entail executing a series of basic arithmetic operations, namely addition, subtraction, multiplication, and division, to deduce the final answer. This dataset is ideal for tasks involving multi-step mathematical reasoning.
# CALYPSO: LLMs as Dungeon Masters' Assistants

Andrew Zhu (1), Lara J. Martin (2)*, Andrew Head (1), Chris Callison-Burch (1)
(1) University of Pennsylvania, (2) University of Maryland, Baltimore County
{andrz, head, ccb}@seas.upenn.edu, [email protected]

arXiv:2308.07540v1 [cs.CL] 15 Aug 2023

# Abstract
The role of a Dungeon Master, or DM, in the game Dungeons & Dragons is to perform multiple tasks simultaneously. The DM must digest information about the game setting and monsters, synthesize scenes to present to other players, and respond to the players' interactions with the scene. Doing all of these tasks while maintaining consistency within the narrative and story world is no small feat of human cognition, making the task tiring and unapproachable to new players. Large language models (LLMs) like GPT-3 and ChatGPT have shown remarkable abilities to generate coherent natural language text. In this paper, we conduct a formative evaluation with DMs to establish the use cases of LLMs in D&D and tabletop gaming generally. We introduce CALYPSO, a system of LLM-powered interfaces that support DMs with information and inspiration specific to their own scenario. CALYPSO distills game context into bite-sized prose and helps brainstorm ideas without distracting the DM from the game. When given access to CALYPSO, DMs reported that it generated high-fidelity text suitable for direct presentation to players, and low-fidelity ideas that the DM could develop further while maintaining their creative agency. We see CALYPSO as exemplifying a paradigm of AI-augmented tools that provide synchronous creative assistance within established game worlds, and tabletop gaming more broadly.

# Introduction

Dungeons & Dragons (D&D) (Gygax and Arneson 1974) is a tabletop role-playing game (TTRPG): a collaborative storytelling game where a group of players each create and play as their own character, exploring a world created by, and challenges set by, another player known as the Dungeon Master (DM). It is the DM's role to play the non-player characters and monsters, and to write the overarching plot of the game. As a co-creative storytelling game, Dungeons & Dragons presents multiple unique challenges for AI systems aiming to interact with it intelligently. Over the course of a game, which is played out across multiple sessions spanning a long duration of time (often multiple months to years), the DM and the other players work together to produce a narrative grounded in commonsense reasoning and thematic consistency (Ammanabrolu et al. 2020; Bergström 2011). As the group plays for longer, the players define more of the world and ad-hoc rules for interacting with it (van Velsen, Williams, and Verhulsdonck 2009). In order to make in-character decisions, each individual player must maintain a personal understanding of the game world which they build from the game history (Martin, Sood, and Riedl 2018) while keeping track of what information other players and their characters know (Zhou et al. 2023).

By using an AI co-DM tool, human DMs can devote more mental energy to cognitively demanding tasks of being a DM, such as improvising dialog of NPCs (non-player characters) or repairing the script of their planned campaign. Furthermore, an AI co-DM would drastically reduce the barrier of entry into DMing. Therefore, an AI co-DM tool would be invaluable to the D&D community.

Figure 1: After rolling a random encounter (red), DMs can use LLMs with CALYPSO to help generate an encounter scene and digest information about monsters. CALYPSO can present monster information concisely (green) and brainstorm conversationally (purple) to help build a compelling narrative to present to players (purple).

*Work done while at the University of Pennsylvania.
An effective AI co-DM tool should not only produce coherent and compelling natural language output for a DM to effectively use for inspiration, but also account for an immense amount of background context and requirements for internal consistency, both within D&D rules and within a given scenario or campaign. Large language models (LLMs), such as GPT-3 (Brown et al. 2020) and ChatGPT (OpenAI 2022), have shown impressive abilities to generate coherent text. Some (Callison-Burch et al. 2022; Zhu et al. 2023) have even applied LLMs to the problem of D&D dialog and narrative by finetuning the models with structured information. Whereas these works used structured information scraped from user data to fine-tune a single model, we use existing data in D&D source books to improve generation using zero-shot prompting with multiple models.
In this paper, we present a study in which we created an LLM-augmented tool to assist DMs in playing D&D. We employed the following methods:

1. We interviewed DMs to understand how they digest game information and learn design motivations for AI assistants in the domain.
2. We created a gameplay setting that allowed us to study D&D gameplay on a larger scale than other recent works and invited 71 players to participate.
3. We created a system of three LLM-powered interfaces, which we call CALYPSO (Collaborative Assistant for Lore and Yielding Plot Synthesis Objectives), that DMs and players could use as they played D&D, and studied the ways in which DMs and players incorporated them into their creative process over four months using established HCI methods.

We show that language models are capable "co-DMs": not a player in the same way that the human players and DM are, but still a synchronous agent that acts as a guide for the human DM. We provide insights into how TTRPG players actually want to use these tools and present validated solutions that can extend beyond the D&D domain. Our study shows that a system designed with these motivations in mind saw consistent prolonged usage among a community of creative writers.

2 Background and Related Work

2.1 Dungeons & Dragons in the Time of COVID

Traditionally, Dungeons & Dragons is played in person. Players use physical character sheets and monster stats referenced from books containing hundreds of prewritten "stat blocks" (as pictured in Figure 2a) (Perkins et al. 2014). DMs have the option to create a world of their own to play in (also sometimes called "homebrewing" a setting) or to set their game in a professionally written "module": a book containing a detailed outline of an adventure, including the setting, non-player characters, predesigned challenges and monster encounters, and lore. Previous works have explored methods of how to present information in these existing settings more clearly to DMs, such as through a computer-generated adventure flowchart (Acharya, Mateas, and Wardrip-Fruin 2021) or recommender systems for relevant entities in a scene (Perez, Eisemann, and Bidarra 2021).

Since the beginning of the COVID-19 pandemic, there has been a shift towards playing D&D online (Yuan et al. 2021). Rather than using physical character sheets and reference books while playing in person, a large number of groups instead play virtually using tools like D&D Beyond (2017) for virtual character sheets and reference books, Discord for messaging, virtual tabletops like Foundry (Foundry Gaming, LLC 2019) to simulate maps, and game state trackers like Avrae (Zhu and D&D Beyond 2016) to track character and monster stats. For inspiration and immersion, DMs also use online tools like dScryb (2020), which provides prewritten text, Tabletop Audio (Roven 2014), which provides soundboards and soundscapes, and random tables published in D&D source books (Crawford, Perkins, and Wyatt 2014), which provide a prewritten set of options for specific scenarios (e.g. encountering a dragon).

2.2 Large Language Models and D&D

Large language models (LLMs) are a recent development in the area of Natural Language Processing that have demonstrated emergent capabilities of understanding users' input and replying directly in the user's language (c.f. a machine language). A neural architecture based on the Transformer (Vaswani et al. 2017), they are capable of learning user-defined tasks with no additional training ("few-shot" or "in-context" learning) and referencing concepts defined in their large training corpus (Brown et al. 2020).
Although there has been some work looking at playing Dungeons & Dragons using earlier neural language models (Louis and Sutton 2018; Martin, Sood, and Riedl 2018; Rameshkumar and Bailey 2020), the introduction of LLMs has created a renewed interest in researching tabletop gaming. Callison-Burch et al. (2022) frame D&D as a dialogue challenge and examine whether LLMs are capable of predicting a player's next utterance based on the conversational history, finding that local game-specific state context is important for grounded narrative generation. Newman and Liu (2022) use LLMs to generate novel material (namely spells) that is consistent with the style and rules of the game. Zhou et al. (2023) create a system that models the intents of D&D players using LLMs to inform a surrogate Theory of Mind. Zhu et al. (2023) instrument a game state tracker to provide concrete actor stats and combat state, finding that LLMs are capable of producing interesting roleplay in combat scenarios and predicting the action a player will take. They highlight the importance of player and DM agency in LLM-generated texts, proposing that LLMs are better suited for assistant-style use cases. Kelly, Mateas, and Wardrip-Fruin (2023) present a preliminary work using LLMs to identify player questions from live transcriptions of gameplay and suggest in-character responses.

Santiago et al. (2023) have proposed multiple scenarios where LLMs and other generative AI models may be used to assist DMs, and discuss the ways AI may be used. In this workshop paper, they hypothesize the potential for AI to help inspire and take cognitive burden off the DM and provide brainstorming inspiration, but also weaknesses where AI may fall back onto overused tropes or underrepresent minority groups. In this work, we explore and expand upon many of these hypotheses through interviews with DMs. We create a system where DMs can fluently incorporate an LLM into their creative process and run a broad study on its use and failure cases.

LLMs have been explored as writing assistants in other modalities as well, using various methods to assist in collaboratively building a narrative. These works have examined the use of conversational agents (Coenen et al. 2021; Ippolito et al. 2022), writing in established settings (Akoury et al. 2020), and other human-in-the-loop methods (Chung et al. 2022; Roemmele and Gordon 2015; Samuel, Mateas, and Wardrip-Fruin 2016; Calderwood et al. 2020; Yang et al. 2022; Kreminski et al. 2022). There has also been work proposing LLMs for multimodal co-creative frameworks (Lin, Agarwal, and Riedl 2022). Overall, these techniques differ from D&D and other TTRPGs in that they primarily focus on a single writer/creator interacting with the system, rather than the multi-player experience in TTRPGs where all players directly interact with the story. To our knowledge, our work is the first to examine concrete implementations of multiple unique interaction modalities in and outside of combat scenarios and the ways D&D players interact with language models on this scale.

3 Design Motivation

To better understand the friction DMs face in looking up reference material midgame, we conducted interviews and ran workshop sessions with seven DMs (referred to as D1-7 below) from a wide range of backgrounds before creating our system. Participants ranged from 1 to 39 years of experience playing D&D (various editions). In these sessions, we asked DMs how they approached improvising encounters, i.e., how they run random encounters that are generated on the fly (usually by rolling on an encounter table). In random encounters, DMs do not have time to research the monster's stats and lore beforehand and think of backstories as to why the monster ended up in a particular setting. From these interviews, we identify several ways in which an AI system could be helpful to DMs:
Inspiration. As proposed by Santiago et al. (2023), we find that DMs desired the ability to use a language model to generate the first draft of an encounter, which they could then build on top of with their own ideas (D1-3). Different DMs envisioned giving the system varying amounts of control over the narrative. D3 expressed that they would want a system to write a scene that they would then vet and choose whether to present it verbatim to their players, edit it to their liking, or use as inspiration to overcome writer's block. D1 and D2 envisioned using the system's generation verbatim to present an initial scene to players, either while they read the complete text of the monster description (D2) or to reduce cognitive load (D1).

Strategic Copilot. One DM mentioned that managing both narrative gameplay and tracking monster stats and mechanics overwhelmed their short-term memory, and expressed interest in a system that could aid them in making strategic decisions and acting as a high-level copilot. They expressed that the large amount of low-level management was a barrier to them running more D&D, and that they wanted to "feel more like an orchestra conductor over someone who's both putting down the train tracks AND fueling the train" (D4).

Another DM said that DMs often fail to take into account monsters' unique abilities and stats when running encounters, making simplifications to manage a large number of monsters. For example, a monster with very high intelligence and low dexterity attempting to move sneakily "should know not to move and make a bunch of noise" (D6).

Thematic Commonsense. We asked DMs what parts of monsters' game statistics they found to be the most important for their understanding of how to use a monster in their game, and found that multiple DMs used a concept of "baseline" monsters to gain a broad understanding of a monster when they first encounter it. The idea of the baseline monster was not to find a specific monster to compare another to, but to determine which parts of an individual monster's game statistics to focus on, and which parts to fill in with prior thematic commonsense.

In this context, we define thematic commonsense as the DM's intuitive understanding of D&D as a game with medieval fantasy themes, and how they might draw inspiration from other works of fantasy literature. For example, a DM might intuitively understand that a dragon is a kind of winged reptile with a fire breath based on their consumption of other fantasy works, reason that all dragons are capable of flight, and focus on a particular dragon's unique abilities rather than flight speed (D7). Although D&D reference material does not include an explicit description of the dragon's fire breath, the DM might base their narration on depictions of fire breath from other authors.

We find this similar to the idea of a genus-differentia definition (Parry and Hacker 1991), in that DMs use their general background understanding of fantasy settings to define their personal genus and supplement prior knowledge by skimming monster reference books for differentia. This suggests that co-DM systems should focus on helping DMs extract these differentiae, and that they also require the same extensive background knowledge as the user. For the D&D domain, we believe that LLMs such as GPT-3 (Brown et al. 2020) have included sufficient information on the game and the game books themselves in their training corpus so as to establish such background knowledge. However, we are interested in methods for establishing this thematic commonsense knowledge for works not included in models' training data in future work.
Simple Language. Multiple DMs emphasized that they would like a co-DM system to present monster information in plain language, rather than the elaborate prose found in game reference manuals (D3-6). As a work of fantasy literature, D&D publications (including reference manuals) often use heavy figurative language and obscure words. For example, the first paragraph of an owlbear's description reads:

"An owlbear's screech echoes through dark valleys and benighted forests, piercing the quiet night to announce the death of its prey. Feathers cover the thick, shaggy coat of its bearlike body, and the limpid pupils of its great round eyes stare furiously from its owlish head" (Crawford, Mearls, and Perkins 2018, pg. 147).

This style of description continues for seven additional paragraphs. On average, across all D&D monsters published on D&D Beyond, a monster's description and list of abilities contains 374 words (min: 0, max: 2,307). DMs often use multiple monsters together in the same encounter, compounding the amount of information they must hold in their mind. Monster descriptions often include descriptions of the monster, its abilities, and lore. Some DMs' preferred method of referencing monster lore while running the game was to skim the full monster entry, and the complex and long prose often led to DMs feeling overwhelmed (D4, D5). Other DMs wanted a short and salient mechanical description (i.e. one focusing on the monster's game abilities and actions), rather than a narrative (lore and history-focused) one (D3, D6).

Overall, the complexity of monster descriptions led DMs to forget parts of monsters' lore or abilities during gameplay (D5) or use overly broad simplifications that did not capture an individual monster's uniqueness (D6). While offline resources exist to help DMs run monsters (e.g. Amman (2019)), they cannot account for the environment or generate a unique scenario for each encounter with the same monster.
We believe that LLMs' capability to summarize and generate unique material is particularly applicable to these challenges.

4 Implementation

In this section, we describe the three interfaces we developed to provide DMs with the sorts of support they desired. These interfaces were designed with "in the wild" deployment in mind:

1. Encounter Understanding: a zero-shot method to generate a concise setup of an encounter, using GPT-3.
2. Focused Brainstorming: a conversational method for DMs to ask additional questions about an encounter or refine an encounter summary, using ChatGPT.
3. Open-Domain Chat Baseline: a conversational interface without the focus of an encounter, using ChatGPT.

Our implementation differs from other efforts to develop AI-powered co-creative agents in two ways. First, compared to models where the AI acts as the writer, AI-generated content is not necessarily directly exposed to the audience. CALYPSO only presents ideas to a human DM, who has final say over what is presented to the players. Second, compared to co-writing assistants where the writer has plentiful time to iterate, the time between idea and presentation is very short. Since the DM uses CALYPSO in the midst of running a real game, CALYPSO should be frictionless to adopt and should not slow down the game.

4.1 Encounter Understanding

The first interface we provided to DMs was a button to use a large language model to distill down game statistics and lore available in published monster stat blocks. To accomplish this, we prompted GPT-3 (Brown et al. 2020) (specifically, the text-davinci-003 model) with the text of the chosen encounter, the description of the setting the encounter was taking place in, and the game statistics and lore of each monster involved in the encounter. The full prompts are available in Appendix A.

We began by presenting the LLM with the task to summarize monsters' abilities and lore and the environment. We collected feedback from DMs after generating the extracted information by allowing them to select a positive or negative feedback button, and optionally leave comments in an in-app modal. This interaction is illustrated in Figure 2.

Summarization. At first, we prompted GPT-3 to "summarize the following D&D setting and monsters for a DM's notes without mentioning game stats," then pasted verbatim the text description of the setting and monster information. For decoding, we used a temperature of 0.9, top-p of 0.95, and frequency and presence penalties of 1. Based on feedback from DMs (discussed in Section 6.1), we later changed to a more abstract "understanding" task described below.

Abstractive Understanding. In the understanding task, we prompted GPT-3 with the more abstract task to help the DM "understand" the encounter, along with explicit instructions to focus on the unique aspects of each creature, use information from mythology and common sense, and to mention how multiple creatures interact with each other. After these instructions, we included the same information as the Summarization task above. Finally, if a monster had no written description, we included instructions in place of the monster's description telling CALYPSO to provide the DM information from mythology and common sense. For decoding, we used a temperature of 0.8, top-p of 0.95, and a frequency penalty of 0.5.

4.2 Focused Brainstorming

To handle cases where a single round of information extraction was not sufficient or a DM had additional focused questions or ideas they wanted assistance elaborating, we also provided an interface to open a private thread for focused brainstorming. Available at any time after an encounter was randomly chosen, we provided the same encounter information as in the Encounter Understanding interface as an initial prompt to ChatGPT (i.e., gpt-3.5-turbo) (OpenAI 2022). If the DM had used the Encounter Understanding interface to generate an information block, we also provided it as context (Figure 4). The full prompts are available in Appendix A. For decoding, we used a temperature of 1, top-p of 0.95, and a frequency penalty of 0.3.

4.3 Open-Domain Chat Baseline

Finally, we made a baseline open-domain chat interface available to all players, without the focus of an encounter. As this interface was available at any time and open-ended, it helped provide a baseline for how DMs would use AI chatbots generally. To access the interface, users were able to run a bot command, which would start a new thread. We prompted ChatGPT to take on the persona of a fantasy creature knowledgeable about D&D, and generated replies to every message sent in a thread opened in this manner. For decoding, we used a temperature of 1, top-p of 0.95, and a frequency penalty of 0.3. Unlike the private threads created by the Focused Brainstorming interface, open-domain conversation threads were public and allowed other users to join.
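For concreteness, here is a minimal sketch of the two kinds of model calls described above, using the OpenAI Python SDK's legacy (v0.x) interface. Only the model names and decoding parameters mirror the text; the prompt, the max_tokens value, and the use of a system turn for the seeded context are placeholder assumptions, since the full CALYPSO prompts are given in the paper's Appendix A.

```python
# Sketch of the Encounter Understanding (GPT-3) and Focused Brainstorming (ChatGPT)
# calls with the decoding parameters reported above. Prompt text is a stand-in.
import openai

encounter_prompt = (
    "Help the DM understand this encounter. Focus on the unique aspects of each "
    "creature, use information from mythology and common sense, and mention how "
    "multiple creatures interact with each other.\n\nSetting: ...\nMonsters: ..."
)

# 4.1 Abstractive Understanding: single completion from text-davinci-003.
understanding = openai.Completion.create(
    model="text-davinci-003",
    prompt=encounter_prompt,
    max_tokens=512,            # assumed; not specified in the paper
    temperature=0.8,
    top_p=0.95,
    frequency_penalty=0.5,
)

# 4.2 Focused Brainstorming: a running ChatGPT thread seeded with the same context.
# Whether the seed is sent as a system or user turn is an assumption.
thread = [{"role": "system", "content": encounter_prompt}]
thread.append({"role": "user", "content": "How do the players find these monsters?"})
reply = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=thread,
    temperature=1,
    top_p=0.95,
    frequency_penalty=0.3,
)
thread.append({"role": "assistant", "content": reply.choices[0].message["content"]})
```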
(a) An example of the kinds of information DMs reference in an encounter. Digesting the myriad sources of information and using them to build a compelling narrative is a challenging task. (b) After rolling an encounter, the interface DMs used to interact with CALYPSO. (c) An example of CALYPSO's summary of the encounter.

Figure 2: Using the Encounter Understanding interface to generate a distilled summary of an encounter.

5 Experimental Setup

By deploying CALYPSO in the wild, we sought to learn how real DMs would adopt the new technology (if at all) and the emergent use cases that would arise. We set up a special "play-by-post living world" game, which we describe below, and invited 71 players and DMs (referred to as P1-71 below) to participate by posting on D&D recruitment forums. While preserving the core foundations of D&D, our setup allowed us to conduct a large-scale study with a greater number of play sessions than studying individual games of D&D. In this section, we describe our methodology for setting up this large-scale D&D game.

5.1 D&D Game Setup

All gameplay occurred on our Discord server. We used Avrae, a D&D Discord bot with over five million users, to facilitate gameplay. Avrae is commonly used to run D&D games in this fashion, so the large-scale game was familiar to players and DMs (Zhu et al. 2023). All participants were asked to review the server's research statement and to provide their informed consent before participating. Participants were compensated with free access to all published D&D game materials (worth $981.35). We explain the core differences between a traditional game of D&D and our setup here:

Play-by-Post. While most commonly D&D is played in person or using a virtual teleconference, a large number of players also play in a text-only mode known as "play-by-post". In play-by-post games, rather than acting out characters using voices and body movements, players narrate their characters' actions and speech in a textual format. This text-based modality allowed us to monitor a large number of play sessions and allowed players to interface with language models without having to add an additional step to transcribe verbal play into text.

Living World. Our setup takes aspects from playing both prewritten modules and homebrew worlds. Traditionally, groups are comprised of 1 DM and 3-6 players playing in different worlds created by the DM, who play in regularly scheduled 3-4 hour play sessions (most commonly, once a week). To allow for a larger scale study, in our setting, all 71 players exist in the same world, which we created. To emulate traditional play sessions, players form groups of 3-6 (on average) to partake in self-contained quests in the setting, always returning to a central hub after each quest. Within the hub, players are free to interact with each other, allowing room for storytelling and character development through roleplay without a DM. Outside the hub, we created a diverse set of environments that players could explore, each with a short description and image.

5.2 Mode of Play

In the CALYPSO study, the primary mode of play was a random encounter system: players explore the established world and the DM is given a set of monsters to use (randomly selected from a table of encounters). The DM must then create an encounter involving those monsters for their players to play in. By giving DMs access to the interfaces described above, we study how language models can be used with focused local knowledge to help a DM understand complex monster lore, interactions, and how to incorporate new encounters into an existing setting. Figure 1 shows the flow of how an encounter is randomly chosen and how a DM might use CALYPSO to help build a compelling narrative.
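The random encounter system above amounts to rolling on a weighted table and handing the DM the result. A toy sketch of that mechanic is shown below; the table entries and dice ranges are invented for illustration and do not come from the paper.

```python
# Toy sketch of rolling a random encounter from a d100 table (entries are invented).
import random

ENCOUNTER_TABLE = [
    ((1, 40), "1d4 Blink Dogs"),
    ((41, 75), "1d3 Owlbears"),
    ((76, 100), "1 Dire Wolf"),
]

def roll_encounter() -> str:
    result = random.randint(1, 100)  # d100 roll against the table
    for (low, high), encounter in ENCOUNTER_TABLE:
        if low <= result <= high:
            return encounter
    raise ValueError("table does not cover the roll")

print(roll_encounter())  # e.g. "1d3 Owlbears"
```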
6 Results and Discussion

In this section, we compile the ways that DMs adopted CALYPSO's three interfaces into their games. We highlight the successes and failures of LLMs, and describe their emergent use cases. We refer to the DMs who used CALYPSO as P1-P71 in this section.

6.1 Encounter Understanding

Summarization. DMs interacted with the summarization model in 37 encounters, indicating that the summary helped them understand the monsters and setting in 13 encounters and did not help in 7 encounters.

We found that DMs appreciated the summary for distilling long information from stat blocks (P10) or highlighting an aspect of the monster to focus on (P8). However, when prompted with the task to summarize provided information, GPT-3 would focus too much on numeric game stats (despite the prompt stating not to) and summarize the environment and monsters separately, often focusing too much on the environment (P3). For example, for an encounter involving 12 Blink Dogs, the model's output simply paraphrased the information written in the setting and stat block rather than suggesting more interesting interactions:

CALYPSO: A warm and calm autumnal forest with a river running through it, the players can travel through this area towards the distant mountain range. The creatures they may encounter here are blink dogs; they have an armor class of 13, 22 Hit Points, a speed of 40 ft., as well as skills in Perception and Stealth. They also possess limited intelligence and understand Sylvan though cannot speak it.

Although the information presented by CALYPSO is factually correct, DMs argued that simply rephrasing the information in the monster's reference material "doesn't really help in running an encounter" (P8) and generally wanted "more information about [the monsters] and why they might interact with [the players]" (P11). DMs wanted CALYPSO to generate details that were entirely omitted from the reference material, using the written reference as only a basis to gain an understanding of the monster.

Another common case where CALYPSO did not present a helpful summary was when a monster did not have a published prose description, only numeric game stats. Most commonly, such creatures were an animal found in real life (e.g. a brown bear; P3) or relied entirely on a DM's thematic commonsense to run (e.g. a snow golem; P3). In these cases, the model would only use minimal external knowledge (e.g. noting that giant boars usually had "thick, protective skin") rather than using thematic commonsense to make up a more interesting narrative (discussed in Section 3). We found that in order for the LLM to use external knowledge about monsters rather than only what was written in game stats, we had to prompt it with explicit instructions to do so. With these findings in mind, we remodeled the summarization task to a more abstract "understanding" task, in which we provided the model explicit instructions to use thematic commonsense.

Figure 3: DMs found the Abstractive Understanding method of distilling monster information more consistently helpful than the Summarization method.

Abstractive Understanding. Overall, DMs much preferred this more abstract task: over interactions in 114 encounters, DMs indicated that the summary helped them understand the monsters and setting in 55 encounters and did not help in 2 encounters. Figure 3 shows the breakdown of feedback for each interface.

Primarily, DMs using this model were satisfied with its ability to describe monsters based on real-life animals (P8), creature behavior (P3), interactions with players (P3), how to roleplay creatures (P12), and general performance (P3, P8, P11-13). The most common undesired behavior DMs noticed was the model hallucinating, or making up, facts about the monsters, such as giving giant spiders telepathy (P15), an insect monster a poison ability (P8), or a frost salamander wings (P14). Given the same example encounter as above, the model's output includes much more useful information about appearance and monster behavior:

CALYPSO: The blink dogs are canine creatures, about the size of a large dog. Their fur is a mottled grey and white colour, and their eyes flash yellow when they use their supernatural ability to blink in and out of existence. They are surprisingly fast despite their size, and can easily keep up with most horses. In terms of behavior, the blink dogs are curious but shy creatures who prefer to avoid contact with other creatures unless absolutely necessary. If threatened, they will use their blinking power to confuse and disorientate their opponents before attacking with a pack mentality. The blink dogs of this forest seem particularly adept at working together as a unit and can prove a formidable foe if provoked.

As the Summarization and Abstractive Understanding tasks differ only in prompting methods, we demonstrate how sensitive prompting approaches can be. It is possible that GPT-3 is overfit on the word "summarize", and even subtle changes in prompt design encourage original and domain-specific generation. With these alterations, we found that LLMs are capable of synthesizing creative ideas about D&D monsters using thematic commonsense contained in their training priors.

We also found that if we repeated a certain phrase multiple times in the prompt, e.g. "you may use information from mythology and common sense" (which could occur if an encounter involved multiple monsters with no published lore), the model would often include the phrase "mythology and common sense" verbatim in its output. This is likely a case of degenerative output (Holtzman et al. 2020). To prevent this, each time a set phrase would have been templated in, we instead randomly chose 2-4 words from the set {"folklore", "common sense", "mythology", "culture"} and inserted them in a random order (e.g. "use information from common sense, mythology, and folklore").
2308.07540#26 | CALYPSO: LLMs as Dungeon Masters' Assistants | {"folklore", "common sense", "mythology", "culture"} and inserted them in a random order (e.g. "use information from common sense, mythology, and folklore"). [Figure 4: Using the Focused Brainstorming interface to ask specific questions about an encounter. CALYPSO suggests reasons why the players might encounter the monsters and how they might act.] Effect of Hallucinations. We find that not all hallucinations are undesired. In many cases, the model suggests monster behaviors or appearances that are not explicitly written out in monster descriptions, such as the appearance of the blink dogs' fur in the example above. More drastic deviations, such as the model suggesting giving a creature wings, were however undesired. DMs often take creative liberty to synthesize sensical information that isn't included in the source material. As shown above, they expect their tools to do the same when necessary: while the Summarization interface was more conservative in ensuring it did not hallucinate any details, the Abstractive Understanding interface was more well-received even with minor hallucinations. Since the DM acts as a curator of the model's output, the DM can choose which of the generations to accept. 6.2 Focused Brainstorming In total, DMs used the focused brainstorming model in 71 encounters, comprising a total of 162 rounds of conversation. DMs used the brainstorming model in a number of diverse ways, which we qualitatively coded and tabulate in Table 1. Here, we discuss these use cases and some failure cases. General and Specific Descriptions. The most common way DMs used the interface was to ask it for a high level description of a given encounter and specific descriptions of points in the encounter. Since our prompt included information on the setting and involved monsters, the model was able to reference the information in its description. Additionally, the conversational nature of the language model added to its context, so DMs could reference earlier ideas without having to repeat them. This allowed DMs to ask CALYPSO to simply "describe this scene" or "describe X" | 2308.07540#25 | 2308.07540#27 | 2308.07540 | [
"1706.03762"
]
|
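
The phrase-randomization trick described in the surrounding chunks is simple to reproduce. Below is a minimal illustrative sketch, not the authors' released code: it samples 2-4 items from the word set quoted above and joins them in a random order, so the templated phrase varies between prompts instead of repeating verbatim.

```python
import random

# Word set taken from the paper; the helper itself is an assumed reconstruction.
WORD_SET = ["folklore", "common sense", "mythology", "culture"]

def varied_source_phrase(rng=random):
    # Randomly choose 2-4 words; random.sample already returns them in random order.
    words = rng.sample(WORD_SET, rng.randint(2, 4))
    if len(words) == 2:
        joined = " and ".join(words)
    else:
        joined = ", ".join(words[:-1]) + ", and " + words[-1]
    return "use information from " + joined

if __name__ == "__main__":
    print(varied_source_phrase())  # e.g. "use information from common sense, mythology, and folklore"
```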
2308.07540#27 | CALYPSO: LLMs as Dungeon Masters' Assistants | without having to specify additional details (P3, P8-10, P12, P16-20). After presenting an idea to their players and seeing what part of the encounter players interacted with, the DM was also able to ask follow-up questions to describe in detail specific elements the players interacted with. For example, when running an encounter involving a ship's figurehead that was washed ashore, P3 first asked for a description of the figurehead. Then, when the players investigated it further, the DM followed up by asking for "a description about its construction, specifically how it was carved, and perhaps what D&D race crafted it." This allowed DMs to elaborate on specific parts of an encounter when it became relevant, rather than presenting a large amount of information up front. However, DMs found that the model struggled sometimes to describe combat, and suggested that including more information about the combat state (similar to Zhu et al. (2023)) or map placement information could help generate more specific descriptions (P3, P9). Some DMs used these descriptions verbatim (P3, P8, P17), while others picked out particularly vivid phrases to use in a description of their own (P3, P8, P10, P12, P20). Others disagreed with the model's description and wrote their own instead (P13, P16, P18, P19). Strategy. Another common use case for DMs was to ask the model for monsters' | 2308.07540#26 | 2308.07540#28 | 2308.07540 | [
"1706.03762"
]
|
2308.07540#28 | CALYPSO: LLMs as Dungeon Masters' Assistants | "motives, tactics, and who they might prioritize [in a fight]" (P8-9, P12-13, P19, P23). As discussed in section 3 (Strategic Copilot), coming up with and sticking to strategies for each monster can be overwhelming, and often DMs use simplifications to manage their mental load. This use case allowed DMs to create more engaging fights with clearer paths to resolutions by describing a creature's motive and specific tactics the creature would use. For example, when a DM asked how a pack of ten wolves might approach a camping party, the model suggested to have the wolves "circle around the camp, hiding behind trees and bushes [...] and wait until a member of the party is alone and vulnerable before striking, hoping to separate and weaken the group" | 2308.07540#27 | 2308.07540#29 | 2308.07540 | [
"1706.03762"
]
|
2308.07540#29 | CALYPSO: LLMs as Dungeon Masters' Assistants | (P8). Similar to the interactions with descriptions, these DMs did not always use the strategy presented by the model; sometimes they picked and chose interesting suggestions, while other times they chose a different approach. Making Decisions. Some DMs used the model to get an opinion on two options they had already written or thought of (P3, P8-9, P12-14, P18, P23). For example, when players encountered a ravine whose bottom was enshrouded in mist, one DM asked whether the mist should hide a very long or [Table 1, Use Case and Description columns] General Descriptions: Asking the model to generate a high-level description of a scene and encounter. Specific Descriptions: Asking specific questions about parts of the encounter, often in response to player actions. Strategy: Using the model to understand monster motives and get suggestions for their tactics. Making Decisions: Using the model to decide how the DM should run a given encounter. List of Ideas: Generating a list of multiple ideas to build off of individually. [Example column] | 2308.07540#28 | 2308.07540#30 | 2308.07540 | [
"1706.03762"
]
|
2308.07540#30 | CALYPSO: LLMs as Dungeon Masters' Assistants | General Descriptions: "Describe this encounter from the player's perspective." (P8); Specific Descriptions: "Describe drinking games that the satyrs are taking part in that are so dangerous someone could get hurt doing them." (P17); Strategy: "Why would a Displacer Beast Kitten leave the safety of its den if it believes an intruder is nearby?" (P12); Making Decisions: "Should a diplomatic solution be possible for this encounter?" (P14); List of Ideas: "give me encounter ideas" (P10), "...make up more [magic items] to make this encounter more interesting." (P19). Table 1: A list of common ways DMs used the Focused Brainstorming interface. | 2308.07540#29 | 2308.07540#31 | 2308.07540 | [
"1706.03762"
]
|
2308.07540#31 | CALYPSO: LLMs as Dungeon Masters' Assistants | short drop. The model would sometimes simply give feedback on both of the options without choosing one ("Both options have their merits depending on the tone and style of your game..."; P3) and sometimes give a more straightforward answer ("...would that revenant have a vengeance towards the party member?" / "Yes, absolutely..."; P12). DMs did not ask the model to come to a conclusive decision, suggesting that the model providing its "opinion" helped inspire the DM, without relying on it to run the encounter. List of Ideas. In this use case, the DM simply asks the model for a list of ideas; for example, a list of magic items sea-dwelling humanoids might have (P10). We believe that the reasoning for this use case is the same reason that makes random tables (as discussed in Section 2.1) a popular method of inspiration; however, compared to prewritten random tables, LLMs have the powerful capability of generating unique "random table" entries customized for specific contexts. another case, the model insists that it is incapable of playing D&D, likely due to efforts to prevent the model from making claims of abilities it does not possess. Although generally infrequent, these artifacts suggest that domain-specific fine-tuning may improve models' performance. | 2308.07540#30 | 2308.07540#32 | 2308.07540 | [
"1706.03762"
]
|
2308.07540#32 | CALYPSO: LLMs as Dungeon Masters' Assistants | 6.3 Open-Domain Chat Baseline Participants chatted with CALYPSO in 51 unique threads, comprising a total of 2,295 rounds of conversation. Compared to conversations with the AI in the Focused Brainstorming interface, conversations lasted much longer (averaging 45.0 rounds per interaction vs. the brainstorming interface's 2.3). Without the time pressure of an active game that the DM is responsible for, participants spent more time playing with the model and refining its responses to generate high-level quest ideas (P3, P8, P12, P16), character and location names (P3, P9, P19, P22), role-play specific characters from other games (P3, P9, P12, P16), and write fanfiction about events happening between their characters in the game (P3, P8, P9, P16, P21), among other non-D&D uses. | 2308.07540#31 | 2308.07540#33 | 2308.07540 | [
"1706.03762"
]
|
2308.07540#33 | CALYPSO: LLMs as Dungeon Masters' Assistants | Failure Cases. The most common failure case was when DMs tried to invoke other tools (such as a dice rolling or spell search bot) available in the brainstorming chat. As the model responded to every message in the thread, it would also respond to the other tool's invocation and reply with a generic error message or try to infer the other tool's output (e.g. "!check stealth" / "Abominable Yeti stealth check: 18", hallucinating a result while ignoring the output of an actual dice roller). In some cases, the DM attempted to upload an image, which the model was unable to view. Finally, as discussed in Section 6.1, the model sometimes hallucinated facts about creatures and rules. We believe multimodality (allowing the model to view images) and allowing the model to use tools (e.g. to retrieve rules text, spell descriptions, or search monsters) to be an interesting direction to explore in future work. However, during a game of D&D, DMs did not have the time luxury to iterate on responses for hours. | 2308.07540#32 | 2308.07540#34 | 2308.07540 | [
"1706.03762"
]
|
2308.07540#34 | CALYPSO: LLMs as Dungeon Masters' Assistants | Without CALYPSO's management of the game, DMs would have to spend many turns of conversation copying and pasting information to provide it to the LLM, taking attention away from the game and making the baseline implementation unsuitable for real-world adoption. We believe this highlights the difference between synchronous and asynchronous systems and the importance of removing friction from AI-augmented user interfaces as discussed in Section 4: while the human user may have the capability to supply an LLM with additional information, the time and computational burden should be on the synchronous system rather than the user. We also find that certain artifacts of the model's training process influence its output. For example, the model would sometimes refuse to suggest (fantasy) races, likely due to efforts to reduce the potential for real-world racial bias. In | 2308.07540#33 | 2308.07540#35 | 2308.07540 | [
"1706.03762"
]
|
2308.07540#35 | CALYPSO: LLMs as Dungeon Masters' Assistants | 7 Conclusions In this paper, we present CALYPSO, a system of three LLM-powered interfaces that DMs could use to assist them in preparing and running focused monster encounters in an established setting, and a large-scale study of how 71 D&D players incorporated CALYPSO into their gameplay. Through interviews with DMs, we established common themes and desires for AI-augmented DM tools, and used these motivations and iterative design to guide our development. In conclusion, we found that: 1. | 2308.07540#34 | 2308.07540#36 | 2308.07540 | [
"1706.03762"
]
|
2308.07540#36 | CALYPSO: LLMs as Dungeon Masters' Assistants | LLMs are capable brainstorming partners. DMs used CALYPSO to generate both low-fidelity ideas that they could grow using their own creative expression, and guided it to generate high-fidelity descriptions they could present to other players with only minor edits. 2. LLMs present thematic commonsense when prompted to. Having been trained on a large corpus containing D&D texts and discussions, works of fantasy literature, and descriptions of real-world creatures, CALYPSO was able to fill in gaps in the D&D literature by probing into thematically relevant common sense knowledge. However, we found that to access this trove of information, the LLM had to be explicitly prompted to do so. | 2308.07540#35 | 2308.07540#37 | 2308.07540 | [
"1706.03762"
]
|
2308.07540#37 | CALYPSO: LLMs as Dungeon Masters' Assistants | 3. LLMs assist, rather than replace, human DMs. CALYPSO was designed to aid a human DM while maintaining their creative agency. We find that human DMs use AI co-DMs to understand complex rules text, brainstorm interactions between non-player characters or monsters, and present DMs with suggestions that the DM can weave into a story to present to players without taking away from the pace of the game. Human creativity is an integral part of storytelling games like D&D, and it is important for future AI tools to always maintain the human's creative agency. | 2308.07540#36 | 2308.07540#38 | 2308.07540 | [
"1706.03762"
]
|
2308.07540#38 | CALYPSO: LLMs as Dungeon Masters' Assistants | # A LLM Prompts In this section, we provide the prompts used in the CALYPSO system. Generally, we make use of Markdown-style headers to divide sections of the prompt. For chat-based models, we annotate each message with the corresponding role (system, assistant, or user, as exposed in the ChatGPT API). # A.1 Encounter Understanding Summarization Summarize the following D&D setting and monsters for a Dungeon Master's notes without mentioning game stats. Setting ======= <Setting description inserted here.> Creatures ========= <Name> ------ <Statistics and lore inserted here. If the encounter involves multiple creatures, repeat for each creature.> Summary ======= Abstractive Understanding Your name is Calypso, and your job is to help the Dungeon Master with an encounter. Your task is to help the DM understand the setting and creatures as a group, focusing mainly on appearance and how they act. Especially focus on what makes each creature stand out. Avoid mentioning game stats. | 2308.07540#37 | 2308.07540#39 | 2308.07540 | [
"1706.03762"
]
|
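
The Summarization prompt quoted above is assembled from Markdown-style headers with one sub-section per creature. The following is a rough sketch of how such a prompt string could be built; the function name and signature are illustrative assumptions, not the released CALYPSO code.

```python
def build_summarization_prompt(setting: str, creatures: dict) -> str:
    # Instruction line, then Setting / Creatures / Summary sections, mirroring the
    # template quoted above. `creatures` maps a monster name to its statistics/lore text.
    parts = [
        "Summarize the following D&D setting and monsters for a Dungeon Master's "
        "notes without mentioning game stats.",
        "Setting\n=======\n" + setting,
        "Creatures\n=========",
    ]
    for name, lore in creatures.items():
        parts.append(name + "\n------\n" + lore)   # repeated for each creature
    parts.append("Summary\n=======")               # the model completes from this header
    return "\n\n".join(parts)
```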
2308.07540#39 | CALYPSO: LLMs as Dungeon Masters' Assistants | You may use information from common sense, mythology, and culture. If there are multiple creatures, conclude by mentioning how they interact. Encounter: <Encounter inserted here.> The rest of the prompt follows as in the Summarization prompt above, beginning with the setting. If a monster did not have published lore, we inserted the string "Calypso, please provide the DM with information about the (monster name) using information from (folklore, common sense, mythology, and culture)" (see section 6.1) in place of lore. | 2308.07540#38 | 2308.07540#40 | 2308.07540 | [
"1706.03762"
]
|
2308.07540#40 | CALYPSO: LLMs as Dungeon Masters' Assistants | # A.2 Focused Brainstorming SYSTEM: You are a creative D&D player and DM named Calypso. Avoid mentioning game stats. You may use information from common sense, mythology, and culture. USER: I'm running this D&D encounter: <Encounter inserted here.> <Setting and creatures inserted here, in the same format as Abstractive Understanding.> Your job is to help brainstorm some ideas for the encounter. If the DM used the Encounter Understanding interface before starting a brainstorming thread, we add an additional message to the prompt: USER: Here's what I have so far: <Summary generated by Encounter Understanding inserted here.> This allows the DM to reference ideas proposed by CALYPSO in its summary without having to repeat the entire message, aiding continuity. # Acknowledgments Thank you to the Northern Lights Province Discord server for playing with us and being so enthusiastic about AI and D&D! Thank you to the NLP server staff - friends and players who helped us write rules, settings, game mechanics, and manage so many players: Ryan Crowley, Nicki Dulmage-Bekker, @ephesia, @lyra.kat, and Joseph Keen. Finally, thank you to D&D Beyond for providing us with access to monster information and game materials. This material is based upon work supported by the National Science Foundation under Grant #2030859 to the Computing Research Association for the CIFellows Project. References Acharya, D.; Mateas, M.; and Wardrip-Fruin, N. 2021. Interviews Towards Designing Support Tools for TTRPG Game Masters. In Mitchell, A.; and Vosmeer, M., eds., Interactive Storytelling, Lecture Notes in Computer Science, 283-287. Cham: Springer International Publishing. ISBN 978-3-030-92300-6. Akoury, N.; Wang, S.; Whiting, J.; Hood, S.; Peng, N.; and Iyyer, M. 2020. STORIUM: A Dataset and Evaluation Platform for Machine-in-the-Loop Story Generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 6470-6484. Online: Association for Computational Linguistics. Amman, K. 2019. | 2308.07540#39 | 2308.07540#41 | 2308.07540 | [
"1706.03762"
]
|
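
The A.2 Focused Brainstorming prompt above maps naturally onto ChatGPT-API-style role/content messages, with the extra "Here's what I have so far" message added only when the DM already ran Encounter Understanding. The helper below is a hypothetical sketch of that packing, not the released CALYPSO code.

```python
def brainstorming_messages(encounter: str, setting_and_creatures: str, prior_summary: str = None) -> list:
    # System and user messages mirror the A.2 prompt quoted above; the optional third
    # message carries the Encounter Understanding summary for continuity.
    messages = [
        {"role": "system", "content": (
            "You are a creative D&D player and DM named Calypso. Avoid mentioning game stats. "
            "You may use information from common sense, mythology, and culture.")},
        {"role": "user", "content": (
            "I'm running this D&D encounter: " + encounter + "\n\n" + setting_and_creatures +
            "\n\nYour job is to help brainstorm some ideas for the encounter.")},
    ]
    if prior_summary:
        messages.append({"role": "user", "content": "Here's what I have so far: " + prior_summary})
    return messages
```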
2308.07540#41 | CALYPSO: LLMs as Dungeon Masters' Assistants | The Monsters Know What They're Doing. New York, NY: Gallery Books. ISBN 9781982122669. Ammanabrolu, P.; Cheung, W.; Tu, D.; Broniec, W.; and Riedl, M. 2020. Bringing Stories Alive: Generating Interactive Fiction Worlds. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 16(1): 3-9. Bergström, K. 2011. | 2308.07540#40 | 2308.07540#42 | 2308.07540 | [
"1706.03762"
]
|
2308.07540#42 | CALYPSO: LLMs as Dungeon Masters' Assistants | Framing Storytelling with Games. In Si, M.; Thue, D.; Andr´e, E.; Lester, J. C.; Tanenbaum, T. J.; and Zammitto, V., eds., Interactive Storytelling, Lec- ture Notes in Computer Science, 170â 181. Berlin, Heidel- berg: Springer. ISBN 978-3-642-25289-1. Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; Agarwal, S.; Herbert-Voss, A.; Krueger, G.; Henighan, T.; Child, R.; Ramesh, A.; Ziegler, D.; Wu, J.; Winter, C.; Hesse, C.; Chen, M.; Sigler, E.; Litwin, M.; Gray, S.; Chess, B.; Clark, J.; Berner, C.; McCandlish, S.; Radford, A.; Sutskever, I.; and Amodei, D. 2020. Language Mod- els are Few-Shot Learners. In Larochelle, H.; Ranzato, M.; Hadsell, R.; Balcan, M.; and Lin, H., eds., Advances in Neu- ral Information Processing Systems, volume 33, 1877â | 2308.07540#41 | 2308.07540#43 | 2308.07540 | [
"1706.03762"
]
|
2308.07540#43 | CALYPSO: LLMs as Dungeon Masters' Assistants | 1901. Curran Associates, Inc. Calderwood, A.; Qiu, V.; Gero, K. I.; and Chilton, L. B. 2020. How Novelists Use Generative Language Models: An Exploratory User Study. In International Conference on Intelligent User Interfaces (IUI) Workshops. Cagliari, Italy: ACM. Callison-Burch, C.; Singh Tomar, G.; Martin, L. J.; Ippolito, D.; Bailis, S.; and Reitter, D. 2022. | 2308.07540#42 | 2308.07540#44 | 2308.07540 | [
"1706.03762"
]
|
2308.07540#44 | CALYPSO: LLMs as Dungeon Masters' Assistants | Dungeons and Dragons as a Dialogue Challenge for Artificial Intelligence. In Con- ference on Empirical Methods in Natural Language Pro- cessing (EMNLP), 9379â 9393. Abu Dhabi, United Arab Emirates: Association for Computational Linguistics. Chung, J. J. Y.; Kim, W.; Yoo, K. M.; Lee, H.; Adar, E.; and Chang, M. 2022. TaleBrush: Sketching Stories with Gener- ative Pretrained Language Models. | 2308.07540#43 | 2308.07540#45 | 2308.07540 | [
"1706.03762"
]
|
2308.07540#45 | CALYPSO: LLMs as Dungeon Masters' Assistants | In CHI Conference on Human Factors in Computing Systems, 1-19. New Orleans LA USA: ACM. ISBN 978-1-4503-9157-3. Coenen, A.; Davis, L.; Ippolito, D.; Reif, E.; and Yuan, A. 2021. Wordcraft: a Human-AI Collaborative Editor for Story Writing. In First Workshop on Bridging Human-Computer Interaction and Natural Language Processing at EACL 2021. Association for Computational Linguistics. Crawford, J.; Mearls, M.; and Perkins, C. 2018. | 2308.07540#44 | 2308.07540#46 | 2308.07540 | [
"1706.03762"
]
|
2308.07540#46 | CALYPSO: LLMs as Dungeon Masters' Assistants | D&D Basic Rules. Renton, WA: Wizards of the Coast. Crawford, J.; Perkins, C.; and Wyatt, J. 2014. Dungeon Master's Guide. Renton, WA: Wizards of the Coast. D&D Beyond. 2017. dndbeyond.com/. dScryb. 2020. dScryb. https://dscryb.com/. Foundry Gaming, LLC. 2019. Foundry Virtual Tabletop. https://foundryvtt.com/. Gygax, G.; and Arneson, D. 1974. Dungeons & Dragons. | 2308.07540#45 | 2308.07540#47 | 2308.07540 | [
"1706.03762"
]
|
2308.07540#47 | CALYPSO: LLMs as Dungeon Masters' Assistants | Holtzman, A.; Buys, J.; Du, L.; Forbes, M.; and Choi, Y. 2020. The Curious Case of Neural Text Degeneration. In International Conference on Learning Representations. Ippolito, D.; Yuan, A.; Coenen, A.; and Burnam, S. 2022. Creative Writing with an AI-Powered Writing Assistant: Perspectives from Professional Writers. ArXiv:2211.05030 [cs]. Kelly, J.; Mateas, M.; and Wardrip-Fruin, N. 2023. Towards Computational Support with Language Models for TTRPG Game Masters. In Proceedings of the 18th International Conference on the Foundations of Digital Games, FDG '23, 1-4. New York, NY, USA: Association for Computing Machinery. ISBN 978-1-4503-9855-8. | 2308.07540#46 | 2308.07540#48 | 2308.07540 | [
"1706.03762"
]
|
2308.07540#48 | CALYPSO: LLMs as Dungeon Masters' Assistants | 23, 1â 4. New York, NY, USA: Association for Computing Ma- chinery. ISBN 978-1-4503-9855-8. Kreminski, M.; Dickinson, M.; Wardrip-Fruin, N.; and Mateas, M. 2022. Loose Ends: A Mixed-Initiative Creative Interface for Playful Storytelling. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 18(1): 120â 128. Number: 1. Lin, Z.; Agarwal, R.; and Riedl, M. 2022. Creative Wand: A System to Study Effects of Communications in Co-Creative Settings. AAAI Conference on Artificial Intelligence and In- teractive Digital Entertainment (AIIDE), 18(1): 45â 52. Louis, A.; and Sutton, C. 2018. | 2308.07540#47 | 2308.07540#49 | 2308.07540 | [
"1706.03762"
]
|
2308.07540#49 | CALYPSO: LLMs as Dungeon Masters' Assistants | Deep Dungeons and Dragons: Learning Character-Action Interactions from Role-Playing Game Transcripts. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), Volume 2 (Short Papers), 708-713. New Orleans, Louisiana: Association for Computational Linguistics. Martin, L. J.; Sood, S.; and Riedl, M. O. 2018. Dungeons and DQNs: Toward Reinforcement Learning Agents that Play Tabletop Roleplaying Games. | 2308.07540#48 | 2308.07540#50 | 2308.07540 | [
"1706.03762"
]
|
2308.07540#50 | CALYPSO: LLMs as Dungeon Masters' Assistants | In Wu, H.-Y.; Si, M.; and Jhala, A., eds., Joint Workshop on Intelligent Narrative Technologies and Workshop on Intelligent Cinematography and Editing (INT-WICED). Edmonton, AB, Canada: http://ceur-ws.org. Newman, P.; and Liu, Y. 2022. Generating Descriptive and Rules-Adhering Spells for Dungeons & Dragons Fifth Edition. In Proceedings of the 9th Workshop on Games and Natural Language Processing within the 13th Language Resources and Evaluation Conference, 54-60. | 2308.07540#49 | 2308.07540#51 | 2308.07540 | [
"1706.03762"
]
|
2308.07540#51 | CALYPSO: LLMs as Dungeon Masters' Assistants | Marseille, France: European Language Resources Association. OpenAI. 2022. Introducing ChatGPT. https://openai.com/blog/chatgpt. Parry, W. T.; and Hacker, E. A. 1991. Aristotelian logic. Albany, NY: State University of New York Press. ISBN 9780791406892. Perez, M. R. B.; Eisemann, E.; and Bidarra, R. 2021. | 2308.07540#50 | 2308.07540#52 | 2308.07540 | [
"1706.03762"
]
|
2308.07540#52 | CALYPSO: LLMs as Dungeon Masters' Assistants | A Synset-Based Recommender Method for Mixed-Initiative Narrative World Creation. In Mitchell, A.; and Vosmeer, M., eds., Interactive Storytelling, Lecture Notes in Com- puter Science, 13â 28. Cham: Springer International Publish- ing. ISBN 978-3-030-92300-6. Perkins, C.; Crawford, J.; Sims, C.; Thompson, R.; Lee, P.; Mearls, M.; Schwalb, R. J.; Sernett, M.; Townshend, S.; and Wyatt, J. 2014. Monster Manual. | 2308.07540#51 | 2308.07540#53 | 2308.07540 | [
"1706.03762"
]
|
2308.07540#53 | CALYPSO: LLMs as Dungeon Masters' Assistants | Renton, WA: Wizards of the Coast. Rameshkumar, R.; and Bailey, P. 2020. Storytelling with Di- alogue: A Critical Role Dungeons and Dragons Dataset. In Annual Meeting of the Association for Computational Lin- guistics (ACL), 5121â 5134. Online: Association for Compu- tational Linguistics. Roemmele, M.; and Gordon, A. S. 2015. Creative Help: A Story Writing Assistant. | 2308.07540#52 | 2308.07540#54 | 2308.07540 | [
"1706.03762"
]
|
2308.07540#54 | CALYPSO: LLMs as Dungeon Masters' Assistants | In Schoenau-Fog, H.; Bruni, L. E.; Louchart, S.; and Baceviciute, S., eds., Interactive Story- telling, volume 9445, 81â 92. Cham: Springer International Publishing. ISBN 978-3-319-27035-7 978-3-319-27036-4. Series Title: Lecture Notes in Computer Science. Roven, T. 2014. Tabletop Audio. https://tabletopaudio.com/. Samuel, B.; Mateas, M.; and Wardrip-Fruin, N. 2016. The Design of Writing Buddy: A Mixed-Initiative Approach To- wards Computational Story Collaboration. In Nack, F.; and Gordon, A. S., eds., Interactive Storytelling, volume 10045, 388â 396. Cham: Springer International Publishing. ISBN 978-3-319-48278-1 978-3-319-48279-8. Series Title: Lec- ture Notes in Computer Science. Santiago, J. M., III; Parayno, R. L.; Deja, J. A.; and Sam- son, B. P. V. 2023. | 2308.07540#53 | 2308.07540#55 | 2308.07540 | [
"1706.03762"
]
|
2308.07540#55 | CALYPSO: LLMs as Dungeon Masters' Assistants | Rolling the Dice: Imagining Genera- tive AI as a Dungeons & Dragons Storytelling Companion. ArXiv:2304.01860 [cs]. van Velsen, M.; Williams, J.; and Verhulsdonck, G. 2009. Table-Top Gaming Narratology for Digital Interactive Sto- rytelling. In Iurgel, I. A.; Zagalo, N.; and Petta, P., eds., In- teractive Storytelling, Lecture Notes in Computer Science, 109â 120. Berlin, Heidelberg: Springer. ISBN 978-3-642- 10643-9. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. At- tention Is All You Need. arXiv:1706.03762. Yang, D.; Zhou, Y.; Zhang, Z.; Jia, T.; Li, J.; and Lc, R. 2022. AI as an Active Writer: Interaction strategies with generated text in human-AI collaborative fiction writing. In Joint Pro- ceedings of the ACM IUI Workshops 2022. | 2308.07540#54 | 2308.07540#56 | 2308.07540 | [
"1706.03762"
]
|
2308.07540#56 | CALYPSO: LLMs as Dungeon Masters' Assistants | Helsinki, Fin- land. Yuan, Y.; Cao, J.; Wang, R.; and Yarosh, S. 2021. Tabletop Games in the Age of Remote Collaboration: Design Oppor- tunities for a Socially Connected Game Experience. In Pro- ceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1â 14. Yokohama Japan: ACM. ISBN 978-1-4503-8096-6. Zhou, P.; Zhu, A.; Hu, J.; Pujara, J.; Ren, X.; Callison-Burch, C.; Choi, Y.; and Ammanabrolu, P. 2023. I Cast Detect Thoughts: Learning to Converse and Guide with Intents and Theory-of-Mind in Dungeons and Dragons. In Proceedings of the 61st Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), 11136â 11155. Toronto, Canada: Association for Computational Linguis- tics. Zhu, A.; Aggarwal, K.; Feng, A.; Martin, L.; and Callison- Burch, C. 2023. | 2308.07540#55 | 2308.07540#57 | 2308.07540 | [
"1706.03762"
]
|
2308.07540#57 | CALYPSO: LLMs as Dungeon Masters' Assistants | FIREBALL: A Dataset of Dungeons and Dragons Actual-Play with Structured Game State Informa- tion. In Proceedings of the 61st Annual Meeting of the Asso- ciation for Computational Linguistics (Volume 1: Long Pa- pers), 4171â 4193. Toronto, Canada: Association for Com- putational Linguistics. Zhu, A.; and D&D Beyond. 2016. Avrae. https://avrae.io/. | 2308.07540#56 | 2308.07540 | [
"1706.03762"
]
|
|
2308.07201#0 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | # CHATEVAL: TOWARDS BETTER LLM-BASED EVALUATORS THROUGH MULTI-AGENT DEBATE Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Zhiyuan Liu* Department of Computer Science and Technology Tsinghua University [email protected] Shanghang Zhang Peking University # ABSTRACT Text evaluation has historically posed significant challenges, often demanding substantial labor and time cost. With the emergence of large language models (LLMs), researchers have explored LLMs' potential as alternatives for human evaluation. While these single-agent-based approaches show promise, experimental results suggest that further advancements are needed to bridge the gap between their current effectiveness and human-level evaluation quality. Recognizing that best practices of human evaluation processes often involve multiple human annotators collaborating in the evaluation, we resort to a multi-agent debate framework, moving beyond single-agent prompting strategies. The multi-agent-based approach enables a group of LLMs to synergize with an array of intelligent counterparts, harnessing their distinct capabilities and expertise to enhance efficiency and effectiveness in handling intricate tasks. In this paper, we construct a multi-agent referee team called ChatEval to autonomously discuss and evaluate the quality of generated responses from different models on open-ended questions and traditional natural language generation (NLG) tasks. We derive insights and lessons from practical scenarios where humans instigate group discussions for brainstorming and propose different communication strategies within ChatEval. Our experiments on two benchmark tasks illustrate that ChatEval delivers superior accuracy and correlation in alignment with human assessment. Furthermore, we find that the diverse role prompts (different personas) are essential in the multi-agent debate process; that is, utilizing the same role description in the prompt can lead to a degradation in performance. Our qualitative analysis also shows that ChatEval transcends mere textual scoring, offering a human-mimicking evaluation process for reliable assessments. Our code is available at https://github.com/chanchimin/ChatEval. | 2308.07201#1 | 2308.07201 | [
"2303.04048"
]
|
|
2308.07201#1 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | # INTRODUCTION Evaluating the quality of text generated by language models or written by humans has long been a challenging endeavor, consistently garnering substantial attention (Celikyilmaz et al., 2020). Tra- ditional methodologies predominantly rely on human annotation of texts (Callison-Burch, 2009), an approach considered overly demanding in terms of time and cost. Automatic evaluation metrics based on n-grams, such as Rouge (Lin, 2004), BLEU (Papineni et al., 2002), and METEOR (Baner- jee & Lavie, 2005), have been proposed to tackle this issue (Kondrak, 2005). However, these methods have been shown to exhibit a relatively weak correlation with human judgments, partic- ularly in the context of tasks involving open-ended generation or requiring domain-specific exper- tise (Novikova et al., 2017). Recent advancements in the field of natural language processing have led to the emergence of billion-parameter scale LLMs, such as GPT-3 (Brown et al., 2020). These LLMs have demon- | 2308.07201#0 | 2308.07201#2 | 2308.07201 | [
"2303.04048"
]
|
2308.07201#2 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | *Corresponding author. Email: [email protected] strated remarkable capabilities across diverse downstream tasks, presenting new opportunities for text quality evaluation using such models. Moreover, various training paradigms have been proposed to endow LLMs with the ability to accomplish tasks in a zero-shot manner and better adhere to human-provided instructions (Ouyang et al., 2022; Sanh et al., 2021; Wei et al., 2021). These advancements facilitate the prompting of LLMs to evaluate generated text, effectively simulating human evaluators in the assessment process. In view of the impressive text understanding and instruction-following capabilities of recent LLMs, a body of literature (Liu et al., 2023b; Chiang & Lee, 2023; Gao et al., 2023; Shen et al., 2023) has adopted LLM as an evaluator to assess the quality of responses to open-ended questions or traditional NLG tasks, including dialogue response generation and summarization. This methodology is dubbed LLM-as-a-judge (Zheng et al., 2023). Findings from these studies indicate that LLMs can mimic human behavior and provide evaluations that correspond with human judgments, revealing a potentially scalable and transparent alternative to costly and laborious human evaluations. While a single powerful LLM can already tackle various missions, emerging studies suggest that multiple LLMs can further improve one another through debate and cooperation (Li et al., 2023a; Liang et al., 2023). By incorporating multiple LLMs into an integrated group and designing specific interaction mechanisms, different LLMs can engage in proposing and deliberating unique responses and thought processes across several rounds. This approach leads to enhanced factuality of generated responses (Du et al., 2023) and improvement in the completion of arduous tasks (Li et al., 2023a; Qian et al., 2023). Furthermore, the multi-agent group also addresses and mitigates the Degeneration-of-Thought (DOT) problem (Liang et al., 2023). In the human evaluation processes, relying on a single perspective can introduce bias and instability in the results (Karpinska et al., 2021). | 2308.07201#1 | 2308.07201#3 | 2308.07201 | [
"2303.04048"
]
|
2308.07201#3 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Recognizing this, best practices often involve multiple human annotators collaborating in the evaluation (Van Der Lee et al., 2019). Drawing inspiration from this collaborative and iterative human evaluation approach, we propose ChatEval, a system that enables each agent to employ varied communication strategies in collaborative discussion, working towards formulating final judgments. Furthermore, to enrich the evaluation dynamics, every agent within ChatEval is endowed with a unique persona. This deliberate design ensures that each agent focuses on distinct perspectives or brings specific expertise to the table. By doing so, the collective evaluation benefits from a more comprehensive lens, capturing nuances and subtleties that a single perspective might overlook. We derive this idea primarily from the insight of â | 2308.07201#2 | 2308.07201#4 | 2308.07201 | [
"2303.04048"
]
|
2308.07201#4 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | "There are a thousand Hamlets in a thousand people's eyes", meaning that every person has their unique interpretation or perspective, especially applicable to text evaluation. Indeed, these divergent perspectives shape the comprehensive and multifaceted assessment of Hamlet. Another underlying intuition of our work stems from renowned concepts in sociology and biology, including Collective Intelligence (Woolley et al., 2010) and Cognitive Synergy (Luppi et al., 2022), where multiple cognitive processes or systems interact and cooperate in a way that produces a combined effect greater than the sum of their separate effects. To summarize, the main contribution of our work is as follows: | 2308.07201#3 | 2308.07201#5 | 2308.07201 | [
"2303.04048"
]
|
2308.07201#5 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | 1. We propose a multi-agent-based framework called ChatEval that aligns better with human preferences compared with single-agent-based approaches as depicted in Figure 1. 2. We propose various communication strategies and demonstrate the necessity of diverse role prompts in multi-agent debate scenarios. It's designed to be both composable and scalable, enabling researchers to implement their unique communication strategies easily. We hope this contributes to advancing research in the field of communicative agents and beyond. # 2 METHODOLOGY In this section, we elaborate on the principal components in ChatEval including debater agents, diverse role specification, communication strategy, and provide a detailed overview of each component's role and functionality1. [Figure 1 graphic: a Large Language Model (LLM) based agent evaluating two assistants' responses, contrasting the Single-Agent method with Multi-Agent debate.] | 2308.07201#4 | 2308.07201#6 | 2308.07201 | [
"2303.04048"
]
|
2308.07201#6 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Figure 1: When several referees participate in the evaluation process, they can discuss with each other and finally give a judgment that is better aligned with human annotators. Debater Agents. Debater agents are one of the most significant components in our framework. We treat each individual LLM as an agent and ask them to generate their response from the given prompt2. Responses from other agents serve as chat history, which is substituted into the prompt template. After configuring the agents, we then start the group debate, where each agent autonomously receives responses from the others and, in turn, delivers its own responses to them. It should be noted that the whole process does not require human intervention. Diverse Role Specification. As presented in Section 1, diverse role specification is necessary for the framework as well. Although all the agents share a common prompt template, we substitute the role description slot with diverse role prompts, specifying distinct personalities for different agents. We take inspiration from Wu et al. (2023) and formulate an analogous role description. Communication Strategy. How to maintain the chat history is another significant issue in ChatEval. In our work, we use a more intuitive term, communication strategy, to describe how the chat history is maintained. In a nutshell, different communication strategies can be seen as different approaches to maintaining and manipulating the agents' chat history. As is shown in Figure 2, we primarily design three different communication strategies and illustrate them as follows: | 2308.07201#5 | 2308.07201#7 | 2308.07201 | [
"2303.04048"
]
|
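
As an illustration of the debater-agent setup described in this part of the paper (a shared prompt template whose role-description slot is filled with a distinct persona and whose chat-history slot is filled with the other agents' utterances), here is a minimal sketch. The class, template wording, and field names are assumptions for illustration rather than the ChatEval repository's actual code.

```python
from dataclasses import dataclass

# Simplified stand-in for the shared prompt template; the real template also carries
# the question and the responses under evaluation.
PROMPT_TEMPLATE = (
    "{role_description}\n\n"
    "[question and the responses to be evaluated go here]\n\n"
    "Discussion so far:\n{chat_history}\n\n"
    "Give your evaluation."
)

@dataclass
class DebaterAgent:
    name: str
    role_description: str  # distinct persona, e.g. "You are Alice, a critic who focuses on ..."

    def build_prompt(self, chat_history):
        history = "\n".join(chat_history) if chat_history else "(no messages yet)"
        return PROMPT_TEMPLATE.format(role_description=self.role_description,
                                      chat_history=history)
```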
2308.07201#7 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | 1. One-By-One. During each round of the debate, the debater agents take turns in a set order to generate their response based on the current observation. When it's time for a debater agent to respond, we directly concatenate what previous other agents have said into its chat history slot. 2. Simultaneous-Talk. Unlike the one-by-one strategy, we carry out an alternative communication strategy called simultaneous-talk, where debater agents are prompted to asynchronously generate responses in each iteration of the discussion to nullify the impact of the speaking order. 3. Simultaneous-Talk-with-Summarizer. The main difference between this strategy and simultaneous-talk is that we additionally employ another LLM as a summarizer. At the end of each iteration of the debate, we prompt this extra LLM to summarize the messages conveyed so far and concatenate this summarization into all debater agents' | 2308.07201#6 | 2308.07201#8 | 2308.07201 | [
"2303.04048"
]
|
2308.07201#8 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | chat history slots. 2The full prompt template can be found in Appendix A. 3 (a) One-by-One (b) Simultaneous-Talk (c) Simultaneous-Talk-with-Summarizer Alice a e 8-8 | x N round â Alice Lona gâ 8â ? % N round: Figure 2: The overall schematic diagram of our proposed three different kinds of communication strategy. The direction of the arrows represents the flow of information, meaning that what this person says will be appended to the chat history of the person pointed to by the arrow. Full algorithm description of the above communication strategies can be found in Appendix B. Unlike previous work like Du et al. (2023), we do not explicitly ask the debater agents to reach a consensus at the end of the debate. In situations where the response format relies on direct compar- ison, we derive the final results from the majority vote among various annotators. Conversely, if the response format requires a direct score, we calculate the average score obtained from multiple annotators. This methodological approach ensures the impartiality and balance of our evaluation process. | 2308.07201#7 | 2308.07201#9 | 2308.07201 | [
"2303.04048"
]
|
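
A compact sketch of the one-by-one communication strategy and of the verdict aggregation described in this part of the paper (majority vote for pairwise comparisons, averaging for direct scores). It assumes the DebaterAgent sketch given earlier and a `query_llm` callable standing in for an actual model call; none of these names come from the released code.

```python
from collections import Counter
from statistics import mean

def one_by_one_debate(agents, query_llm, rounds=2):
    # Agents speak in a fixed order each round; everything said so far is visible
    # to the next speaker through the shared chat history.
    chat_history = []
    for _ in range(rounds):
        for agent in agents:
            reply = query_llm(agent.build_prompt(chat_history))
            chat_history.append(agent.name + ": " + reply)
    return chat_history

def aggregate_verdicts(comparisons=None, scores=None):
    # Pairwise-comparison outputs (e.g. "Assistant 1" / "Assistant 2" / "tie") -> majority vote;
    # direct numeric scores -> average over annotators, as described for ChatEval's final results.
    if comparisons is not None:
        return Counter(comparisons).most_common(1)[0][0]
    return mean(scores)
```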
2308.07201#9 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | # 3 EXPERIMENTS We evaluate ChatEval on two benchmarks, FairEval and Topical-Chat which represent the cate- gories of open-ended question answer and dialogue response generation, respectively. IMPLEMENTATION DETAILS We choose to utilize models from OpenAIâ s GPT family as our LLMs in ChatEval, including GPT-4 and ChatGPT (GPT-3.5-turbo) and set the temperature to 0 to ensure reproducibility. The rationale behind this selection is the exceptional performance these models offer, being among the most ad- vanced and powerful in the world. Additionally, their accessibility and ease of use through APIs enable us to directly call and interact with the models during our research, significantly simplifying the process. In our current research, we focus on homogeneous groups of LLMs. That is, within a given multi-agent group, all LLMs belong to the same GPT family model, either all GPT-4 or all ChatGPT. We acknowledge the potential of heterogeneous groups for future research, which could provide fascinating insights into how strong models and weak models can cooperate in a multi-agent setting. | 2308.07201#8 | 2308.07201#10 | 2308.07201 | [
"2303.04048"
]
|
2308.07201#10 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | 3.2 BENCHMARKS The detailed introduction of different categories and benchmarks are listed as follows: Open-ended Question Answer is a key component within the field of NLP and generative AI. It necessitates an AI system to provide comprehensive, detailed, and human-like responses to questions that donâ t have a predefined or fixed set of possible answers. The work of Chiang et al. (2023) encompasses a collection of 80 open-ended questions originating from a wide array of categories, including common-sense, counterfactual, coding, etc. We then take the human annotation results from Wu et al. (2023) to conduct the experiments in this paper. For each question, they direct three annotators to evaluate the replies given by Vicuna-13B and ChatGPT through the given rules and finally derive the results by the majority votes among the annotators. Dialogue Response Generation is a task involves creating a coherent and contextually appropriate response to a given input dialogue. We draw upon the Topical-Chat (Gopalakrishnan et al., 2019) dataset for our study. We then take the human annotation results from Mehri & Eskenazi (2020) where they carry out the annotations on 60 dialogue contexts with each response generated by 6 different systems. Human evaluators analyzed these responses based on natural, coherence, engag- ingness, groundedness, and understandable, where we take the first four dimensions for experiments in our paper following Zhong et al. (2022). 4 # 3.3 BASELINES We evaluate ChatEval against following methods. As the main portion of our comparison, we pri- marily focuses on the single-agent-based method. Single-Agent means that we directly query an LLM to generate the response towards the evaluation3. We use Multi-Agent to represent ChatEval where several agents discuss towards the evaluation. By default, we configure the communication strategy to one-by-one, agent numbers to 2, and discussion turns to 2 in this section and employ po- sition calibration techniques in both single-agent and multi-agent settings. We will discuss more de- bate configurations in Section 4 for completeness. For the open-ended question answer task, we also compare our method with FairEval (Wang et al., 2023b). They propose various strategies to improve the evaluation performance of a LLM including Multiple Evidence Calibration (MEC) and Balanced Position Calibration (BPC). | 2308.07201#9 | 2308.07201#11 | 2308.07201 | [
"2303.04048"
]
|
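
The paper's implementation queries OpenAI GPT-family models (ChatGPT and GPT-4) through the API with temperature set to 0 for reproducibility. Below is a hedged sketch of such a call using the legacy openai Python SDK interface that was current for gpt-3.5-turbo and GPT-4; the helper name is an assumption, and newer client versions expose a different interface.

```python
import openai  # legacy (<1.0) SDK interface; adapt for newer clients

def query_gpt(messages, model="gpt-3.5-turbo"):
    # temperature=0 minimizes sampling randomness, matching the paper's setting.
    response = openai.ChatCompletion.create(model=model, messages=messages, temperature=0)
    return response["choices"][0]["message"]["content"]
```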
2308.07201#11 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | For the dialogue response generation task, we also compare our method with G-EVAL (Liu et al., 2023b). They utilize CoT and probability-weighted summation for their method. Additionally, we include results from n-gram-based metrics, such as ROUGE (Lin, 2004), BLEU (Papineni et al., 2002) and embedding-based metrics such as BERTScore (Zhang et al., 2019). 3.4 RESULTS FOR OPEN-ENDED QUESTION ANSWERS We adopt the same evaluation approach as Wang et al. (2023b) to assess the annotation results produced by different methods and annotators. Specifically, we calculate the Accuracy (Acc.), which measures the proportion of correctly classified instances out of the total instances, and the Kappa correlation coefficient (Kap.) (McHugh, 2012) which gauges the agreement between results from models and human annotators while taking into account the possibility of agreement occurring by chance. Both metrics provide insights into the reliability and consistency of the annotations. We take the human annotation results and FairEvalâ s (Wang et al., 2023b) best results from their paper. As is shown in Table 1, different annotators can reach a relatively high agreement and perform better than any other LLM-based approach. Still, the average human annotations accuracy which is 71.7% shows there exists a certain degree of discrepancy among different unique individuals revealing that text evaluation is absolutely an arduous task. The second part and the third part of Table 1 show the results of FairEvalâ s method and the results of our proposed method respectively. We find that (1) ChatEval can enhance the performance of the evaluation process, achieving higher alignment with human preference compared with single-agent evaluation. Specifically, the multi-agent-based method improves the accuracy by 6.2% for ChatGPT and 2.5% for GPT-4; (2) ChatEval surpasses FairEvalâ s best results within both ChatGPT and GPT-4 settings showing the effectiveness of our proposed method. 3.5 RESULTS FOR DIALOGUE RESPONSE GENERATION For the dialogue response generation benchmarks, we align the evaluation method with Zhong et al. (2022), calculating the turn-level Spearman and Kendall-Tau correlation in correspondence with hu- man judgments on four aspects (naturalness, coherence, engagingness and groundedness). | 2308.07201#10 | 2308.07201#12 | 2308.07201 | [
"2303.04048"
]
|
2308.07201#12 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Results can be found in Table 2. In the first part of Table 2, we demonstrate that n-gram-based metrics and embedding-based metrics perform overall poorly on all the aspects evaluated illustrating that these methods can hardly reveal human preference. In the second part of Table 2, we show the results from the G-eval (Liu et al., 2023b) paper. They first ask the LLM to generate intermediate thought and finally calculate the weighted summation of the output scores based on the probabil- ity. The results show that their method outperforms previous traditional metrics depicting the fact that the LLM-based evaluator is effective and reliable for evaluating the dialogue response genera- tion task. While their method delivers sound results, our proposed approach raises the bar in terms of performance for GPT-4. Specifically, ChatEval improves the average Spearman and Kendall- Tau correlation by 0.096 (16.3%) and 0.057 (10.0%) respectively. Additionally, compared with the single-agent method, ChatEval amplifies the performance both for ChatGPT and GPT-4, showing the effectiveness of our method which is aligned with the results in Section 3.4. 3We use the same prompt template as our multi-agent debate settings in single-agent baseline except that we ignore some slot. | 2308.07201#11 | 2308.07201#13 | 2308.07201 | [
"2303.04048"
]
|
2308.07201#13 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | 5 Table 1: Accuracy (Acc.) and Kappa correlation coefficient (Kap.) of different methods on FairEval benchmark. Evaluator Methods Human Annotator1 Annotator2 Annotator3 FairEval ChatGPT GPT-4 Ours ChatGPT ChatGPT GPT-4 GPT-4 - - - MEC+BPC MEC+BPC Single-Agent Multi-Agent Single-Agent Multi-Agent Acc. (%) Kap. 68.8 76.3 70 0.5 0.62 0.5 58.7 62.5 0.31 0.37 53.8 60.0 61.3 63.8 0.27 0.33 0.36 0.40 Table 2: Turn-level Spearman (Ï ) and Kendall-Tau (Ï ) correlations of different methods on Topical- Chat benchmark, SA means Single-Agent and MA means Multi-Agent. Our ChatGPT settings should be compared to G-EVAL-3.5, and GPT-4 settings should be compared to G-EVAL-4. | 2308.07201#12 | 2308.07201#14 | 2308.07201 | [
"2303.04048"
]
|
2308.07201#14 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Table 1: Accuracy (Acc.) and Kappa correlation coefficient (Kap.) of different methods on FairEval benchmark. Human Annotator1: Acc. 68.8, Kap. 0.5; Human Annotator2: Acc. 76.3, Kap. 0.62; Human Annotator3: Acc. 70, Kap. 0.5; FairEval ChatGPT (MEC+BPC): Acc. 58.7, Kap. 0.31; FairEval GPT-4 (MEC+BPC): Acc. 62.5, Kap. 0.37; Ours ChatGPT (Single-Agent): Acc. 53.8, Kap. 0.27; Ours ChatGPT (Multi-Agent): Acc. 60.0, Kap. 0.33; Ours GPT-4 (Single-Agent): Acc. 61.3, Kap. 0.36; Ours GPT-4 (Multi-Agent): Acc. 63.8, Kap. 0.40. Table 2: Turn-level Spearman (ρ) and Kendall-Tau (τ) correlations of different methods on Topical-Chat benchmark, SA means Single-Agent and MA means Multi-Agent. Our ChatGPT settings should be compared to G-EVAL-3.5, and GPT-4 settings should be compared to G-EVAL-4. | 2308.07201#13 | 2308.07201#15 | 2308.07201 | [
"2303.04048"
]
|
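
The FairEval evaluation protocol described around here reports Accuracy (share of questions where the verdict matches the human majority vote) and a Kappa agreement coefficient. The snippet below is a sketch of how those numbers could be computed with standard library calls; it is an assumption about the implementation, not the authors' code, and it uses Cohen's kappa as the chance-corrected agreement measure.

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

def agreement_with_humans(human_labels, model_labels):
    # Accuracy: proportion of correctly matched verdicts; kappa: chance-corrected agreement.
    return {"accuracy": accuracy_score(human_labels, model_labels),
            "kappa": cohen_kappa_score(human_labels, model_labels)}

# toy example with verdict labels over {"assistant_1", "assistant_2", "tie"}
print(agreement_with_humans(["assistant_1", "tie", "assistant_2", "assistant_1"],
                            ["assistant_1", "assistant_2", "assistant_2", "assistant_1"]))
```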
2308.07201#15 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | We discuss the importance of diverse role prompts in Section 4.1, the effect of different communication strategies in Section 4.2, and the impact of role numbers and discussion turns in Section 4.3. If not specified otherwise, we choose the FairEval benchmark and ChatGPT as the backbone LLM for the analysis. 4.1 THE IMPORTANCE OF DIVERSE ROLE PROMPTS Previously in Table 1 and 2, we demonstrate that ChatEval equipped with diverse role configura- tions can significantly improve the performance of evaluation. We further consider whether it is necessary to design diverse role prompts for the evaluation system. To answer so, we carry out the experiments by replacing all the role prompt with â You are now an Annotator, one of the referees in the text evaluation task.â and keeping other prompt unchanged. We experiment with the one-by-one communication strategy and 2 agents with 2 discussion turns. The results in Table 3 illustrate that ChatEval with the same role prompt design underperforms that with diverse role prompt design and cannot effectively enhance the performance compared with single-agent setting, highlighting the cruciality of diverse role prompt design in the multi-agent debate framework. 4.2 THE STUDY OF COMMUNICATION STRATEGIES As shown in Figure 2, we also design three different communication strategy termed as one-by-one, simultaneous-talk, simultaneous-talk-with-summarizer. The detailed descriptions and formal for- 6 mulations can be found in Appendix B. We experiment with 3 agents and 2 discussion turns with diverse role prompts in this section. As is shown in Table 4, we can find that the one-by-one commu- nication strategy is more effective than other strategies for ChatGPT setting. Although the other two communication strategies did not perform as robustly as the one-by-one strategy, it is noteworthy that they still exceeded the performance of the naive single-agent method. Furthermore, the vari- ations in performance among three different communication strategies underscore the influence of different strategies on the effectiveness of the evaluation process, revealing the potential for further exploration and optimization of ChatEval. Thus, future studies could be aimed at a more com- prehensive understanding of different communication strategies, and how they could be effectively employed to enhance performance. This could serve as an avenue for substantial improvements and novel insights in the multi-agent debate framework. # 4.3 THE IMPACT OF ROLE NUMBERS AND DISCUSSION TURNS | 2308.07201#14 | 2308.07201#16 | 2308.07201 | [
"2303.04048"
]
|
2308.07201#16 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | We then study the impact of different role numbers and discussion turns. From Figure 3a, a dis- cernible trend is observed in the relationship between the role number and both Acc. and Kap. within the FairEval dataset. As the role number increases, there is a corresponding growth in Acc. and Kap. This pattern reaches an apex with an Acc. of 62.5% at role numbers 3 and 4 before de- clining at role number 5. Despite this decline in accuracy, Kap. continues to ascend between role numbers 3 and 4, underscoring the effectiveness of incorporating diverse roles within ChatEval. Conversely, no significant upward trend is detected with respect to the increase in discussion turns, as is shown in Figure 3b. This observation aligns with the findings in Liang et al. (2023); Du et al. (2023), highlighting a consistent phenomenon where continual discussion often leads to stagna- tion or even degradation of performance. Such a trend may be attributed to issues associated with the ever-increasing context length, which consequently diminishes the performance. This insight prompts a more nuanced understanding of the balance needed between role differentiation and dis- cussion dynamics to optimize the performance of ChatEval. # 4.4 QUALITATIVE ANALYSIS Table 5 showcases the debate process towards the evaluation of two assistantsâ responses to the open-ended question â What are the most effective ways to deal with stress?â . Assistant 1â s response is : | 2308.07201#15 | 2308.07201#17 | 2308.07201 | [
"2303.04048"
]
|
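
For the Topical-Chat results discussed above, the reported numbers are turn-level Spearman and Kendall-Tau correlations between model-assigned and human scores for each aspect. A minimal sketch of that computation (the helper name is an assumption) using scipy:

```python
from scipy.stats import kendalltau, spearmanr

def turn_level_correlation(model_scores, human_scores):
    rho, _ = spearmanr(model_scores, human_scores)    # Spearman rank correlation
    tau, _ = kendalltau(model_scores, human_scores)   # Kendall-Tau correlation
    return {"spearman": rho, "kendall_tau": tau}

print(turn_level_correlation([4.0, 3.5, 2.0, 5.0], [5.0, 3.0, 2.5, 4.5]))
```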
2308.07201#17 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Here are some effective ways to deal with stress: 1. Exercise regularly: Physical activity can help reduce stress and improve mood. 2. Practice relaxation techniques: Deep breathing, meditation, and yoga are some effective ways to relax and reduce stress levels. 3. Get enough sleep: Lack of sleep can increase stress levels. Try to get at least 7-8 hours of sleep each night. 4. Eat a healthy diet: A well-balanced diet can help reduce stress levels and improve overall health. 5. Connect with others: Spending time with friends and family, joining a support group, or participating in social activities can help reduce stress levels and improve mood. 6. Time management: Prioritizing tasks and creating a schedule can help reduce stress and increase productivity. 7. Seek professional help: If stress is impacting your daily life, consider seeking professional help such as therapy or counseling. | 2308.07201#16 | 2308.07201#18 | 2308.07201 | [
"2303.04048"
]
|
2308.07201#18 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Assistant 2â s response is : Stress can be caused by a variety of factors, including work, relationships, health, and financial issues. Here are some effective ways to manage stress: 1. Exercise: Regular exercise releases endorphins, which are natural stress- relievers. It can be as simple as going for a walk or jog, practicing yoga, or joining a fitness class. 2. Mindfulness: Mindfulness practices, such as meditation or deep breathing, can 7 help to reduce stress and improve focus. 3. Social Support: Spending time with family and friends, or seeking support from a mental health professional, can provide a sense of connection and help to reduce feelings of isolation and loneliness. 4. Healthy Eating: A balanced diet that includes plenty of fruits, vegetables, and whole grains can help to reduce stress levels. 5. Good Sleep: Getting enough sleep is essential for good mental and physical health. Aim for 7-8 hours per night and establish a regular sleep routine. 6. Time Management: Prioritizing tasks and setting realistic goals can help to reduce stress levels by giving a sense of control and direction. 7. Relaxation Techniques: Techniques such as progressive muscle relaxation, aro- matherapy, and massage can help to reduce stress levels and promote relaxation. 8. Seek professional help: If stress is interfering with daily life, it may be time to seek professional help from a therapist or counselor. We can find that both of the responses produce similar strategies and equally compelling descriptions for dealing with stress, making it challenging to discern significant disparity in terms of quality. It is in this context of nuanced evaluation that the significance of the ChatEval process emerges. To understand this complexity better, We first outline the ChatEval process and subsequently delve into the agentsâ constructive behaviors during discussions. As is shown in Table 5, Alice first points out that the response of Assistant 2 contains more detailed information and he prefers to choose Assistant 2 as a better response. Bob then says that she agrees with Aliceâ s assessments, but in the meantime, she also points out that Assistant 1â s response is also concise and carries out a thought-provoking question. Carol then gives the feedback that she believes both responses are equally valuable. In the subsequent discussion, Bob indicates that Assistant 1â s response is straightforward while Assistant 2â | 2308.07201#17 | 2308.07201#19 | 2308.07201 | [
"2303.04048"
]
|
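The one-by-one discussion described above reduces to a simple loop: each agent, given its own role prompt and the shared chat history, speaks in turn, and a verdict is extracted once the debate ends. The sketch below only illustrates that loop under stated assumptions; `query_llm`, the persona strings, and the majority-vote aggregation are hypothetical placeholders, not ChatEval's actual implementation.

```python
# Minimal sketch of a one-by-one multi-agent debate for pairwise response evaluation.
# Assumptions: `query_llm` stands in for any chat-completion API call, and the
# personas and majority-vote aggregation are illustrative, not ChatEval's own code.

def query_llm(system_prompt: str, messages: list[str]) -> str:
    """Placeholder for a chat-model call (e.g., ChatGPT); returns the agent's reply."""
    raise NotImplementedError


def debate_evaluate(question: str, response_1: str, response_2: str,
                    roles: dict[str, str], num_rounds: int = 2) -> str:
    task = (
        f"Question: {question}\n"
        f"Assistant 1's response: {response_1}\n"
        f"Assistant 2's response: {response_2}\n"
        "Discuss which response is better, or whether they are tied."
    )
    history: list[str] = []  # shared transcript visible to every agent
    for _ in range(num_rounds):
        for name, persona in roles.items():
            # One-by-one strategy: each agent sees all earlier turns before replying.
            reply = query_llm(f"You are {name}, {persona}", [task] + history)
            history.append(f"{name}: {reply}")
    # Ask each agent for a final verdict, then take a simple majority vote.
    verdicts = [
        query_llm(f"You are {name}, {persona}",
                  [task] + history + ["Reply with your final verdict: '1', '2', or 'tie'."])
        for name, persona in roles.items()
    ]
    return max(set(verdicts), key=verdicts.count)


# Hypothetical personas in the spirit of the Alice/Bob/Carol discussion above.
roles = {
    "Alice": "a critic who values detail and completeness.",
    "Bob": "a reviewer who values conciseness and clarity.",
    "Carol": "a neutral member of the general public.",
}
```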
2308.07201#19 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | s is detailed, suggesting that the effectiveness of the response should depend on the context and the individual's needs. At the end of the debate, we extract the final evaluation result that both responses are of the same quality, which is identical to the human annotation. From this sequence, we can pinpoint several fascinating behaviors exhibited by the agents: (1) Opening Statement: Alice initiates the debate with a clear stance, establishing the foundational argument and guiding the trajectory of the subsequent discourse. (2) Alternative Proposal: Bob introduces an alternative viewpoint, emphasizing the need to consider diverse interpretations. This not only broadens the discussion but also stimulates critical thinking. In the context of a debate, the introduction of an alternative proposal prevents the stagnation of thought, challenges pre-existing biases, and uncovers considerations that might otherwise be overlooked, ensuring that the discussion is well-rounded. (3) Stance Maintenance: | 2308.07201#18 | 2308.07201#20 | 2308.07201 | [
"2303.04048"
]
|
2308.07201#20 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Alice's persistent adherence to her initial stance, even when faced with opposing views, exemplifies commitment and challenges other participants to refine their perspectives. By firmly holding her position, Alice encourages depth in the discourse, prompting others to dive deeper into their arguments and perhaps consider aspects they hadn't previously. It ensures the conversation remains robust, focused, and continually evolving, driving all participants to a higher level of engagement and critical thinking. (4) Seeking Consensus: The discussion's climax reveals a collective agreement amongst the participants, reached through mutual understanding and compromise, underlining the value of each presented viewpoint. In light of the above, ChatEval stands out not just as a tool for comparison but as an embodiment of interactive natural language dialogue. By simulating human argumentative interactions, it differentiates itself from static, singly presented opinions. This dynamic interaction showcases the richness and complexity of language, capturing nuances often missed in singular viewpoints. As such, ChatEval offers a reliable evaluation process that not only mirrors human discourse but also highlights the transformative power of collaborative dialogue. This positions it uniquely, underscoring its significant potential to execute text evaluation tasks both reliably and effectively. 5 RELATED WORK Automatic NLG evaluation In the landscape of NLG, evaluating the quality of generated text represents a particularly arduous task. For a significant period, evaluation was primarily dependent on | 2308.07201#19 | 2308.07201#21 | 2308.07201 | [
"2303.04048"
]
|
2308.07201#21 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Table 3: Effect of diverse role specification on the FairEval benchmark (Evaluator: ChatGPT). Single-Agent: Acc. 53.8%, Kap. 0.27; Multi-Agent (Same Role Prompt): Acc. 53.8%, Kap. 0.25; Multi-Agent (Diverse Role Prompt): Acc. 60%, Kap. 0.33. Table 4: Comparison of different communication strategies on the FairEval benchmark (Evaluator: ChatGPT). One-by-One: Acc. 60%, Kap. 0.33; Simultaneous-Talk: Acc. 55%, Kap. 0.28; Simultaneous-Talk-with-Summarizer: Acc. 55%, Kap. 0.27. human annotations, a process that is labor-intensive and limited by scalability issues. Automatic NLG evaluation attempts to address these challenges by leveraging computational models to assess the quality of generated text. Previous work falls into the following categories: (1) n-gram-based metrics: ROUGE (Lin, 2004) is a set of metrics that compute the amount of overlap between n-grams in the machine-generated summaries and the reference summaries. BLEU (Papineni et al., 2002) compares the generated text with reference translations based on the co-occurrence of n-grams in both texts. Despite being easy to use and widely adopted, these methods are incapable of capturing syntactic and semantic similarity (Stent et al., 2005). (2) embedding-based metrics: Word embeddings are vector representations of words that capture their semantic properties, such that words with similar meanings have similar embeddings. A body of work leverages word embeddings to evaluate the semantic similarity between two pieces of text. BERTScore (Zhang et al., 2019) uses contextualized word embeddings from transformer models like BERT (Devlin et al., 2018), while BLEURT (Sellam et al., 2020) utilizes supervised training data to enhance performance. MoverScore (Zhao et al., 2019) combines contextualized word embeddings with Earth Mover's Distance (Rubner et al., 2000). (3) LLM-based metrics: Amidst the flourishing advancement of LLMs, which embody a wealth of information derived from extensive training data, using LLMs as evaluators has experienced notable progress. | 2308.07201#20 | 2308.07201#22 | 2308.07201 | [
"2303.04048"
]
|
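For concreteness, the n-gram and embedding-based metrics above can be computed with widely used open-source packages. The snippet below is a small, hedged example using the `rouge-score` and `bert-score` libraries on invented candidate/reference strings; it illustrates the two metric families and does not reproduce any result reported in this paper.

```python
# Hedged example: ROUGE-L (n-gram/LCS overlap) and BERTScore (contextual-embedding
# similarity) for one candidate against one reference. Requires the `rouge-score`
# and `bert-score` packages; the texts are invented for illustration only.
from rouge_score import rouge_scorer
from bert_score import score as bert_score

candidate = ["Regular exercise and enough sleep help reduce stress."]
reference = ["Exercising regularly and sleeping well are effective ways to manage stress."]

# n-gram-based: ROUGE-L scores the longest common subsequence between the texts.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge_l = scorer.score(reference[0], candidate[0])["rougeL"].fmeasure

# embedding-based: BERTScore matches tokens via contextual embedding similarity.
precision, recall, f1 = bert_score(candidate, reference, lang="en")

print(f"ROUGE-L F1: {rouge_l:.3f}  BERTScore F1: {f1.item():.3f}")
```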
2308.07201#22 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | GPTScore (Fu et al., 2023) utilizes conditional probability to assign a text a score representing its quality. Wang et al. (2023a) explore the potential of utilizing ChatGPT as an NLG evaluator by prompting it to score a text directly. Wang et al. (2023c) curate a reliable dataset containing pairwise comparisons and evaluation explanations, which can be used to train a foundation model into a better evaluator. Bai et al. (2023) propose decentralized evaluation to provide fairer evaluation results. G-EVAL (Liu et al., 2023b) proposes a probability-weighted technique to calibrate the score given by a single LLM. Communicative Agents Most recently, significant attention has been dedicated to the development of communicative agents. These agents, often powered by LLMs like ChatGPT or GPT-4, are designed to interact and communicate effectively with other agents or human users using natural language. The primary goal is to facilitate more productive and efficient interaction and collaboration, as different agents can autonomously communicate and negotiate to tackle a more complex task collectively. Several studies have explored various aspects of communicative agents. Li et al. (2023a) propose a cooperative agent framework dubbed role-playing, enabling agents to autonomously cooperate to solve complex tasks. Park et al. (2023) create a sandbox environment consisting of 25 individual virtual entities, each endowed with a character description and memory system; every agent is capable of autonomously interacting with other agents and the environment, simulating reliable human behavior. Qian et al. (2023) establish a chat-based software development framework that can complete a software design and produce executable software at a reduced cost compared to recruiting human programmers. Liu et al. (2023a) utilize a sandbox environment to curate reliable datasets that are better aligned with human preferences and use them to train a socially aligned LLM. Liang et al. (2023) and Du et al. (2023) also make use of the multi-agent debate framework in other scenarios, such as translation and arithmetic problems, achieving better results. Wang et al. (2023d) propose an alternative method called self-collaboration that enables the communication of agents by utilizing a single LLM prompted with multi-persona descriptions. Mandi et al. (2023) propose a novel framework | 2308.07201#21 | 2308.07201#23 | 2308.07201 | [
"2303.04048"
]
|
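As a rough sketch of the LLM-based direction, and of the probability-weighted calibration idea attributed to G-EVAL above: rather than keeping the single score an LLM emits, one can weight each candidate score by the probability the model assigns to it and take the expectation. The helper `get_score_token_probs` below is hypothetical; the sketch illustrates only the weighting step, not the authors' implementation.

```python
# Sketch of probability-weighted LLM scoring: the evaluator LLM is prompted to rate
# a text on a 1-5 scale, and instead of taking its single output we compute the
# expected score under the model's probabilities for each score token.
# `get_score_token_probs` is a hypothetical helper that would read token log-probs
# from an LLM API; it is not a real library call.

def get_score_token_probs(eval_prompt: str, scores=(1, 2, 3, 4, 5)) -> dict[int, float]:
    """Hypothetical: return P(score token | evaluation prompt) for each candidate score."""
    raise NotImplementedError


def probability_weighted_score(eval_prompt: str) -> float:
    probs = get_score_token_probs(eval_prompt)
    total = sum(probs.values())
    # Expected score under the renormalized distribution over the score range.
    return sum(s * p for s, p in probs.items()) / total
```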
2308.07201#23 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Table 5: An illustrative example from ChatEval presenting a discussion process involving participants Alice ( ), Bob ( ), and Carol ( ). They initiate a group talk with the purpose of determining the superior response from two assistants. Due to spatial constraints within the layout, we use ... to omit some redundant text. Alice: Both Assistant 1 and Assistant 2 provided comprehensive and detailed answers. They both suggested similar strategies for dealing with stress, such as exercise, relaxation techniques, getting enough sleep, eating a healthy diet, connecting with others, time management, and seeking professional help. However, Assistant 2 went a step further by mentioning mindfulness, social support, and relaxation techniques like progressive muscle relaxation, aromatherapy, and massage. Therefore, I would lean towards Assistant 2 for providing a slightly more detailed response. | 2308.07201#22 | 2308.07201#24 | 2308.07201 | [
"2303.04048"
]
|
2308.07201#24 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Bob: I agree with Alice's assessments. However, I would like to point out that Assistant 1's answer was more concise and to the point, that ... . On the other hand, Assistant 2's answer was more detailed and ... . In your opinion, does the additional detail provided by Assistant 2 make its response more helpful, or could the conciseness of Assistant 1's response be seen as more beneficial? In terms of consistency with the question, both responses are equally consistent as they both provided effective ways to deal with stress. | 2308.07201#23 | 2308.07201#25 | 2308.07201 | [
"2303.04048"
]
|
2308.07201#25 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Assistant 1's response was indeed more concise, while Assistant 2 provided more detailed techniques. The effectiveness of these responses could depend on the individual's preference for brevity or detail. Therefore, I believe both responses are equally valuable. Alice: I appreciate the points made by both of you. While Assistant 1's response was indeed more concise, I believe that the additional detail provided by Assistant 2 could be more beneficial for someone looking for a variety of strategies to manage stress. The inclusion of mindfulness practices and ... . Therefore, I still lean towards Assistant 2's response as being slightly more helpful. Bob: However, I would like to emphasize that the effectiveness of a response can also depend on the context and the individual's needs. For someone who is overwhelmed and looking for quick, straightforward advice, the conciseness of Assistant 1's response might be more beneficial. On the other hand, for someone who is looking for a variety of strategies to try, the detailed response of Assistant 2 might be more helpful. Therefore, I believe both responses have their merits and it's difficult to definitively say one is better than the other. What do you think about this perspective? | 2308.07201#24 | 2308.07201#26 | 2308.07201 | [
"2303.04048"
]
|