CALYPSO: LLMs as Dungeon Masters' Assistants

Andrew Zhu, Lara J. Martin, Andrew Head, Chris Callison-Burch

# Abstract

The role of a Dungeon Master, or DM, in the game Dungeons & Dragons is to perform multiple tasks simultaneously. The DM must digest information about the game setting and monsters, synthesize scenes to present to other players, and respond to the players' interactions with the scene. Doing all of these tasks while maintaining consistency within the narrative and story world is no small feat of human cognition, making the task tiring and unapproachable to new players. Large language models (LLMs) like GPT-3 and ChatGPT have shown remarkable abilities to generate coherent natural language text. In this paper, we conduct a formative evaluation with DMs to establish the use cases of LLMs in D&D and tabletop gaming generally. We introduce CALYPSO, a system of LLM-powered interfaces that support DMs with information and inspiration specific to their own scenario. CALYPSO distills game context into bite-sized prose and helps brainstorm ideas without distracting the DM from the game. When given access to CALYPSO, DMs reported that it generated high-fidelity text suitable for direct presentation to players, and low-fidelity ideas that the DM could develop further while maintaining their creative agency. We see CALYPSO as exemplifying a paradigm of AI-augmented tools that provide synchronous creative assistance within established game worlds, and tabletop gaming more broadly.
# Introduction
Dungeons & Dragons (D&D) (Gygax and Arneson 1974) is a tabletop role-playing game (TTRPG): a collaborative storytelling game where a group of players each create and play as their own character, exploring a world created by, and challenges set by, another player known as the Dungeon Master (DM). It is the DM's role to play the non-player characters and monsters, and to write the overarching plot of the game.
As a co-creative storytelling game, Dungeons & Dragons presents multiple unique challenges for AI systems aiming to interact with it intelligently. Over the course of a game, which is played out across multiple sessions spanning a long duration of time (often multiple months to years), the DM and the other players work together to produce a narrative grounded in commonsense reasoning and thematic consistency (Ammanabrolu et al. 2020; Bergström 2011).
[Figure 1 graphic: panels labeled Roll Encounter (Encounter Table), Encounter Understanding (GPT-3), Focused Brainstorming (ChatGPT), and Encounter Scene (Human), illustrated with a Dire Wolf encounter in a forest.]
Figure 1: After rolling a random encounter (red), DMs can use LLMs with CALYPSO to help generate an encounter scene and digest information about monsters. CALYPSO can present monster information concisely (green) and brainstorm conversationally (purple) to help build a compelling narrative to present to players.
As the group plays for longer, the players define more of the world and ad-hoc rules for interacting with it (van Velsen, Williams, and Verhulsdonck 2009). In order to make in-character decisions, each individual player must maintain a personal understanding of the game world which they build from the game history (Martin, Sood, and Riedl 2018) while keeping track of what information other players and their characters know (Zhou et al. 2023).
By using an AI co-DM tool, human DMs can devote more mental energy to cognitively demanding tasks of being a DM, such as improvising dialog of NPCs (non-player characters) or repairing the script of their planned campaign. Furthermore, an AI co-DM would drastically reduce the barrier of entry into DMing. Therefore, an AI co-DM tool would be invaluable to the D&D community.
An effective AI co-DM tool should not only produce coherent and compelling natural language output for a DM to effectively use for inspiration, but also account for an immense amount of background context and requirements for internal consistency, both within D&D rules and within a given scenario or campaign. Large language models (LLMs), such as GPT-3 (Brown et al. 2020) and ChatGPT (OpenAI 2022), have shown impressive abilities to generate
coherent text. Some (Callison-Burch et al. 2022; Zhu et al. 2023) have even applied LLMs to the problem of D&D dialog and narrative by finetuning the models with structured information. Whereas these works used structured information scraped from user data to fine-tune a single model, we use existing data in D&D source books to improve generation using zero-shot prompting with multiple models.
In this paper, we present a study in which we created an LLM-augmented tool to assist DMs in playing D&D. We employed the following methods:
1. We interviewed DMs to understand how they digest game information and learn design motivations for AI assistants in the domain.
2. We created a gameplay setting that allowed us to study D&D gameplay on a larger scale than other recent works and invited 71 players to participate.
3. We created a system of three LLM-powered interfaces, which we call CALYPSO (Collaborative Assistant for Lore and Yielding Plot Synthesis Objectives), that DMs and players could use as they played D&D, and studied the ways in which DMs and players incorporated them into their creative process over four months using established HCI methods.
We show that language models are capable "co-DMs": not a player in the same way that the human players and DM are, but still a synchronous agent that acts as a guide for the human DM. We provide insights into how TTRPG players actually want to use these tools and present validated solutions that can extend beyond the D&D domain. Our study shows that a system designed with these motivations in mind saw consistent prolonged usage among a community of creative writers.
# 2 Background and Related Work

## 2.1 Dungeons & Dragons in the Time of COVID

Traditionally, Dungeons & Dragons is played in person. Players use physical character sheets and monster stats referenced from books containing hundreds of prewritten "stat blocks" (as pictured in Figure 2a) (Perkins et al. 2014). DMs have the option to create a world of their own to play in (also sometimes called "homebrewing" a setting) or to set their game in a professionally written "module": a book containing a detailed outline of an adventure, including the setting, non-player characters, predesigned challenges and monster encounters, and lore. Previous works have explored methods of how to present information in these existing settings more clearly to DMs, such as through a computer-generated adventure flowchart (Acharya, Mateas, and Wardrip-Fruin 2021) or recommender systems for relevant entities in a scene (Perez, Eisemann, and Bidarra 2021).
Since the beginning of the COVID-19 pandemic, there has been a shift towards playing D&D online (Yuan et al. 2021). Rather than using physical character sheets and reference books while playing in person, a large number of groups instead play virtually using tools like D&D Beyond (2017) for virtual character sheets and reference books, Discord for messaging, virtual tabletops like Foundry (Foundry Gaming, LLC 2019) to simulate maps, and game state trackers like Avrae (Zhu and D&D Beyond 2016) to track character and monster stats. For inspiration and immersion, DMs also use online tools like dScryb (2020), which provides prewritten text, Tabletop Audio (Roven 2014), which provides soundboards and soundscapes, and random tables published in D&D source books (Crawford, Perkins, and Wyatt 2014), which provide a prewritten set of options for specific scenarios (e.g. encountering a dragon).
## 2.2 Large Language Models and D&D

Large language models (LLMs) are a recent development in the area of Natural Language Processing that have demonstrated emergent capabilities of understanding users' input and replying directly in the user's language (cf. a machine language). Based on the Transformer neural architecture (Vaswani et al. 2017), they are capable of learning user-defined tasks with no additional training ("few-shot" or "in-context" learning) and referencing concepts defined in their large training corpus (Brown et al. 2020).
Although there has been some work looking at playing Dungeons & Dragons using earlier neural language models (Louis and Sutton 2018; Martin, Sood, and Riedl 2018; Rameshkumar and Bailey 2020), the introduction of LLMs has created a renewed interest in researching tabletop gaming. Callison-Burch et al. (2022) frame D&D as a dialogue challenge and examine whether LLMs are capable of predicting a player's next utterance based on the conversational history, finding that local game-specific state context is important for grounded narrative generation. Newman and Liu (2022) use LLMs to generate novel material (namely spells) that is consistent with the style and rules of the game. Zhou et al. (2023) create a system that models the intents of D&D players using LLMs to inform a surrogate Theory of Mind. Zhu et al. (2023) instrument a game state tracker to provide concrete actor stats and combat state, finding that LLMs are capable of producing interesting roleplay in combat scenarios and predicting the action a player will take. They highlight the importance of player and DM agency in LLM-generated texts, proposing […]
Santiago et al. (2023) have proposed multiple scenarios where LLMs and other generative AI models may be used to assist DMs, and discuss the ways AI may be used. In this workshop paper, they hypothesize the potential for AI to help inspire and take cognitive burden off the DM and provide brainstorming inspiration, but also weaknesses where AI may fall back onto overused tropes or underrepresent minority groups. In this work, we explore and expand upon many of these hypotheses through interviews with DMs. We create a system where DMs can fluently incorporate an LLM into their creative process and run a broad study on its use and failure cases.
LLMs have been explored as a writing assistant in other modalities as well, using various methods to assist in collaboratively building a narrative. These works have examined the use of conversational agents (Coenen et al. 2021; Ippolito et al. 2022), writing in established settings (Akoury et al. 2020), and other human-in-the-loop methods (Chung et al. 2022; Roemmele and Gordon 2015; Samuel, Mateas, and Wardrip-Fruin 2016; Calderwood et al. 2020; Yang et al. 2022; Kreminski et al. 2022). There has also been work proposing LLMs for multimodal co-creative frameworks (Lin, Agarwal, and Riedl 2022). Overall, these techniques differ from D&D and other TTRPGs in that they primarily focus on a single writer/creator interacting with the system, rather than the multi-player experience in TTRPGs where all players directly interact with the story.
To our knowledge, our work is the first to examine concrete implementations of multiple unique interaction modalities in and outside of combat scenarios and the ways D&D players interact with language models on this scale.
# 3 Design Motivation

To better understand the friction DMs face in looking up reference material midgame, we conducted interviews and ran workshop sessions with seven DMs (referred to as D1-7 below) from a wide range of backgrounds before creating our system. Participants ranged from 1 to 39 years of experience playing D&D (various editions). In these sessions, we asked DMs how they approached improvising encounters, i.e., running random encounters that are generated on the fly (usually by rolling on an encounter table). In random encounters, DMs do not have time to research the monster's stats and lore beforehand and think of backstories as to why the monster ended up in a particular setting. From these interviews, we identify several ways an AI system could be helpful to DMs:
Inspiration. As proposed by Santiago et al. (2023), we find that DMs desired the ability to use a language model to generate the first draft of an encounter, which they could then build on top of with their own ideas (D1-3). Different DMs envisioned giving the system varying amounts of control over the narrative. D3 expressed that they would want a system to write a scene that they would then vet and choose whether to present it verbatim to their players, edit it to their liking, or use as inspiration to overcome writer's block. D1 and D2 envisioned using the system's generation verbatim to present an initial scene to players while they either read the complete text of the monster description (D2) or to reduce cognitive load (D1).
Strategic Copilot. One DM mentioned that managing both narrative gameplay and tracking monster stats and mechanics overwhelmed their short-term memory, and expressed interest in a system that could aid them in making strategic decisions and acting as a high-level copilot. They expressed that the large amount of low-level management was a barrier to them running more D&D, and that they wanted to "feel more like an orchestra conductor over someone who's both putting down the train tracks AND fueling the train" (D4).
Another DM said that DMs often fail to take into account monsters' unique abilities and stats when running encounters, making simplifications to manage a large number of monsters. For example, a monster with very high intelligence and low dexterity attempting to move sneakily "should know not to move and make a bunch of noise" (D6).
Thematic Commonsense. We asked DMs what parts of monsters' game statistics they found to be the most important for their understanding of how to use a monster in their game, and found that multiple DMs used a concept of "baseline" monsters to gain a broad understanding of a monster when they first encounter it. The idea of the baseline monster was not to find a specific monster to compare another to, but to determine which parts of an individual monster's game statistics to focus on, and which parts to use prior thematic commonsense to fill in.
In this context, we define thematic commonsense as the DM's intuitive understanding of D&D as a game with medieval fantasy themes, and how they might draw inspiration from other works of fantasy literature. For example, a DM might intuitively understand that a dragon is a kind of winged reptile with a fire breath based on their consumption of other fantasy works, reason that all dragons are capable of flight, and focus on a particular dragon's unique abilities rather than flight speed (D7). Although D&D reference material does not include an explicit description of the dragon's fire breath, the DM might base their narration on depictions of fire breath from other authors.
We find this similar to the idea of a genus-differentia definition (Parry and Hacker 1991), in that DMs use their general background understanding of fantasy settings to define their personal genus and supplement prior knowledge by skimming monster reference books for differentia. This suggests that co-DM systems should focus on helping DMs extract these differentiae, and that they also require the same extensive background knowledge as the user. For the D&D domain, we believe that LLMs such as GPT-3 (Brown et al. 2020) have included sufficient information on the game and the game books themselves in their training corpus so as to establish such a background knowledge. However, we are interested in methods for establishing this thematic commonsense knowledge for works not included in models' training data in future work.
Simple Language. Multiple DMs emphasized that they would like a co-DM system to present monster information in plain language, rather than the elaborate prose found in game reference manuals (D3-6). As a work of fantasy literature, D&D publications (including reference manuals) often use heavy figurative language and obscure words. For example, the first paragraph of an owlbear's description reads:
> An owlbear's screech echoes through dark valleys and benighted forests, piercing the quiet night to announce the death of its prey. Feathers cover the thick, shaggy coat of its bearlike body, and the limpid pupils of its great round eyes stare furiously from its owlish head (Crawford, Mearls, and Perkins 2018, pg. 147).
This style of description continues for seven additional paragraphs. On average, across all D&D monsters published on D&D Beyond, a monster's description and list of abilities contains 374 words (min: 0, max: 2,307). DMs often use multiple monsters together in the same encounter, compounding the amount of information they must hold in their mind.
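Statistics like these are a straightforward word count over monster entries. The sketch below is a minimal illustration, assuming a hypothetical `monsters` mapping from monster names to the full text of their description and abilities:

```python
# Minimal sketch: description-length statistics over monster entries.
# `monsters` is a hypothetical {name: full description text} mapping.
def description_stats(monsters: dict[str, str]) -> tuple[float, int, int]:
    counts = [len(text.split()) for text in monsters.values()]
    return sum(counts) / len(counts), min(counts), max(counts)

mean_words, min_words, max_words = description_stats({
    "Owlbear": "An owlbear's screech echoes through dark valleys...",
    "Example Monster": "",  # some entries have no description at all (min: 0)
})
```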
Monster entries often include descriptions of the monster, its abilities, and lore. Some DMs' preferred method of referencing monster lore while running the game was to skim the full monster entry, and the complex and long prose often led to DMs feeling overwhelmed (D4, D5). Other DMs wanted a short and salient mechanical description (i.e., one focusing on the monster's game abilities and actions), rather than a narrative (lore- and history-focused) one (D3, D6).
Overall, the complexity of monster descriptions led DMs to forget parts of monsters' lore or abilities during gameplay (D5) or use overly broad simplifications that did not capture an individual monster's uniqueness (D6). While offline resources exist to help DMs run monsters (e.g. Amman (2019)), they cannot account for the environment or generate a unique scenario for each encounter with the same monster. We believe that LLMs' capability to summarize and generate unique material is particularly applicable to these challenges.
# 4 Implementation
In this section, we describe the three interfaces we developed to provide DMs with the sorts of support they desired. These interfaces were designed with "in the wild" deployment in mind:
1. Encounter Understanding: a zero-shot method to generate a concise setup of an encounter, using GPT-3.
2. Focused Brainstorming: a conversational method for DMs to ask additional questions about an encounter or refine an encounter summary, using ChatGPT.
3. Open-Domain Chat Baseline: a conversational interface without the focus of an encounter, using ChatGPT.
Our implementation differs from other efforts to develop AI-powered co-creative agents in two ways. First, compared to models where the AI acts as the writer, AI-generated content is not necessarily directly exposed to the audience. CALYPSO only presents ideas to a human DM, who has final say over what is presented to the players. Second, compared to co-writing assistants where the writer has plentiful time to iterate, the time between idea and presentation is very short. Since the DM uses CALYPSO in the midst of running a real game, CALYPSO should be frictionless to adopt and should not slow down the game.
## 4.1 Encounter Understanding

The first interface we provided to DMs was a button to use a large language model to distill down game statistics and lore available in published monster stat blocks. To accomplish this, we prompted GPT-3 (Brown et al. 2020) (specifically, the text-davinci-003 model) with the text of the chosen encounter, the description of the setting the encounter was taking place in, and the game statistics and lore of each monster involved in the encounter. The full prompts are available in Appendix A.
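To make the shape of this input concrete, the sketch below assembles that context into a single prompt string. This is a minimal sketch rather than the authors' implementation: the "name", "stats", and "lore" field names are hypothetical placeholders, and the exact prompt wording appears in the paper's Appendix A.

```python
# Minimal sketch of assembling encounter context into a zero-shot prompt.
# Field names ("name", "stats", "lore") are hypothetical placeholders.
def build_encounter_prompt(instructions: str, encounter: str,
                           setting: str, monsters: list[dict]) -> str:
    parts = [instructions, f"Encounter: {encounter}", f"Setting: {setting}"]
    for monster in monsters:
        parts.append(f"Monster: {monster['name']}\n"
                     f"Stats: {monster['stats']}\n"
                     f"Lore: {monster['lore']}")
    return "\n\n".join(parts)
```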
We began by presenting the LLM with the task to summarize monsters' abilities and lore and the environment. We collected feedback from DMs after generating the extracted information by allowing them to select a positive or negative feedback button, and optionally leave comments in an in-app modal. This interaction is illustrated in Figure 2.
Summarization. At first, we prompted GPT-3 to "summarize the following D&D setting and monsters for a DM's notes without mentioning game stats," then pasted verbatim the text description of the setting and monster information. For decoding, we used a temperature of 0.9, top-p of 0.95, and frequency and presence penalties of 1. Based on feedback from DMs (discussed in Section 6.1), we later changed to a more abstract "understanding" task described below.
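In code, this variant amounts to a single completion request with the decoding settings above. The sketch assumes the legacy OpenAI Python SDK (openai<1.0) that was contemporary with text-davinci-003; the output token budget is illustrative, as the paper does not state one.

```python
import openai  # assumes the legacy SDK (openai<1.0); openai.api_key set elsewhere

SUMMARIZE = ("Summarize the following D&D setting and monsters "
             "for a DM's notes without mentioning game stats.")

def summarize_encounter(context: str) -> str:
    # Decoding settings from the Summarization variant: temperature 0.9,
    # top-p 0.95, frequency and presence penalties of 1.
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"{SUMMARIZE}\n\n{context}",
        temperature=0.9,
        top_p=0.95,
        frequency_penalty=1.0,
        presence_penalty=1.0,
        max_tokens=512,  # output budget not stated in the paper; illustrative
    )
    return response["choices"][0]["text"].strip()
```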
Abstractive Understanding. In the understanding task, we prompted GPT-3 with the more abstract task to help the DM "understand" the encounter, along with explicit instructions to focus on the unique aspects of each creature, use information from mythology and common sense, and to mention how multiple creatures interact with each other. After these instructions, we included the same information as the Summarization task above. Finally, if a monster had no written description, we included instructions in place of the monster's description telling CALYPSO to provide the DM information from mythology and common sense. For decoding, we used a temperature of 0.8, top-p of 0.95, and a frequency penalty of 0.5.
{
"id": "1706.03762"
},
{
"id": "2304.01860"
},
{
"id": "2211.05030"
}
] |
4.2 Focused Brainstorming
To handle cases where a single round of information extraction was not sufficient, or where a DM had additional focused questions or ideas they wanted assistance elaborating, we also provided an interface that opens a private thread for focused brainstorming. Available at any time after an encounter was randomly chosen, this interface provided the same encounter information as the Encounter Understanding interface as an initial prompt to ChatGPT (i.e., gpt-3.5-turbo) (OpenAI 2022). If the DM had used the Encounter Understanding interface to generate an information block, we also provided it as context (Figure 4). The full prompts are available in Appendix A. For decoding, we used a temperature of 1, top-p of 0.95, and a frequency penalty of 0.3.
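In outline, each brainstorming thread is a growing ChatGPT message list seeded with the encounter context. A minimal sketch, again assuming the pre-1.0 `openai` client; the system prompt wording is paraphrased (the real prompts are in Appendix A) and the function names are ours:

```python
import openai


def start_brainstorm_thread(encounter_info: str,
                            understanding: str | None) -> list[dict]:
    """Seed a private brainstorming thread with the encounter context and,
    if available, the Encounter Understanding output."""
    system = (
        "You are CALYPSO, an assistant helping a Dungeon Master run a D&D "
        "encounter. Answer their questions about it.\n\n" + encounter_info
    )
    history = [{"role": "system", "content": system}]
    if understanding is not None:
        history.append({"role": "assistant", "content": understanding})
    return history


def brainstorm_reply(history: list[dict]) -> str:
    """Generate the next CALYPSO reply and keep it in the thread history."""
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=history,
        temperature=1,            # decoding settings listed above
        top_p=0.95,
        frequency_penalty=0.3,
    )
    reply = resp["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply
```

Each DM question is appended to `history` as a `user` turn before calling `brainstorm_reply`, which is what lets later questions build on earlier ideas without the DM repeating them.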
4.3 Open-Domain Chat Baseline
Finally, we made a baseline open-domain chat interface available to all players, without the focus of an encounter. As this interface was available at any time and open-ended, it helped provide a baseline for how DMs would use AI chatbots generally. To access the interface, users could run a bot command, which would start a new thread. We prompted ChatGPT to take on the persona of a fantasy creature knowledgeable about D&D, and generated replies to every message sent in a thread opened in this manner. For decoding, we used a temperature of 1, top-p of 0.95, and a frequency penalty of 0.3. Unlike the private threads created by the Focused Brainstorming interface, open-domain conversation threads were public and allowed other users to join.
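The baseline reduces to a persona system prompt plus one message history per Discord thread. A sketch under the same client assumptions as above; the persona wording and all names here are illustrative, not the deployed prompt:

```python
import openai

# Paraphrased persona; the deployed prompt wording is not reproduced here.
PERSONA = ("You are Calypso, a friendly fantasy creature who is deeply "
           "knowledgeable about Dungeons & Dragons.")

threads: dict[int, list[dict]] = {}  # Discord thread id -> chat history


def on_thread_message(thread_id: int, content: str) -> str:
    """Reply to every message posted in an open-domain chat thread."""
    history = threads.setdefault(
        thread_id, [{"role": "system", "content": PERSONA}]
    )
    history.append({"role": "user", "content": content})
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=history,
        temperature=1,            # decoding settings listed above
        top_p=0.95,
        frequency_penalty=0.3,
    )
    reply = resp["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply
```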
(a) An example of the kinds of information DMs reference in an encounter. Digesting the myriad sources of information and using them to build a compelling narrative is a challenging task.
(b) After rolling an encounter, the interface DMs used to interact with CALYPSO.
(c) An example of CALYPSOâs summary of the encounter.
Figure 2: Using the Encounter Understanding interface to generate a distilled summary of an encounter.
5 Experimental Setup
By deploying CALYPSO in the wild, we sought to learn how real DMs would adopt the new technology (if at all) and what emergent use cases would arise.
We set up a special "play-by-post living world" game, which we describe below, and invited 71 players and DMs (referred to as P1-P71) to participate by posting on D&D recruitment forums. While preserving the core foundations of D&D, our setup allowed us to conduct a large-scale study with a greater number of play sessions than we could by studying individual games of D&D.
In this section, we describe our methodology for setting up this large-scale D&D game.
5.1 D&D Game Setup
All gameplay occurred on our Discord server. We used Avrae, a D&D Discord bot with over five million users, to facilitate gameplay. Avrae is commonly used to run D&D games in this fashion, so the large-scale game was familiar to players and DMs (Zhu et al. 2023). All participants were asked to review the server's research statement and to provide their informed consent before participating. Participants were compensated with free access to all published D&D game materials (worth $981.35). We explain the core differences between a traditional game of D&D and our setup here:
Play-by-Post. While D&D is most commonly played in person or over a virtual teleconference, a large number of players also play in a text-only mode known as "play-by-post". In play-by-post games, rather than acting out characters using voices and body movements, players narrate their characters' actions and speech in a textual format. This text-based modality allowed us to monitor a large number of play sessions, and allowed players to interface with language models without having to add an additional step to transcribe verbal play into text.
Living World. Our setup takes aspects from playing both prewritten modules and homebrew worlds. Traditionally, groups consist of 1 DM and 3-6 players playing in different worlds created by the DM, with regularly scheduled 3-4 hour play sessions (most commonly, once a week). To allow for a larger-scale study, all 71 players in our setting exist in the same world, which we created. To emulate traditional play sessions, players form groups of 3-6 (on average) to partake in self-contained quests in the setting, always returning to a central hub after each quest. Within the hub, players are free to interact with each other, allowing room for storytelling and character development through roleplay without a DM. Outside the hub, we created a diverse set of environments that players could explore, each with a short description and image.
5.2 Mode of Play
In the CALYPSO study, the primary mode of play was a random encounter system: players explore the established world, and the DM is given a set of monsters (randomly selected from a table of encounters) to use. The DM must then create an encounter involving those monsters for their players to play in. By giving DMs access to the interfaces described above, we study how language models can be used with focused local knowledge to help a DM understand complex monster lore and interactions, and incorporate new encounters into an existing setting. Figure 1 shows the flow of how an encounter is randomly chosen and how a DM might use CALYPSO to help build a compelling narrative.
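For illustration, a random encounter roll of the kind shown in Figure 2 (a d100 row lookup plus a second die for the monster count) can be sketched as follows; the table rows are invented stand-ins, not the study's actual encounter table:

```python
import random

# Illustrative stand-in for a tier-2 random encounter table (Figure 2 shows
# a real roll: 1d100 = 61, then 1d4 = 3 Owlbears). These rows are invented.
TIER_2_TABLE = [
    (range(1, 61), "Blink Dog", "1d12"),
    (range(61, 81), "Owlbear", "1d4"),
    (range(81, 101), "Displacer Beast", "1d2"),
]


def roll_die(spec: str) -> int:
    """Roll dice given an 'NdM' spec, e.g. '1d4'."""
    count, sides = spec.split("d")
    return sum(random.randint(1, int(sides)) for _ in range(int(count)))


def roll_encounter() -> tuple[str, int]:
    """Pick a table row with 1d100, then roll that row's die for the count."""
    d100 = random.randint(1, 100)
    for roll_range, monster, count_die in TIER_2_TABLE:
        if d100 in roll_range:
            return monster, roll_die(count_die)
    raise ValueError("encounter table does not cover this d100 roll")
```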
6 Results and Discussion
In this section, we compile the ways that DMs adopted CALYPSO's three interfaces into their games. We highlight the successes and failures of the LLMs and describe the emergent use cases that arose. We refer to the DMs who used CALYPSO as P1-P71 in this section.
6.1 Encounter Understanding
Summarization. DMs interacted with the summarization model in 37 encounters, indicating that the summary helped them understand the monsters and setting in 13 encounters and did not help in 7 encounters.
We found that DMs appreciated the summary for distilling long information from stat blocks (P10) or highlighting an aspect of the monster to focus on (P8). However, when prompted with the task to summarize the provided information, GPT-3 would focus too much on numeric game stats (despite the prompt stating not to) and would summarize the environment and monsters separately, often focusing too heavily on the environment (P3). For example, for an encounter involving 12 Blink Dogs, the model's output simply paraphrased the information written in the setting and stat block rather than suggesting more interesting interactions:
CALYPSO: A warm and calm autumnal forest with a river running through it, the players can travel through this area towards the distant mountain range. The creatures they may encounter here are blink dogs; they have an armor class of 13, 22 Hit Points, a speed of 40 ft., as well as skills in Perception and Stealth. They also possess limited intelligence and understand Sylvan though cannot speak it.

Although the information presented by CALYPSO is factually correct, DMs argued that simply rephrasing the information in the monster's reference material "doesn't really help in running an encounter" (P8) and generally wanted "more information about [the monsters] and why they might interact with [the players]" (P11). DMs wanted CALYPSO to generate details that were entirely omitted from the reference material, using the written reference only as a basis for understanding the monster.
Another common case where CALYPSO did not present a helpful summary was when a monster had no published prose description, only numeric game stats. Most commonly, such creatures were animals found in real life (e.g., a brown bear; P3) or relied entirely on a DM's thematic commonsense to run (e.g., a snow golem; P3). In these cases, the model would use only minimal external knowledge (e.g., noting that giant boars usually had "thick, protective skin") rather than using thematic commonsense to make up a more interesting narrative (discussed in Section 3). We found that in order for the LLM to use external knowledge about monsters rather than only what was written in their game stats, we had to prompt it with explicit instructions to do so. With these findings in mind, we remodeled the summarization task into the more abstract "understanding" task, in which we gave the model explicit instructions to use thematic commonsense.
Abstractive Understanding. Overall, DMs much preferred this more abstract task: over interactions in 114 encounters, DMs indicated that the summary helped them understand the monsters and setting in 55 encounters (48%, versus 35% for Summarization) and did not help in only 2 (2%, versus 19%). Figure 3 shows the breakdown of feedback for each interface.
Figure 3: DMs found the Abstractive Understanding method of distilling monster information more consistently helpful than the Summarization method.
Primarily, DMs using this model were satisfied with its ability to describe monsters based on real-life animals (P8), creature behavior (P3), interactions with players (P3), how to roleplay creatures (P12), and its general performance (P3, P8, P11-13). The most common undesired behavior DMs noticed was the model hallucinating, or making up, facts about the monsters, such as giving giant spiders telepathy (P15), an insect monster a poison ability (P8), or a frost salamander wings (P14). Given the same example encounter as above, the model's output includes much more useful information about appearance and monster behavior:
CALYPSO: The blink dogs are canine creatures, about the size of a large dog. Their fur is a mottled grey and white colour, and their eyes flash yellow when they use their supernatural ability to blink in and out of existence. They are surprisingly fast despite their size, and can easily keep up with most horses. In terms of behavior, the blink dogs are curious but shy creatures who prefer to avoid contact with other creatures unless absolutely necessary. If threatened, they will use their blinking power to confuse and disorientate their opponents before attacking with a pack mentality. The blink dogs of this forest seem particularly adept at working together as a unit and can prove a formidable foe if provoked.
As the Summarization and Abstractive Understanding tasks differ only in their prompting methods, this comparison demonstrates how sensitive prompting approaches can be. It is possible that GPT-3 is overfit on the word "summarize", and even subtle changes in prompt design encourage original and domain-specific generation. With these alterations, we found that LLMs are capable of synthesizing creative ideas about D&D monsters using the thematic commonsense contained in their training priors.
We also found that if we repeated a certain phrase multiple times in the prompt, e.g. "you may use information from mythology and common sense" (which could occur if an encounter involved multiple monsters with no published lore), the model would often include the phrase "mythology and common sense" verbatim in its output. This is likely a case of degenerate output (Holtzman et al. 2020). To prevent this, each time a set phrase would have been templated in, we instead randomly chose 2-4 words from the set {"folklore", "common sense", "mythology", "culture"} and inserted them in a random order (e.g. "use information from common sense, mythology, and folklore").
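This mitigation is easy to state in code. A direct sketch of the sampling described above, using only the standard library (the function name is ours):

```python
import random

KNOWLEDGE_SOURCES = ["folklore", "common sense", "mythology", "culture"]


def knowledge_instruction() -> str:
    """Return a freshly shuffled variant of the templated instruction so the
    exact phrase never repeats verbatim within one prompt."""
    picked = random.sample(KNOWLEDGE_SOURCES, k=random.randint(2, 4))
    if len(picked) == 2:
        joined = f"{picked[0]} and {picked[1]}"
    else:
        joined = ", ".join(picked[:-1]) + f", and {picked[-1]}"
    return f"use information from {joined}"
```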
[Figure 4 screenshot: below the encounter summary, the DM asks "What are these three owlbears hunting?" and CALYPSO replies that the owlbears are opportunistic predators hunting deer, elk, and smaller woodland creatures, and that scarce prey could make them more aggressive toward unwary adventurers.]
Figure 4: Using the Focused Brainstorming interface to ask specific questions about an encounter. CALYPSO suggests reasons why the players might encounter the monsters and how they might act.
Effect of Hallucinations. We find that not all hallucinations are undesired. In many cases, the model suggests monster behaviors or appearances that are not explicitly written out in monster descriptions, such as the appearance of the blink dogs' fur in the example above. More drastic deviations, such as the model suggesting that a creature be given wings, were however undesired.
DMs often take creative liberty to synthesize sensible information that isn't included in the source material. As shown above, they expect their tools to do the same when necessary: while the Summarization interface was more conservative in ensuring it did not hallucinate any details, the Abstractive Understanding interface was better received even with its minor hallucinations. Since the DM acts as a curator of the model's output, the DM can choose which of the generations to accept.
6.2 Focused Brainstorming
In total, DMs used the focused brainstorming model in 71 encounters, comprising a total of 162 rounds of conversation. DMs used the brainstorming model in a number of diverse ways, which we qualitatively coded and tabulate in Table 1. Here, we discuss these use cases and some failure cases.
General and Specific Descriptions. The most common way DMs used the interface was to ask it for a high-level description of a given encounter and for specific descriptions of points in the encounter. Since our prompt included information on the setting and the monsters involved, the model was able to reference that information in its descriptions. Additionally, the conversational nature of the language model added to its context, so DMs could reference earlier ideas without having to repeat them. This allowed DMs to ask CALYPSO to simply "describe this scene" or "describe X" without having to specify additional details (P3, P8-10, P12, P16-20).
After presenting an idea to their players and seeing what part of the encounter the players interacted with, the DM was also able to ask follow-up questions to describe in detail the specific elements the players engaged with. For example, when running an encounter involving a ship's figurehead that had washed ashore, P3 first asked for a description of the figurehead. Then, when the players investigated it further, the DM followed up by asking for "a description about its construction, specifically how it was carved, and perhaps what D&D race crafted it." This allowed DMs to elaborate on specific parts of an encounter when they became relevant, rather than presenting a large amount of information up front.
However, DMs found that the model sometimes struggled to describe combat, and suggested that including more information about the combat state (similar to Zhu et al. (2023)) or map placement information could help generate more specific descriptions (P3, P9). Some DMs used these descriptions verbatim (P3, P8, P17), while others picked out particularly vivid phrases to use in a description of their own (P3, P8, P10, P12, P20). Others disagreed with the model's description and wrote their own instead (P13, P16, P18, P19).
Strategy. Another common use case for DMs was to ask the model for monsters' "motives, tactics, and who they might prioritize [in a fight]" (P8-9, P12-13, P19, P23). As discussed in Section 3 (Strategic Copilot), coming up with and sticking to strategies for each monster can be overwhelming, and DMs often use simplifications to manage their mental load. This use case allowed DMs to create more engaging fights with clearer paths to resolution by describing a creature's motive and the specific tactics the creature would use. For example, when a DM asked how a pack of ten wolves might approach a camping party, the model suggested having the wolves "circle around the camp, hiding behind trees and bushes [...] and wait until a member of the party is alone and vulnerable before striking, hoping to separate and weaken the group" (P8). As with the interactions around descriptions, these DMs did not always use the strategy presented by the model; sometimes they picked and chose interesting suggestions, while other times they chose a different approach.
Use Case | Description | Example
--- | --- | ---
General Descriptions | Asking the model to generate a high-level description of a scene and encounter. | "Describe this encounter from the player's perspective." (P8)
Specific Descriptions | Asking specific questions about parts of the encounter, often in response to player actions. | "Describe drinking games that the satyrs are taking part in that are so dangerous someone could get hurt doing them." (P17)
Strategy | Using the model to understand monster motives and get suggestions for their tactics. | "Why would a Displacer Beast Kitten leave the safety of its den if it believes an intruder is nearby?" (P12)
Making Decisions | Using the model to decide how the DM should run a given encounter. | "Should a diplomatic solution be possible for this encounter?" (P14)
List of Ideas | Generating a list of multiple ideas to build off of individually. | "give me encounter ideas" (P10); "...make up more [magic items] to make this encounter more interesting." (P19)
Table 1: A list of common ways DMs used the Focused Brainstorming interface.
Making Decisions. [...] short drop. The model would sometimes simply give feedback on both of the options without choosing one ("Both options have their merits depending on the tone and style of your game..."; P3) and sometimes give a more straightforward answer ("...would that revenant have a vengeance towards the party member?" / "Yes, absolutely..."; P12). DMs did not ask the model to come to a conclusive decision, suggesting that the model providing its "opinion" helped inspire the DM without the DM relying on it to run the encounter.
List of Ideas. In this use case, the DM simply asks the model for a list of ideas; for example, a list of magic items that sea-dwelling humanoids might have (P10). We believe the reasoning behind this use case is the same reason that makes random tables (as discussed in Section 2.1) a popular method of inspiration; however, compared to prewritten random tables, LLMs have the powerful capability of generating unique "random table" entries customized for specific contexts.
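As a sketch, this "list of ideas" pattern maps onto a single chat call that asks for a numbered list grounded in the encounter context. This is an illustration of the pattern, not CALYPSO's deployed prompt, and the function name is ours:

```python
import openai


def list_of_ideas(context: str, topic: str, n: int = 6) -> str:
    """Generate a context-specific 'random table', e.g. magic items that
    sea-dwelling humanoids might carry. Prompt wording is illustrative."""
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a brainstorming assistant for a D&D DM."},
            {"role": "user",
             "content": f"Encounter context:\n{context}\n\n"
                        f"List {n} ideas for {topic}, one per line."},
        ],
        temperature=1,
        top_p=0.95,
        frequency_penalty=0.3,
    )
    return resp["choices"][0]["message"]["content"]
```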
Failure Cases. The most common failure case was when DMs tried to invoke other tools (such as a dice-rolling or spell-search bot) available in the brainstorming chat. As the model responded to every message in the thread, it would also respond to the other tool's invocation and reply with a generic error message, or try to infer the other tool's output (e.g., "!check stealth" / "Abominable Yeti stealth check: 18", hallucinating a result while ignoring the output of an actual dice roller). In some cases, the DM attempted to upload an image, which the model was unable to view. Finally, as discussed in Section 6.1, the model sometimes hallucinated facts about creatures and rules. We believe multimodality (allowing the model to view images) and tool use (e.g., retrieving rules text, spell descriptions, or monster search results) to be interesting directions to explore in future work.
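The paper does not describe a fix for this, but a minimal guard of the following kind could keep the model from answering messages addressed to other bots; the command prefixes here are assumptions:

```python
COMMAND_PREFIXES = ("!", "/")  # prefixes claimed by other Discord bots (assumed)


def should_respond(message_content: str) -> bool:
    """Skip messages aimed at other tools, e.g. "!check stealth", so the
    LLM never hallucinates a dice roller's output in their place."""
    return not message_content.lstrip().startswith(COMMAND_PREFIXES)
```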
[...] In another case, the model insists that it is incapable of playing D&D, likely due to efforts to prevent the model from making claims of abilities it does not possess. Although generally infrequent, these artifacts suggest that domain-specific fine-tuning may improve models' performance.

6.3 Open-Domain Chat Baseline
Participants chatted with CALYPSO in 51 unique threads, comprising a total of 2,295 rounds of conversation. Compared to conversations with the AI in the Focused Brainstorming interface, these conversations lasted much longer (averaging 45.0 rounds per interaction versus the brainstorming interface's 2.3). Without the time pressure of an active game that the DM is responsible for, participants spent more time playing with the model and refining its responses: they generated high-level quest ideas (P3, P8, P12, P16) and character and location names (P3, P9, P19, P22), role-played specific characters from other games (P3, P9, P12, P16), and wrote fanfiction about events happening between their characters in the game (P3, P8, P9, P16, P21), among other non-D&D uses.
Failure Cases. The most common failure case was when DMs tried to invoke other tools (such as a dice-rolling or spell-search bot) available in the brainstorming chat. As the model responded to every message in the thread, it would also respond to the other tool's invocation and reply with a generic error message or try to infer the other tool's output (e.g., "!check stealth" / "Abominable Yeti stealth check: 18", hallucinating a result while ignoring the output of an actual dice roller). In some cases, the DM attempted to upload an image, which the model was unable to view. Finally, as discussed in Section 6.1, the model sometimes hallucinated facts about creatures and rules. We believe multimodality (allowing the model to view images) and tool use (e.g., retrieving rules text, spell descriptions, or searching monsters) would be interesting directions to explore in future work.
However, during a game of D&D, DMs did not have the luxury of time to iterate on responses for hours. Without CALYPSO's management of the game, DMs would have to spend many turns of conversation copying and pasting information to provide it to the LLM, taking attention away from the game and making the baseline implementation unsuitable for real-world adoption.
We believe this highlights the difference between synchronous and asynchronous systems and the importance of removing friction from AI-augmented user interfaces, as discussed in Section 4: while the human user may have the capability to supply an LLM with additional information, the time and computational burden should be on the synchronous system rather than the user.
We also find that certain artifacts of the model's training process influence its output. For example, the model would sometimes refuse to suggest (fantasy) races, likely due to efforts to reduce the potential for real-world racial bias.
# 7 Conclusions

In this paper, we present CALYPSO, a system of three LLM-powered interfaces that DMs could use to assist them in preparing and running focused monster encounters in
an established setting, and a large-scale study of how 71 D&D players incorporated CALYPSO into their gameplay. Through interviews with DMs, we established common themes and desires for AI-augmented DM tools, and used these motivations and iterative design to guide our development. In conclusion, we found that:
1. LLMs are capable brainstorming partners. DMs used CALYPSO to generate both low-fidelity ideas that they could grow using their own creative expression, and guided it to generate high-fidelity descriptions they could present to other players with only minor edits.
2. LLMs present thematic commonsense when prompted to. Having been trained on a large corpus containing D&D texts and discussions, works of fantasy literature, and descriptions of real-world creatures, CALYPSO was able to fill in gaps in the D&D literature by probing into thematically relevant common sense knowledge. However, we found that to access this trove of information, the LLM had to be explicitly prompted to do so.
3. LLMs assist, rather than replace, human DMs. CALYPSO was designed to aid a human DM while maintaining their creative agency. We find that human DMs use AI co-DMs to understand complex rules text, brainstorm interactions between non-player characters or monsters, and to surface suggestions that the DM can weave into a story to present to players without taking away from the pace of the game. Human creativity is an integral part of storytelling games like D&D, and it is important for future AI tools to always maintain the human's creative agency.
# A LLM Prompts
In this section, we provide the prompts used in the CALYPSO system. Generally, we make use of Markdown-style headers to divide sections of the prompt. For chat-based models, we annotate each message with the corresponding role (system, assistant, or user, as exposed in the ChatGPT API).
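To illustrate this convention, the following minimal sketch assembles role-annotated messages in the format the ChatGPT API accepts and sends them with the openai Python client; the model name and message contents are placeholder assumptions rather than CALYPSO's exact configuration.

```python
# Minimal sketch of role-annotated chat messages for the ChatGPT API.
# Model name and contents are placeholders, not CALYPSO's exact configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system", "content": "You are a creative D&D player and DM named Calypso."},
    {"role": "user", "content": "I'm running this D&D encounter: 2 Abominable Yetis."},
]

response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(response.choices[0].message.content)
```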
# A.1 Encounter Understanding
Summarization

Summarize the following D&D setting and monsters for a Dungeon Master's notes without mentioning game stats.
Setting
=======
<Setting description inserted here.>
Creatures
=========
<Name>
------
<Statistics and lore inserted here. If the encounter involves multiple creatures, repeat for each creature.>
Summary
=======
Abstractive Understanding

Your name is Calypso, and your job is to help the Dungeon Master with an encounter. Your task is to help the DM understand the setting and creatures as a group, focusing mainly on appearance and how they act. Especially focus on what makes each creature stand out.
Avoid mentioning game stats. You may use information from common sense, mythology, and culture. If there are multiple creatures, conclude by mentioning how they interact.
Encounter: <Encounter inserted here.>

The rest of the prompt follows as in the Summarization prompt above, beginning with the setting. If a monster did not have published lore, we inserted the string "Calypso, please provide the DM with information about the (monster name) using information from (folklore, common sense, mythology, and culture)" (see Section 6.1) in place of lore.
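A minimal sketch of how this lore fallback might slot into assembling a creature section of the prompt follows; the helper and data shapes are our own illustration, not CALYPSO's actual code.

```python
# Hypothetical assembly of one creature section for the A.1 prompts,
# including the lore fallback described above (not CALYPSO's actual code).
def creature_section(name: str, stats: str, lore: str | None) -> str:
    if lore is None:
        # Fallback used when a monster has no published lore (see Section 6.1).
        lore = (f"Calypso, please provide the DM with information about the "
                f"{name} using information from folklore, common sense, "
                f"mythology, and culture.")
    return f"{name}\n------\n{stats}\n{lore}"
```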
# A.2 Focused Brainstorming
SYSTEM: You are a creative D&D player and DM named Calypso.
Avoid mentioning game stats. You may use information from common sense, mythology, and culture.
USER: I'm running this D&D encounter: <Encounter inserted here.>
<Setting and creatures inserted here, in the same format as Abstractive Understanding.>
Your job is to help brainstorm some ideas for the encounter.

If the DM used the Encounter Understanding interface before starting a brainstorming thread, we add an additional message to the prompt:

USER: Here's what I have so far: <Summary generated by Encounter Understanding inserted here.>

This allows the DM to reference ideas proposed by CALYPSO in its summary without having to repeat the entire message, aiding continuity.
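Putting the pieces together, a sketch of how this message list might be assembled, including the optional summary turn, is shown below; the function and variable names are illustrative assumptions rather than CALYPSO's actual implementation.

```python
# Hypothetical assembly of the Focused Brainstorming messages, mirroring the
# prompt structure described above (not CALYPSO's actual implementation).
def brainstorm_messages(encounter: str, setting_and_creatures: str,
                        summary: str | None = None) -> list[dict]:
    messages = [
        {"role": "system",
         "content": "You are a creative D&D player and DM named Calypso.\n"
                    "Avoid mentioning game stats. You may use information from "
                    "common sense, mythology, and culture."},
        {"role": "user",
         "content": f"I'm running this D&D encounter: {encounter}\n\n"
                    f"{setting_and_creatures}\n\n"
                    "Your job is to help brainstorm some ideas for the encounter."},
    ]
    if summary is not None:
        # Added only when the DM used Encounter Understanding first.
        messages.append({"role": "user",
                         "content": f"Here's what I have so far: {summary}"})
    return messages
```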
# Acknowledgments
Thank you to the Northern Lights Province Discord server for playing with us and being so enthusiastic about AI and D&D! Thank you to the NLP server staff, friends and players who helped us write rules, settings, game mechanics, and manage so many players: Ryan Crowley, Nicki Dulmage-Bekker, @ephesia, @lyra.kat, and Joseph Keen. Finally, thank you to D&D Beyond for providing us with access to monster information and game materials.
# References

Acharya, D.; Mateas, M.; and Wardrip-Fruin, N. 2021. Interviews Towards Designing Support Tools for TTRPG Game Masters. In Mitchell, A.; and Vosmeer, M., eds., Interactive Storytelling, Lecture Notes in Computer Science, 283–287. Cham: Springer International Publishing. ISBN 978-3-030-92300-6.

Akoury, N.; Wang, S.; Whiting, J.; Hood, S.; Peng, N.; and Iyyer, M. 2020. STORIUM: A Dataset and Evaluation Platform for Machine-in-the-Loop Story Generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 6470–6484. Online: Association for Computational Linguistics.

Amman, K. 2019. The Monsters Know What They're Doing. New York, NY: Gallery Books. ISBN 9781982122669.

Ammanabrolu, P.; Cheung, W.; Tu, D.; Broniec, W.; and Riedl, M. 2020. Bringing Stories Alive: Generating Interactive Fiction Worlds. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 16(1): 3–9.
Bergström, K. 2011. Framing Storytelling with Games. In Si, M.; Thue, D.; André, E.; Lester, J. C.; Tanenbaum, T. J.; and Zammitto, V., eds., Interactive Storytelling, Lecture Notes in Computer Science, 170–181. Berlin, Heidelberg: Springer. ISBN 978-3-642-25289-1.

Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; Agarwal, S.; Herbert-Voss, A.; Krueger, G.; Henighan, T.; Child, R.; Ramesh, A.;
Ziegler, D.; Wu, J.; Winter, C.; Hesse, C.; Chen, M.; Sigler, E.; Litwin, M.; Gray, S.; Chess, B.; Clark, J.; Berner, C.; McCandlish, S.; Radford, A.; Sutskever, I.; and Amodei, D. 2020. Language Models are Few-Shot Learners. In Larochelle, H.; Ranzato, M.; Hadsell, R.; Balcan, M.; and Lin, H., eds., Advances in Neural Information Processing Systems, volume 33, 1877–1901. Curran Associates, Inc.

Calderwood, A.; Qiu, V.; Gero, K. I.; and Chilton, L. B. 2020. How Novelists Use Generative Language Models: An Exploratory User Study. In International Conference on Intelligent User Interfaces (IUI) Workshops. Cagliari, Italy: ACM.

Callison-Burch, C.; Singh Tomar, G.; Martin, L. J.; Ippolito, D.; Bailis,
S.; and Reitter, D. 2022. Dungeons and Dragons as a Dialogue Challenge for Artificial Intelligence. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 9379–9393. Abu Dhabi, United Arab Emirates: Association for Computational Linguistics.

Chung, J. J. Y.; Kim, W.; Yoo, K. M.; Lee, H.; Adar, E.; and Chang, M. 2022. TaleBrush: Sketching Stories with Generative Pretrained Language Models. In CHI Conference on Human Factors in Computing Systems, 1–19. New Orleans, LA, USA: ACM. ISBN 978-1-4503-9157-3.

Coenen, A.; Davis, L.; Ippolito, D.; Reif, E.; and Yuan, A. 2021. Wordcraft: a Human-AI Collaborative Editor for Story Writing. In First Workshop on Bridging Human-Computer Interaction and Natural Language Processing at EACL 2021. Association for Computational Linguistics.
Crawford, J.; Mearls, M.; and Perkins, C. 2018. D&D Basic Rules. Renton, WA: Wizards of the Coast.

Crawford, J.; Perkins, C.; and Wyatt, J. 2014. Dungeon Master's Guide. Renton, WA: Wizards of the Coast.

D&D Beyond. 2017. dndbeyond.com/.
dScryb. 2020. dScryb. https://dscryb.com/.
Foundry Gaming, LLC. 2019. Foundry Virtual Tabletop. https://foundryvtt.com/.

Gygax, G.; and Arneson, D. 1974. Dungeons & Dragons.

Holtzman, A.; Buys, J.; Du, L.; Forbes, M.; and Choi, Y. 2020. The Curious Case of Neural Text Degeneration. In International Conference on Learning Representations.

Ippolito, D.; Yuan, A.; Coenen, A.; and Burnam, S. 2022. Creative Writing with an AI-Powered Writing Assistant: Perspectives from Professional Writers. ArXiv:2211.05030 [cs].
Kreminski, M.; Dickinson, M.; Wardrip-Fruin, N.; and Mateas, M. 2022. Loose Ends: A Mixed-Initiative Creative Interface for Playful Storytelling. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 18(1): 120–128.

Lin, Z.; Agarwal, R.; and Riedl, M. 2022. Creative Wand: A System to Study Effects of Communications in Co-Creative Settings. AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 18(1): 45–52.

Louis, A.; and Sutton, C. 2018. Deep Dungeons and Dragons: Learning Character-Action Interactions from Role-Playing Game Transcripts. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), Volume 2 (Short Papers), 708–713. New Orleans, Louisiana: Association for Computational Linguistics.
Martin, L. J.; Sood, S.; and Riedl, M. O. 2018. Dungeons and DQNs: Toward Reinforcement Learning Agents that Play Tabletop Roleplaying Games. In Wu, H.-Y.; Si, M.; and Jhala, A., eds., Joint Workshop on Intelligent Narrative Technologies and Workshop on Intelligent Cinematography and Editing (INT-WICED). Edmonton, AB, Canada: http://ceur-ws.org.
Newman, P.; and Liu, Y. 2022. Generating Descriptive and Rules-Adhering Spells for Dungeons & Dragons Fifth Edition. In Proceedings of the 9th Workshop on Games and Natural Language Processing within the 13th Language Resources and Evaluation Conference, 54–60. Marseille, France: European Language Resources Association.
OpenAI. 2022. Introducing ChatGPT. https://openai.com/blog/chatgpt.
Parry, W. T.; and Hacker, E. A. 1991. Aristotelian logic. Albany, NY: State University of New York Press. ISBN 9780791406892.

Perez, M. R. B.; Eisemann, E.; and Bidarra, R. 2021. A Synset-Based Recommender Method for Mixed-Initiative Narrative World Creation. In Mitchell, A.; and Vosmeer, M., eds., Interactive Storytelling, Lecture Notes in Computer Science, 13–28. Cham: Springer International Publishing. ISBN 978-3-030-92300-6.

Perkins, C.; Crawford, J.; Sims, C.; Thompson, R.; Lee, P.; Mearls, M.; Schwalb, R. J.; Sernett, M.; Townshend, S.; and Wyatt, J. 2014. Monster Manual. Renton, WA: Wizards of the Coast.

Rameshkumar, R.; and Bailey, P. 2020. Storytelling with Dialogue: A Critical Role Dungeons
and Dragons Dataset. In Annual Meeting of the Association for Computational Linguistics (ACL), 5121–5134. Online: Association for Computational Linguistics.

Roemmele, M.; and Gordon, A. S. 2015. Creative Help: A Story Writing Assistant. In Schoenau-Fog, H.; Bruni, L. E.; Louchart, S.; and Baceviciute, S., eds., Interactive Storytelling, volume 9445, 81–92. Cham: Springer International Publishing. ISBN 978-3-319-27035-7 978-3-319-27036-4. Series Title: Lecture Notes in Computer Science.

Roven, T. 2014. Tabletop Audio. https://tabletopaudio.com/.

Samuel, B.; Mateas, M.; and Wardrip-Fruin, N. 2016. The Design of Writing Buddy: A Mixed-Initiative Approach Towards Computational Story Collaboration. In Nack, F.; and Gordon, A. S., eds., Interactive Storytelling, volume 10045,
388–396. Cham: Springer International Publishing. ISBN 978-3-319-48278-1 978-3-319-48279-8. Series Title: Lecture Notes in Computer Science.

Santiago, J. M., III; Parayno, R. L.; Deja, J. A.; and Samson, B. P. V. 2023. Rolling the Dice: Imagining Generative AI as a Dungeons & Dragons Storytelling Companion. ArXiv:2304.01860 [cs].

van Velsen, M.; Williams, J.; and Verhulsdonck, G. 2009. Table-Top Gaming Narratology for Digital Interactive Storytelling. In Iurgel, I. A.; Zagalo, N.; and Petta, P., eds., Interactive Storytelling, Lecture Notes in Computer Science, 109–120. Berlin, Heidelberg: Springer. ISBN 978-3-642-10643-9.

Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit,
J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention Is All You Need. arXiv:1706.03762.

Yang, D.; Zhou, Y.; Zhang, Z.; Jia, T.; Li, J.; and Lc, R. 2022. AI as an Active Writer: Interaction strategies with generated text in human-AI collaborative fiction writing. In Joint Proceedings of the ACM IUI Workshops 2022. Helsinki, Finland.

Yuan, Y.; Cao, J.; Wang, R.; and Yarosh, S. 2021. Tabletop Games in the Age of Remote Collaboration: Design Opportunities for a Socially Connected Game Experience. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–14. Yokohama, Japan: ACM. ISBN 978-1-4503-8096-6.
Zhou, P.; Zhu, A.; Hu, J.; Pujara, J.; Ren, X.; Callison-Burch, C.; Choi, Y.; and Ammanabrolu, P. 2023. I Cast Detect Thoughts: Learning to Converse and Guide with Intents and Theory-of-Mind in Dungeons and Dragons. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 11136–11155. Toronto, Canada: Association for Computational Linguistics.

Zhu, A.; Aggarwal, K.; Feng, A.; Martin, L.; and Callison-Burch, C. 2023. FIREBALL: A Dataset of Dungeons and Dragons Actual-Play with Structured Game State Information. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 4171–4193. Toronto, Canada: Association for Computational Linguistics.

Zhu, A.; and D&D Beyond. 2016. Avrae. https://avrae.io/.
# CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes
Mark Liffiton, [email protected], Illinois Wesleyan University, Bloomington, Illinois, USA
Brad Sheese, [email protected], Illinois Wesleyan University, Bloomington, Illinois, USA
Jaromir Savelka, [email protected], Carnegie Mellon University, Pittsburgh, Pennsylvania, USA
Paul Denny, [email protected], The University of Auckland, Auckland, New Zealand
# ABSTRACT

Computing educators face significant challenges in providing timely support to students, especially in large class settings. Large language models (LLMs) have emerged recently and show great promise for providing on-demand help at a large scale, but there are concerns that students may over-rely on the outputs produced by these models. In this paper, we introduce CodeHelp, a novel LLM-powered tool designed with guardrails to provide on-demand assistance to programming students without directly revealing solutions. We detail the design of the tool, which incorporates a number of useful features for instructors, and elaborate on the pipeline of prompting strategies we use to ensure generated outputs are suitable for students. To evaluate CodeHelp, we deployed it in a first-year computer and data science course with 52 students and collected student interactions over a 12-week period. We examine students' usage patterns and perceptions of the tool, and we report reflections from the course instructor and a series of recommendations for classroom use. Our findings suggest that CodeHelp is well-received by students who especially value its availability and help with resolving errors, and that for instructors it is easy to deploy and complements, rather than replaces, the support that they provide to students.
# Large Language Models for Information Retrieval: A Survey

Abstract—As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
# OctoPack: Instruction Tuning Code Large Language Models

# ABSTRACT
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile COMMITPACK: 4 terabytes of Git commits across 350 programming languages. We benchmark COMMITPACK against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HUMANEVALPACK, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OCTOCODER and OCTOGEEX, achieve the best performance across HUMANEVALPACK among all permissive models, demonstrating COMMITPACK's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack. | 2308.07124#1 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
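The CommitPack record above pairs code changes with commit messages as instructions. A minimal sketch of how one commit could be flattened into an instruction-tuning sample; the field names (`message`, `old_contents`, `new_contents`) are illustrative, not the released dataset schema.

```python
# Turn a Git commit into an (instruction, input, output) training sample:
# the commit message is the instruction, the pre-commit code is the input,
# and the post-commit code is the target.
commit = {
    "message": "Change to sin() function with noise",
    "old_contents": "y_data = np.random.normal(0.0, 1.0, x_data.size)",
    "new_contents": "y_data = np.sin(x_data) + np.random.normal(0.0, 0.1, x_data.size)",
}

def to_instruction_sample(c: dict) -> dict:
    return {
        "instruction": c["message"],
        "input": c["old_contents"],
        "output": c["new_contents"],
    }

print(to_instruction_sample(commit))
```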
Text evaluation has historically posed significant challenges, often demanding substantial labor and time cost. With the emergence of large language models (LLMs), researchers have explored LLMs' potential as alternatives for human evaluation. While these single-agent-based approaches show promise, experimental results suggest that further advancements are needed to bridge the gap between their current effectiveness and human-level evaluation quality. Recognizing that best practices of human evaluation processes often involve multiple human annotators collaborating in the evaluation, we resort to a multi-agent debate framework, moving beyond single-agent prompting strategies. The multi-agent-based approach enables a group of LLMs to synergize with an array of intelligent counterparts, harnessing their distinct capabilities and expertise to enhance efficiency and effectiveness in handling intricate tasks. In this paper, we construct a multi-agent referee team called ChatEval to autonomously discuss and evaluate the quality of generated responses from different models on open-ended questions and traditional natural language generation (NLG) tasks. We derive insights and lessons from practical scenarios where humans instigate group discussions for brainstorming and propose different communication | 2308.07201#1 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
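The ChatEval record above describes a multi-agent referee team that debates over several rounds before judging. A minimal sketch of such a debate loop, assuming a hypothetical `ask_llm` stand-in for a real model call.

```python
# Multi-agent debate sketch: several LLM "referees" exchange opinions over a
# few rounds, then their verdicts are aggregated by majority vote.
from collections import Counter

def ask_llm(persona: str, question: str, transcript: list) -> str:
    # Placeholder: a real call would prompt an LLM with the persona,
    # the question, the candidate responses, and the debate so far.
    return f"{persona} votes A"  # toy fixed verdict

def debate(question: str, personas: list, rounds: int = 2) -> str:
    transcript = []
    for _ in range(rounds):
        for p in personas:
            transcript.append(ask_llm(p, question, transcript))
    # Aggregate the final token of each message as that referee's vote.
    votes = Counter(msg.split()[-1] for msg in transcript)
    return votes.most_common(1)[0][0]

print(debate("Which response is better, A or B?",
             ["critic", "psychologist", "general public"]))
```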
2308.06921 | 2 | unlikely to be exhaustive. Thus, there is great need for scalable approaches for providing immediate, high-quality support to students who are learning to program.
Large language models (LLMs) have recently garnered considerable interest due to their capabilities for generating human-like text in a wide array of contexts, including computing education [27]. There, LLMs have shown great potential for generating resources such as programming exercises, code explanations and model solutions [11]. Recent work has even shown that LLM-generated explanations of code are perceived as more useful to students than explanations produced by their peers [20]. Thus, the prospect of using LLMs to produce real-time, on-demand help for students appears promising. However, a common concern is that students may rely too heavily on the outputs produced by such models, especially if they can be used to generate solutions directly [1]. Related concerns around student over-reliance on LLM-based tools are common in educational settings [16]. Indeed, when OpenAI recently released the widely publicised GPT-4 model, they showcased the example of a "socratic" tutor, highlighting how the model could be steered away from revealing solutions directly to the user1.
CCS CONCEPTS • Social and professional topics → Computer science education; Software engineering education; • Human-centered computing → Interactive systems and tools. | 2308.06921#2 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
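The CodeHelp record above centers on guardrails that keep the model from revealing full solutions. A minimal sketch of a guardrailed chat prompt follows; the wording is illustrative only, not CodeHelp's actual prompting pipeline.

```python
# Build a guardrailed help request in the common chat-message format
# accepted by most chat LLM APIs. The system message steers the model
# toward hints and explanations rather than complete solutions.
def build_help_prompt(language: str, code: str, error: str, question: str) -> list:
    system = (
        "You are a teaching assistant for an introductory programming course. "
        "Explain concepts and point out likely causes of the problem, but do "
        "NOT write out a complete solution or corrected code."
    )
    user = (
        f"Language: {language}\n"
        f"Code:\n{code}\n"
        f"Error: {error}\n"
        f"Question: {question}"
    )
    return [{"role": "system", "content": system},
            {"role": "user", "content": user}]

print(build_help_prompt("Python", "print(x)",
                        "NameError: name 'x' is not defined",
                        "Why does this fail?"))
```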
2308.07107 | 2 | Index Terms: Large Language Models; Information Retrieval; Query Rewrite; Rerank; Reader; Fine-tuning; Prompting
# 1 INTRODUCTION
needs of human beings. To fulfill the need for rapid acquisition of desired information, various information retrieval (IR) systems have been developed [1–4]. Prominent examples include search engines such as Google, Bing, and Baidu, which serve as IR systems on the Internet, adept at retrieving relevant web pages in response to user queries, and provide convenient and efficient access to information on the Internet. It is worth noting that IR extends beyond web page retrieval. In dialogue systems (chatbots) [1, 5–8], such as Microsoft Xiaoice [2], Apple Siri,1 and Google Assistant,2 IR systems play a crucial role in retrieving appropriate responses to user input utterances, thereby producing natural and fluent human-machine conversations. Similarly, in question-answering systems [3, 9], IR systems are employed to select relevant clues essential for addressing user questions effectively. In image search engines [4], IR systems excel at returning images that align with user input queries. Given the exponential growth of information, research and industry have become increasingly interested in the development of effective IR systems.
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
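The chunk above defines relevance as a query-document matching score. A minimal sketch using cosine similarity over bag-of-words counts; production systems would use BM25 or learned dense embeddings instead.

```python
# Score query-document relevance with cosine similarity over term counts.
import math
from collections import Counter

def cosine(q: str, d: str) -> float:
    qv, dv = Counter(q.lower().split()), Counter(d.lower().split())
    dot = sum(qv[t] * dv[t] for t in qv)
    norm = (math.sqrt(sum(v * v for v in qv.values()))
            * math.sqrt(sum(v * v for v in dv.values())))
    return dot / norm if norm else 0.0

docs = ["retrieval with sparse term matching", "dense neural retrieval models"]
query = "sparse retrieval"
# Return the best-matching document for the query.
print(max(docs, key=lambda d: cosine(query, d)))
```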
2308.07124 | 2 | [Figure 1: CommitPack pairs a code change (code before / code after) with its commit message as the instruction, e.g. "Change to sin() function with noise"; downstream tasks include Fixing Code and Explaining Code, with a model legend covering BLOOMZ, StarChat-β, StarCoder, CodeGeeX2, OctoGeeX, OctoCoder, InstructCodeT5+, and WizardCoder.] | 2308.07124#2 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
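The chunk above is the extraction residue of OctoPack's Figure 1; a hedged reconstruction of the before/after code pair it depicts, where the commit message "Change to sin() function with noise" serves as the instruction. The exact figure code may differ slightly.

```python
# Reconstructed before/after pair from the figure (assumes numpy/matplotlib).
import math
import numpy as np
import matplotlib.pyplot as plt

# Code Before: random noise only.
# x_data = np.linspace(-5, 5, 20)
# y_data = np.random.normal(0.0, 1.0, x_data.size)

# Code After: a sine curve with added noise.
x_data = np.linspace(-math.pi, math.pi, 30)
y_data = np.sin(x_data) + np.random.normal(0.0, 0.1, x_data.size)

plt.plot(x_data, y_data, 'o')
plt.show()
```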
2308.07201 | 2 | generation (NLG) tasks. We derive insights and lessons from practical scenarios where humans instigate group discussions for brainstorming and propose different communication strategies within ChatEval. Our experiments on two benchmark tasks illustrate that ChatEval delivers superior accuracy and correlation in alignment with human assessment. Furthermore, we find that the diverse role prompts (different personas) are essential in the multi-agent debate process; that is, utilizing the same role description in the prompt can lead to a degradation in performance. Our qualitative analysis also shows that ChatEval transcends mere textual scoring, offering a human-mimicking evaluation process for reliable assessments. Our code is available at https://github.com/chanchimin/ChatEval. | 2308.07201#2 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
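The chunk above reports that diverse role prompts (personas) are essential to the debate, while identical role descriptions degrade performance. A minimal sketch of distinct persona system prompts; the persona texts are illustrative, not ChatEval's released prompts.

```python
# Assign each debating agent a distinct persona via its system prompt.
PERSONAS = {
    "critic": "You are a critic who scrutinizes responses for flaws in logic.",
    "news_author": "You are a news author who values factual, concise writing.",
    "psychologist": "You are a psychologist attentive to tone and empathy.",
}

def persona_prompt(name: str, question: str) -> str:
    return f"{PERSONAS[name]}\nEvaluate the candidate answers to: {question}"

for name in PERSONAS:
    print(persona_prompt(name, "Summarize the article."))
```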
2308.06921 | 3 | CCS CONCEPTS • Social and professional topics → Computer science education; Software engineering education; • Human-centered computing → Interactive systems and tools.
KEYWORDS Intelligent tutoring systems, Intelligent programming tutors, Programming assistance, Novice programmers, Natural language interfaces, Large language models, Guardrails
1 INTRODUCTION AND MOTIVATION As student interest in programming continues to grow and class sizes expand, educators face significant challenges in providing effective and timely support to all students. Traditional approaches of offering on-demand expert help do not scale well in very large settings, and not all students feel comfortable approaching an instructor or a teaching assistant for help [13]. Similarly, authoring static hints or responses to commonly encountered issues that can be presented to students needing help is both time intensive and | 2308.06921#3 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 3 | The core function of an IR system is retrieval, which aims to determine the relevance between a user-issued query and the content to be retrieved, including various types of information such as texts, images, music, and more. For the scope of this survey, we concentrate solely on reviewing text retrieval systems, in which query-document relevance is commonly measured by their matching score.3 Given that IR systems operate on extensive repositories, the efficiency of retrieval algorithms becomes of paramount importance. To improve the user experience, the retrieval performance is enhanced from both the upstream (query reformulation) and downstream (reranking and reading) perspectives. As an upstream technique, query reformulation is designed to refine user queries so that they are more effective at retrieving relevant documents [10, 11]. With the recent surge in the popularity of conversational search, this technique has received increasing attention. On the downstream side, reranking approaches are developed to further adjust the document ranking [12–14]. In contrast to the retrieval stage, reranking is performed only on a limited set of relevant documents, already retrieved by the retriever. Under this circumstance, | 2308.07107#3 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
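The chunk above explains why reranking can afford costlier models: it only sees the retriever's top candidates rather than the whole corpus. A minimal sketch of that two-stage retrieve-then-rerank design, with toy scoring functions standing in for real sparse retrievers and cross-encoders.

```python
# Two-stage search: a cheap retriever narrows the corpus to top-k candidates,
# then a costlier reranker reorders only that small set.
def cheap_score(query: str, doc: str) -> int:
    return len(set(query.split()) & set(doc.split()))

def expensive_score(query: str, doc: str) -> float:
    # In practice: a cross-encoder or LLM judging the (query, doc) pair.
    return cheap_score(query, doc) + 0.1 * doc.count(query.split()[0])

def search(query: str, corpus: list, k: int = 3) -> list:
    candidates = sorted(corpus, key=lambda d: -cheap_score(query, d))[:k]
    return sorted(candidates, key=lambda d: -expensive_score(query, d))

corpus = ["query rewriting for search", "reranking retrieved documents",
          "reading retrieved documents", "unrelated text"]
print(search("retrieved documents", corpus, k=2))
```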
2308.07201 | 3 | # INTRODUCTION
Evaluating the quality of text generated by language models or written by humans has long been a challenging endeavor, consistently garnering substantial attention (Celikyilmaz et al., 2020). Traditional methodologies predominantly rely on human annotation of texts (Callison-Burch, 2009), an approach considered overly demanding in terms of time and cost. Automatic evaluation metrics based on n-grams, such as Rouge (Lin, 2004), BLEU (Papineni et al., 2002), and METEOR (Banerjee & Lavie, 2005), have been proposed to tackle this issue (Kondrak, 2005). However, these methods have been shown to exhibit a relatively weak correlation with human judgments, particularly in the context of tasks involving open-ended generation or requiring domain-specific expertise (Novikova et al., 2017).
Recent advancements in the field of natural language processing have led to the emergence of billion-parameter scale LLMs, such as GPT-3 (Brown et al., 2020). These LLMs have demon- (*Corresponding author. Email: [email protected]) | 2308.07201#3 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
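The chunk above notes that n-gram metrics such as BLEU and ROUGE correlate weakly with human judgments on open-ended generation. A minimal illustration of why: unigram precision rewards only surface overlap, so a faithful paraphrase scores near zero.

```python
# Clipped unigram precision, the simplest ingredient of BLEU-style metrics.
from collections import Counter

def unigram_precision(candidate: str, reference: str) -> float:
    cand = candidate.lower().split()
    ref = Counter(reference.lower().split())
    hits = sum(min(c, ref[t]) for t, c in Counter(cand).items())
    return hits / len(cand)

reference = "the cat sat on the mat"
print(unigram_precision("the cat sat on the mat", reference))        # 1.0
print(unigram_precision("a feline rested upon the rug", reference))  # ~0.17, though the meaning matches
```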
In this paper we introduce CodeHelp, an LLM-powered tool for generating real-time help for programming and computer science students. A key contribution of CodeHelp is its use of robust "guardrails" that are specifically designed to not reveal solutions directly while helping students resolve their issues, thus mitigating the over-reliance trap that direct use of LLMs may cause. We describe the design of the CodeHelp tool and elaborate on the LLM prompting strategies that we use to generate outputs that guide students towards a solution without producing answers directly. We also discuss the tool's useful features for instructors, including the ability to observe, summarise, and review how their students engage with it. To explore its potential, we deployed CodeHelp in a first-year computer- and data-science course with 52 students and monitored its usage over a 12-week period. We investigate when and how frequently students engaged with CodeHelp, what types of help they request, and how useful they found the tool. To date, there has been significant interest in the computing education literature focusing on the accuracy of LLMs, the types of resources they can generate, and comparative analyses involving historical student data [11]. To our knowledge, this work represents the first evaluation of an always-available LLM-powered teaching assistant with
1. https://openai.com/research/gpt-4 | 2308.06921#4 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 4 | to the retrieval stage, reranking is performed only on a limited set of relevant documents, already retrieved by the retriever. Under this circumstance, the emphasis is placed on achieving higher performance rather than maintaining high efficiency, allowing for the application of more complex approaches in the reranking process. Additionally, reranking can accommodate other specific requirements, such as personalization [15–18] and diversification [19–22]. Following the retrieval and reranking stages, a reading component is incorporated to summarize the retrieved documents and deliver a concise document to users [23, 24]. While traditional IR systems typically require users to gather and organize relevant information themselves, the reading component is an integral part of new IR systems such as New | 2308.07107#4 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 4 | # OctoPack: Instruction Tuning Code Large Language Models
# 1 INTRODUCTION
Finetuning large language models (LLMs) on a variety of language tasks explained via instructions (instruction tuning) has been shown to improve model usability and general performance (Wei et al., 2022; Sanh et al., 2022; Min et al., 2022; Ouyang et al., 2022). The instruction tuning paradigm has also proven successful for models trained on visual (Liu et al., 2023a; Li et al., 2023a), audio (Zhang et al., 2023b) and multilingual (Muennighoff et al., 2022b; Wang et al., 2022b) data. | 2308.07124#4 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 4 | strated remarkable capabilities across diverse downstream tasks, presenting new opportunities for text quality evaluation using such models. Moreover, various training paradigms have been proposed to endow LLMs with the ability to accomplish tasks in a zero-shot manner and better adhere to human-provided instructions (Ouyang et al., 2022; Sanh et al., 2021; Wei et al., 2021). These advancements facilitate the prompting of LLMs to evaluate generated text, effectively simulating human evaluators in the assessment process.
In view of the impressive text understanding and instruction-following capabilities of recent LLMs, a body of literature (Liu et al., 2023b; Chiang & Lee, 2023; Gao et al., 2023; Shen et al., 2023) has adopted LLM as an evaluator to assess the quality of responses to open-ended questions or traditional NLG tasks, including dialogue response generation and summarization. This methodology is dubbed LLM-as-a-judge (Zheng et al., 2023). Findings from these studies indicate that LLMs can mimic human behavior and provide evaluations that correspond with human judgments, revealing a potentially scalable and transparent alternative to costly and laborious human evaluations.
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
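The chunk above describes prompting LLMs to evaluate generated text in a zero-shot fashion. A minimal sketch of an LLM-as-a-judge pairwise prompt; the prompt wording is illustrative, and `call_llm` is a hypothetical placeholder for a real API call.

```python
# Zero-shot pairwise judging: ask the model to pick the better answer.
def build_judge_prompt(question: str, answer_a: str, answer_b: str) -> str:
    return (
        "You are an impartial judge. Compare the two answers to the question "
        "and reply with exactly 'A', 'B', or 'TIE'.\n"
        f"Question: {question}\nAnswer A: {answer_a}\nAnswer B: {answer_b}"
    )

def call_llm(prompt: str) -> str:
    return "A"  # placeholder verdict; a real system queries an LLM here

verdict = call_llm(build_judge_prompt("What causes tides?",
                                      "The Moon's gravity.", "Wind."))
print(verdict)
```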
2308.07107 | 5 | All authors are from Gaoling School of Artificial Intelligence and School of Information, Renmin University of China. Contact e-mail: [email protected], [email protected]
1. Apple Siri, https://www.apple.com/siri/ 2. Google Assistant, https://assistant.google.com/
3. The term "document" will henceforth refer to any text-based content subject to retrieval, including both long articles and short passages.
Fig. 1. Overview of existing studies that apply LLMs to IR. (1) LLMs can be used to enhance traditional IR components, such as query rewriter, retriever, reranker, and reader. (2) LLMs can also be used as search agents to perform multiple IR tasks.
Bing, streamlining users' browsing experience and saving valuable time.
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 5 | In this work, we instruction tune LLMs on the coding modality. While Code LLMs can already be indirectly instructed to generate desired code using code comments, this procedure is brittle and does not work when the desired output is natural language, such as explaining code. Explicit instruction tuning of Code LLMs may improve their steerability and enable their application to more tasks. Concurrently to our work, three instruction tuned Code LLMs have been proposed: PanGu-Coder2 (Shen et al., 2023), WizardCoder (Luo et al., 2023) and InstructCodeT5+ (Wang et al., 2023c). These models rely on more capable and closed models from the OpenAI API1 to create their instruction training data. This approach is problematic as (1) closed-source APIs keep changing and have unpredictable availability (Pozzobon et al., 2023; Chen et al., 2023a), (2) it relies on the assumption that a more capable model exists, (3) it can reinforce model hallucination (Gudibande et al., 2023) and (4), depending on legal interpretation, OpenAI's terms of use2 forbid such models: "...You may not...use output from | 2308.07124#5 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 5 | While a single powerful LLM can already tackle various missions, emerging studies suggest that multiple LLMs can further improve one another through debate and cooperation (Li et al., 2023a; Liang et al., 2023). By incorporating multiple LLMs into an integrated group and designing specific interaction mechanisms, different LLMs can engage in proposing and deliberating unique responses and thought processes across several rounds. This approach leads to enhanced factuality of generated responses (Du et al., 2023) and improvement in the completion of arduous tasks (Li et al., 2023a; Qian et al., 2023). Furthermore, the multi-agent group also addresses and mitigates the Degeneration-of-Thought (DOT) problem (Liang et al., 2023). | 2308.07201#5 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.06921 | 6 | 2 RELATED WORK Providing effective automated assistance to novice programmers has been a longstanding research problem. Considerable attention has been devoted to the development and evaluation of so-called intelligent tutoring systems for programming, sometimes referred to as intelligent programming tutors (IPT). Such systems vary greatly and contain a large range of supplementary features [8]. Most of the work has been devoted to various approaches for the generation of effective hints [21, 22] and feedback [18]. The primary difference between CodeHelp and previous work in this area is that CodeHelp is able to respond to a far wider range of requests and requires little or no configuration or setup for any specific class context due to its underlying use of LLMs. Prior to the development and use of LLMs, similar tools had to rely on various rule-based and machine learning-based natural language processing techniques that were much more specialized and, hence, brittle. For example, they could only support a single programming language or type of support request. CodeHelp supports any programming language with sufficient coverage in the underlying LLM's training set. In particular, programming languages that are commonly used in computing education are covered very well. CodeHelp can also respond effectively to a wide variety of request types. | 2308.06921#6 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 6 | Bing,4 streamlining users' browsing experience and saving valuable time.
The trajectory of IR has traversed a dynamic evolution, transitioning from its origins in term-based methods to the integration of neural models. Initially, IR was anchored in term-based methods [25] and Boolean logic, focusing on keyword matching for document retrieval. The paradigm gradually shifted with the introduction of vector space models [26], unlocking the potential to capture nuanced semantic relationships between terms. This progression continued with statistical language models [27, 28], refining relevance estimation through contextual and probabilistic considerations. The influential BM25 algorithm [29] played an important role during this phase, revolutionizing relevance ranking by accounting for term frequency and document length variations. The most recent chapter in IR's journey is marked by the ascendancy of neural models [3, 30–32]. These models excel at capturing intricate contextual cues and semantic nuances, reshaping the landscape of IR. However, these neural models still face challenges such as data scarcity, interpretability, and the potential generation of plausible yet inaccurate responses. Thus, the evolution of IR continues to be a journey of balancing traditional strengths (such as the BM25 algorithm's high efficiency) with the remarkable capability (such as semantic understanding) brought about by modern neural architectures. (A minimal BM25 scoring sketch follows this record.) | 2308.07107#6 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
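The chunk in the record above singles out BM25 for ranking by term frequency with document-length normalization. As a concrete illustration, here is a minimal, self-contained BM25 scorer; the toy corpus, the query, and the common defaults k1 = 1.5 and b = 0.75 are illustrative assumptions rather than values taken from the survey.

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, corpus, k1=1.5, b=0.75):
    """Score one tokenized document against a query with the classic BM25 formula."""
    n_docs = len(corpus)
    avgdl = sum(len(doc) for doc in corpus) / n_docs  # average document length
    tf = Counter(doc_terms)
    score = 0.0
    for term in set(query_terms):
        df = sum(1 for doc in corpus if term in doc)  # document frequency
        if df == 0 or tf[term] == 0:
            continue
        idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))  # smoothed IDF
        # k1 controls term-frequency saturation; b controls length normalization.
        norm = tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(doc_terms) / avgdl))
        score += idf * norm
    return score

corpus = [doc.split() for doc in [
    "neural models capture contextual cues and semantic nuances",
    "bm25 ranking accounts for term frequency and document length",
]]
query = "term frequency ranking".split()
for doc in corpus:
    print(round(bm25_score(query, doc, corpus), 3), "<-", " ".join(doc))
```

The second document wins because it actually contains the query terms; a production system would compute the same quantities from an inverted index instead of scanning the corpus, which is where BM25's efficiency comes from.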
2308.07201 | 6 | In human evaluation processes, relying on a single perspective can introduce bias and instability in the results (Karpinska et al., 2021). Recognizing this, best practices often involve multiple human annotators collaborating in the evaluation (Van Der Lee et al., 2019). Drawing inspiration from this collaborative and iterative human evaluation approach, we propose ChatEval, a system that enables each agent to employ varied communication strategies in collaborative discussion, working towards formulating final judgments. Furthermore, to enrich the evaluation dynamics, every agent within ChatEval is endowed with a unique persona. This deliberate design ensures that each agent focuses on distinct perspectives or brings specific expertise to the table. By doing so, the collective evaluation benefits from a more comprehensive lens, capturing nuances and subtleties that a single perspective might overlook. We derive this idea primarily from the insight that "there are a thousand Hamlets in a thousand people's eyes", meaning that every person has their own interpretation or perspective, which is especially applicable to text evaluation. Indeed, these divergent perspectives shape the comprehensive and multifaceted assessment of Hamlet. Another underlying intuition of our work stems from renowned concepts in sociology and biology, including Collective Intelligence (Woolley et al., 2010) and Cognitive Synergy (Luppi et al., 2022), where multiple cognitive processes or systems interact and cooperate in a way that produces a combined effect greater than the sum of their separate effects. (An illustrative persona-prompt sketch follows this record.) | 2308.07201#6 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
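The record above argues that each ChatEval agent should carry a distinct persona. The sketch below shows one way to instantiate a shared prompt template with different role descriptions; the persona texts and the template wording are invented for illustration and are not ChatEval's actual prompts.

```python
# Hypothetical personas and template -- illustrative only, not ChatEval's prompts.
PERSONAS = {
    "Critic": "a sharp-eyed critic who checks factual accuracy and logical rigor",
    "Psychologist": "a psychologist who weighs empathy, tone, and helpfulness",
    "General Public": "an everyday reader who judges overall clarity and usefulness",
}

TEMPLATE = (
    "You are {role_description}.\n"
    "Question: {question}\n"
    "Response A: {response_a}\n"
    "Response B: {response_b}\n"
    "Other referees said:\n{chat_history}\n"
    "State which response is better, and why."
)

def build_prompt(persona, question, response_a, response_b, chat_history=""):
    """Fill the shared template with one agent's role description."""
    return TEMPLATE.format(
        role_description=PERSONAS[persona],
        question=question,
        response_a=response_a,
        response_b=response_b,
        chat_history=chat_history or "(nothing yet)",
    )

print(build_prompt("Critic",
                   "How can I improve my time management skills?",
                   "Make a to-do list.",
                   "Track your time for a week, then block your calendar."))
```

Because every agent views the same pair of responses through a different lens, the group discussion can surface nuances that, as the chunk argues, a single judge might overlook.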
2308.06921 | 7 | Chatbots provide a convenient interaction experience and have previously been deployed as intelligent assistants in programming education contexts. For example, Carreira et al. developed Pyo, a chatbot designed to help novice programmers in online courses by providing definitions of concepts, guiding them through errors, and assisting with exercises [4]. Although the goal of Pyo is very similar to that of CodeHelp, a notable distinction is that Pyo is rule-based with predetermined topics and conversation flows, while CodeHelp is far more flexible. In similar work, Konecki et al. proposed a rule-based intelligent assistant for programming education aiming to increase engagement, motivation and learning time [19]. Although the primary focus of CodeHelp is to assist students in resolving their issues when programming, we expect it may influence engagement and motivation as well. | 2308.06921#7 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 7 | Large language models (LLMs) have recently emerged as transformative forces across various research fields, such as natural language processing (NLP) [33–35], recommender systems [36–39], finance [40], and even molecule discovery [41]. These cutting-edge LLMs are primarily based on the Transformer architecture and undergo extensive pre-training on diverse textual sources, including web pages, research articles, books, and code. As their scale continues to expand (including both model size and data volume), LLMs have demonstrated remarkable advances in their capabilities. On the one hand, LLMs have exhibited unprecedented proficiency in language understanding and generation, resulting in responses that are more human-like and better align with human intentions. On the other hand, the larger LLMs have shown impressive emergent abilities
4. New Bing, https://www.bing.com/new | 2308.07107#7 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 7 | We focus on more permissively licensed data and avoid using a closed-source model to generate synthetic data. We benchmark four popular sources of code instruction data: (1) xP3x (Muennighoff et al., 2022b), which contains data from common code benchmarks, (2) Self-Instruct (Wang et al., 2023a) data we create using a permissive Code LLM, (3) OASST (Köpf et al., 2023), which contains mostly natural language data and few code examples, and (4) COMMITPACK, our new 4TB dataset of Git commits. Instruction tuning's primary purpose is to expand models' generalization abilities to a wide variety of tasks and settings. Thus, we extend the code synthesis benchmark, HumanEval (Chen et al., 2021; Zheng et al., 2023), to create HUMANEVALPACK: a code benchmark covering code synthesis, code repair, and code explanation across six programming languages. | 2308.07124#7 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07201 | 7 | To summarize, the main contributions of our work are as follows:
1. We propose a multi-agent-based framework called ChatEval that aligns better with human preferences compared with single-agent-based approaches as depicted in Figure 1.
2. We propose various communication strategies and demonstrate the necessity of diverse role prompts in multi-agent debate scenarios.
3. It's designed to be both composable and scalable, enabling researchers to implement their unique communication strategies easily. We hope this contributes to advancing research in the field of communicative agents and beyond.
# 2 METHODOLOGY
In this section, we elaborate on the principal components of ChatEval, including debater agents, diverse role specification, and communication strategy, and provide a detailed overview of each component's role and functionality.[1]
[1] Our code repository is built on top of https://github.com/OpenBMB/AgentVerse.
| 2308.07201#7 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
2308.06921 | 8 | Python-Bot [24] and RevBot [25] are examples of AI-based systems that help students understand Python syntax and practice past exam questions. Here, the focus is not on resolving issues, as with CodeHelp, but rather on helping students understand particular topics and testing their knowledge. Duckbot is another chatbot designed to enhance help-seeking between students and teaching staff in programming tutorials [29]. Walden et al. [34] developed a chatbot for teaching secure programming in PHP. Unlike many existing chatbot tools that have a narrow focus, CodeHelp leverages the power of LLMs to provide support across a wide variety of contexts involving various programming languages.
LLMs have been shown to exhibit remarkable performance on a broad range of tasks, including code generation [6]. Finnie-Ansley
| 2308.06921#8 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 8 | when dealing with complex tasks [42], such as generalization and reasoning skills. Notably, LLMs can effectively apply their learned knowledge and reasoning abilities to tackle new tasks with just a few task-specific demonstrations or appropriate instructions [43, 44]. Furthermore, advanced techniques, such as in-context learning, have significantly enhanced the generalization performance of LLMs without requiring fine-tuning on specific downstream tasks [34]. This breakthrough is particularly valuable, as it reduces the need for extensive fine-tuning while attaining remarkable task performance. Powered by prompting strategies such as chain-of-thought, LLMs can generate outputs with step-by-step reasoning, navigating complex decision-making processes [45]. Leveraging the impressive power of LLMs can undoubtedly improve the performance of IR systems. By incorporating these sophisticated language models, IR systems can provide users with more accurate responses, ultimately reshaping the landscape of information access and retrieval. (An illustrative chain-of-thought prompt sketch follows this record.) | 2308.07107#8 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
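The chunk above credits in-context learning and chain-of-thought prompting with unlocking step-by-step reasoning from just a few demonstrations. The snippet below assembles a one-shot chain-of-thought prompt as a minimal illustration; the demonstration text and the trigger phrase are generic examples, not prompts taken from the survey.

```python
# One-shot chain-of-thought prompt assembly; the demonstration is made up.
DEMONSTRATIONS = [
    ("Q: A basket holds 3 apples and 2 more are added. How many apples are there?",
     "A: Start with 3 apples. Adding 2 more gives 3 + 2 = 5. The answer is 5."),
]

def cot_prompt(question: str) -> str:
    """Prepend worked demonstrations, then ask the new question with a CoT trigger."""
    shots = "\n\n".join(f"{q}\n{a}" for q, a in DEMONSTRATIONS)
    return f"{shots}\n\nQ: {question}\nA: Let's think step by step."

print(cot_prompt("A shelf holds 4 books and 3 are removed. How many remain?"))
```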
2308.07124 | 8 | Instruction tuning StarCoder (Li et al., 2023b) on a filtered variant of COMMITPACK and OASST leads to our best model, OCTOCODER, which surpasses all other openly licensed models (Figure 1), but falls short of the much larger GPT-4 (OpenAI, 2023). GPT-4 is close to maximum performance on the code synthesis variant, notably with a pass@1 score of 86.6% on Python HumanEval. However, it performs significantly worse on the code fixing and explanation variants of HUMANEVALPACK, which we introduce. This suggests that the original HumanEval benchmark may soon cease to be useful due to models reaching close to the maximum performance. Our more challenging evaluation variants provide room for future LLMs to improve on the performance of the current state-of-the-art. (An illustrative pass@k computation follows this record.)
In summary, we contribute:
⢠COMMITPACK and COMMITPACKFT: 4TB of permissively licensed code commits across 350 programming languages for pretraining and a filtered variant containing high-quality code instructions for finetuning
⢠HUMANEVALPACK: A benchmark for Code LLM generalization, spanning three scenarios (Code Repair, Code Explanation, Code Synthesis) and 6 programming languages (Python, JavaScript, Java, Go, C++, Rust)
⢠OCTOCODER and OCTOGEEX: The best permissive Code LLMs | 2308.07124#8 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
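The record above reports pass@1 scores on HumanEval-style tasks. For reference, pass@k is conventionally computed with the unbiased estimator introduced alongside HumanEval (Chen et al., 2021): draw n samples per problem, count the c that pass the unit tests, and estimate pass@k = 1 - C(n-c, k) / C(n, k). A small sketch with made-up sample counts:

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate for n samples with c correct (Chen et al., 2021)."""
    if n - c < k:  # every size-k draw must contain at least one correct sample
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# Hypothetical numbers: 20 samples per problem, 9 of which pass the tests.
print(round(pass_at_k(20, 9, 1), 3))   # 0.45 -- reduces to c/n when k = 1
print(round(pass_at_k(20, 9, 10), 3))  # chance that at least one of 10 passes
```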
2308.07201 | 8 | [1] Our code repository is built on top of https://github.com/OpenBMB/AgentVerse.
[Figure 1 panels: a "Single-Agent method", in which one LLM-based agent alone decides which assistant response is better, versus a "Multi-Agent debate", in which several agents discuss before judging.]
Figure 1: When several referees participate in the evaluation process, they can discuss with each other and finally give a judgment that is better aligned with human annotators.
Debater Agents. Debater agents are one of the most significant components in our framework. We treat each individual LLM as an agent and ask it to generate its response from the given prompt. Responses from the other agents serve as the chat history, which is substituted into the chat-history slot of the prompt template. After configuring the agents, we then start the group debate, where each agent autonomously receives responses from the others and, in turn, delivers its own responses to them. It should be noted that the whole process does not require human intervention. (A minimal sketch of this debate loop follows this record.) | 2308.07201#8 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
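As a minimal sketch of the debate loop described in the record above -- each agent's prompt embeds the other agents' replies as chat history, and rounds proceed without human intervention -- the code below uses a stubbed `call_llm` in place of a real chat-completion API; the round count and role prompts are illustrative assumptions.

```python
def call_llm(prompt: str) -> str:
    """Stub standing in for a real chat-completion API call."""
    return f"[judgment informed by {prompt.count(chr(10))} lines of context]"

def debate(role_prompts: dict, num_rounds: int = 2) -> list:
    """Run an autonomous multi-round debate; returns the full transcript."""
    history = []  # everything said so far, shared by all agents
    for _ in range(num_rounds):
        for name, role_prompt in role_prompts.items():
            # Substitute the accumulated replies into the chat-history slot.
            prompt = role_prompt.replace(
                "{chat_history}", "\n".join(history) or "(nothing yet)")
            history.append(f"{name}: {call_llm(prompt)}")
    return history

roles = {
    "Referee-1": "You are a strict referee. Discussion so far:\n{chat_history}",
    "Referee-2": "You are a lenient referee. Discussion so far:\n{chat_history}",
}
for line in debate(roles):
    print(line)
```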
2308.06921 | 9 | et al. found that Codex (GitHub Copilot) outperforms typical students in CS1 programming exams [12]. Similarly, Savelka et al. found that GPT-4 comfortably passes diverse types of assessments from introductory and intermediate Python programming classes at the post-secondary education level [31]. Denny et al. evaluated Copilot on 166 CS1 coding problems and found that it successfully solves around half of these problems on its very first attempt, and that it solves 60% of the remaining problems if the problem description is reformulated appropriately [9]. Tian et al. evaluated ChatGPT as a programming assistant and found that it successfully handles typical programming challenges [33]. LLMs have also been applied to other computing education tasks, such as writing tests [5, 15], and helping novices learn how to craft effective prompts [10]. Moreover, LLMs have been employed to generate example explanations as scaffolding to help students learn how to understand and explain code themselves [20] and to generate programming exercises and code explanations [30]. This prior work demonstrates the capabilities and the flexibility of the LLMs that power CodeHelp. Despite their impressive performance at many | 2308.06921#9 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 9 | Initial efforts have been made to utilize the potential of LLMs in the development of novel IR systems. Notably, in terms of practical applications, New Bing is designed to improve the users' experience of using search engines by extracting information from disparate web pages and condensing it into concise summaries that serve as responses to user-generated queries. In the research community, LLMs have proven useful within specific modules of IR systems (such as retrievers), thereby enhancing the overall performance of these systems. Due to the rapid evolution of LLM-enhanced IR systems, it is essential to comprehensively review their most recent advancements and challenges.
Our survey provides an insightful exploration of the intersection between LLMs and IR systems, covering key perspectives such as query rewriters, retrievers, rerankers, and readers (as shown in Figure 1).5 We also include some recent studies that leverage LLMs as search agents to perform various IR tasks. This analysis enhances our understanding of LLMs' potential and limitations in advancing the IR field.
5. As yet, there has not been a formal definition for LLMs. In this paper, we mainly focus on models with more than 1B parameters. We also notice that some methods do not rely on such strictly defined LLMs, but due to their representativeness, we still include an introduction to them in this survey.
| 2308.07107#9 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 9 | • OCTOCODER and OCTOGEEX: The best permissive Code LLMs
# 2 COMMITPACK: CODE INSTRUCTION DATA
Prior work has shown that models can generalize to languages included in pretraining, but absent during instruction tuning (Muennighoff et al., 2022b). However, they also show that including such
[1] https://openai.com/blog/openai-api [2] https://openai.com/policies/terms-of-use
| 2308.07124#9 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
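COMMITPACK, per the record above, is built from Git commits that pair a human-written message with the code change it describes. Below is a minimal sketch of harvesting such (instruction, diff) pairs from a local repository via the `git` CLI; it omits the language detection, licensing, and quality filtering that the actual COMMITPACK pipeline applies.

```python
import subprocess

def git(repo: str, *args: str) -> str:
    """Run a git command inside `repo` and return its stdout."""
    return subprocess.run(["git", "-C", repo, *args],
                          capture_output=True, text=True, check=True).stdout

def commit_pairs(repo: str, max_commits: int = 50):
    """Yield (instruction, diff) pairs from the most recent commits."""
    for sha in git(repo, "log", f"-{max_commits}", "--pretty=format:%H").splitlines():
        message = git(repo, "show", "-s", "--format=%s", sha).strip()
        diff = git(repo, "show", "--format=", sha)  # diff only, message suppressed
        if message and diff:
            yield {"instruction": message, "diff": diff}

# Example (requires running inside a Git repository):
for pair in commit_pairs("."):
    print(pair["instruction"])
    print(pair["diff"][:200])
    break
```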
2308.07201 | 9 | Diverse Role Specification. As presented in Section 1, diverse role specification is necessary for the framework as well. Although all the agents share a common prompt template, we substitute the role description slot with diverse role prompts, specifying distinct personalities for different agents. We take inspiration from Wu et al. (2023) and formulate an analogous role description.
Communication Strategy. How to maintain the chat history is another significant issue in ChatEval. In our work, we use a more intuitive term for this: the communication strategy. In a nutshell, different communication strategies can be seen as different approaches to maintaining and manipulating the chat history. As shown in Figure 2, we primarily design three different communication strategies and illustrate them as follows:
1. One-By-One. During each round of the debate, the debater agents take turns in a set order to generate their responses based on the current observation. When it's time for a debater agent to respond, we directly concatenate what the preceding agents have said into its chat-history slot.
2. Simultaneous-Talk. Unlike the one-by-one strategy, we carry out an alternative communication strategy called simultaneous-talk, where debater agents are prompted to asynchronously generate responses in each iteration of the discussion to nullify the impact of the speaking order. (A sketch of both strategies follows this record.) | 2308.07201#9 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
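The two communication strategies in the record above differ only in when an agent's reply becomes visible to the others. A compact sketch of both history-update policies follows, under the semantics the chunk describes; `Agent.respond` is a stub standing in for an LLM call.

```python
class Agent:
    def __init__(self, name: str):
        self.name = name

    def respond(self, visible_history: list) -> str:
        # Stub for an LLM call conditioned on the history this agent can see.
        return f"{self.name}: my view, given {len(visible_history)} prior messages"

def one_by_one_round(agents, history):
    """Agents speak in a fixed order; each sees what earlier speakers just said."""
    for agent in agents:
        history.append(agent.respond(history))
    return history

def simultaneous_talk_round(agents, history):
    """All agents respond to the same snapshot; replies are revealed afterwards."""
    replies = [agent.respond(history) for agent in agents]
    history.extend(replies)
    return history

agents = [Agent("Alice"), Agent("Bob"), Agent("Carol")]
print(one_by_one_round(agents, []))         # later speakers see 1, then 2 messages
print(simultaneous_talk_round(agents, []))  # all see 0: speaking order is nullified
```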
2308.06921 | 10 | and code explanations [30]. This prior work demonstrates the capabilities and the flexibility of the LLMs that power CodeHelp. Despite their impressive performance at many tasks, LLMs may not be as effective as human tutors in some domains. For instance, LLMs may struggle with certain types of programming multiple-choice questions [32] or certain types of coding exercises [31]. An empirical evaluation of GitHub Copilot's code suggestions revealed limitations in generating reliable code [23]. Pardos and Bhandari [26] compared learning gains from hints generated by LLMs and human tutors, finding that although both led to positive learning gains, human-generated hints were superior. They also found that only 70% of ChatGPT-generated hints were usable. Our vision for CodeHelp is that it will serve to augment existing instruction, providing students with another convenient and accessible avenue to seek support, rather than replacing human instructors or TAs. | 2308.06921#10 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 10 |
For this survey, we create a GitHub repository by collecting the relevant papers and resources about LLM4IR.⁶ We will continue to update the repository with newer papers. This survey will also be periodically updated according to the development of this area. We notice that there are several surveys for PLMs, LLMs, and their applications (e.g., AIGC or recommender systems) [46–52]. Among these, we highly recommend the survey of LLMs [52], which provides a systematic and comprehensive reference to many important aspects of LLMs. Compared with them, we focus on the techniques and methods for developing and applying LLMs for IR systems. In addition, we notice a perspective paper discussing the opportunity of IR when meeting LLMs [53]. It would be an excellent supplement to this survey regarding future directions.
The remaining part of this survey is organized as follows: Section 2 introduces the background for IR and LLMs. Sections 3, 4, 5, and 6 respectively review recent progress from the four perspectives of query rewriter, retriever, reranker, and reader, which are four key components of an IR system. Then, Section 8 discusses some potential directions in future research. Finally, we conclude the survey in Section 9 by summarizing the major findings.
# 2 BACKGROUND
## 2.1 Information Retrieval | 2308.07107#10 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 10 |
# OctoPack: Instruction Tuning Code Large Language Models
[Figure: distribution of commit types in CommitPack — New Features (25.57%), Refactoring/Code Cleanup (19.78%), Bug Fixes (19.02%), Testing (13.32%), Dependencies (5.38%), Configuration (4.61%), Release Management (4.14%), Documentation (3.93%), Build System/Tooling (1.30%), User Interface (0.88%), Logging/Instrumentation (0.62%), Formatting/Linting (0.40%), Deprecation (0.28%) — alongside a log-scale chart (100k–10M) of data volume.] | 2308.07124#10 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
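The OctoPack abstract above describes pairing code changes with the human-written commit messages that accompany them. Below is a minimal sketch of that recipe using plain `git` plumbing; the record fields `instruction`/`old_code`/`new_code` are illustrative assumptions, not CommitPack's actual schema.

```python
import subprocess

def commit_to_example(repo: str, sha: str, path: str) -> dict:
    """Turn one commit touching `path` into an instruction-tuning record."""
    def git(*args: str, check: bool = True) -> str:
        return subprocess.run(["git", "-C", repo, *args],
                              capture_output=True, text=True, check=check).stdout
    return {
        "instruction": git("log", "-1", "--format=%s", sha).strip(),  # commit message
        "old_code": git("show", f"{sha}^:{path}", check=False),       # empty if file is new
        "new_code": git("show", f"{sha}:{path}"),
    }

# Example (requires a real repository):
# record = commit_to_example(".", "HEAD", "README.md")
```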
3. Simultaneous-Talk-with-Summarizer. The main difference between this strategy and simultaneous-talk is that we additionally employ another LLM as a summarizer. At the end of each iteration of the debate, we prompt this extra LLM to summarize the messages conveyed so far and concatenate this summarization into all debater agents' chat history slots.
²The full prompt template can be found in Appendix A.
[Figure 2 panels: (a) One-by-One, (b) Simultaneous-Talk, (c) Simultaneous-Talk-with-Summarizer] | 2308.07201#10 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
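A small sketch of the simultaneous-talk-with-summarizer strategy from the ChatEval chunk above (an illustration under assumptions, not the paper's implementation): `ask_llm` and `summarize` are placeholders for chat-model calls, and the point shown is that each round's summary is concatenated into every debater's chat history slot.

```python
def ask_llm(role: str, history: list[str]) -> str:
    return f"{role} responds to {len(history)} messages"  # placeholder model call

def summarize(messages: list[str]) -> str:
    return "summary of: " + "; ".join(messages)           # placeholder summarizer LLM

def debate(roles: list[str], n_rounds: int) -> dict[str, list[str]]:
    histories = {role: [] for role in roles}  # one chat history slot per debater
    for _ in range(n_rounds):
        # All debaters generate asynchronously from their own current history.
        replies = [ask_llm(role, histories[role]) for role in roles]
        digest = summarize(replies)           # the extra summarizer LLM
        for role in roles:
            histories[role].append(digest)    # same digest goes to every debater
    return histories

print(debate(["Alice", "Bob", "Carol"], n_rounds=2))
```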
2308.06921 | 11 | Two recent studies in the computing education literature provide excellent motivation for our work. Both studies highlight the pressing need for a tool that provides appropriate guardrails when generating responses to students' requests. The first study, by Kazemitabaar et al., analyses student use of their Coding Steps tool [17]. Coding Steps integrates an AI code generator into the user interface of an online programming tool. When a student uses this code generator, they provide a natural language prompt which is packaged together with their existing code and six static examples and sent to the OpenAI Codex API. The response from the API is then automatically inserted for the student into the code editor. In their study, where students tackled 45 Python programming tasks over ten 90-minute sessions, AI-generated code was submitted by students without any modification 49% of the time. This heavy use of the code generator raises concerns around student over-reliance which has been identified as a key challenge for educators [1, 3, 7, 28]. The second study that is particularly pertinent to our work is the recent paper by Hellas et al. exploring responses generated by Codex and GPT-3.5 to 150 student help requests from a historical | 2308.06921#11 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 11 | # 2 BACKGROUND
## 2.1 Information Retrieval
Information retrieval (IR), as an essential branch of computer science, aims to efficiently retrieve information relevant to user queries from a large repository. Generally, users interact with the system by submitting their queries in textual form. Subsequently, IR systems undertake the task of matching and ranking these user-supplied queries against an indexed database, thereby facilitating the retrieval of the most pertinent results. | 2308.07107#11 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
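As a toy illustration of the match-and-rank loop described in the survey chunk above (not code from the survey), the sketch below builds an inverted index and scores documents by simple term frequency in place of a production ranking function.

```python
from collections import Counter, defaultdict

docs = {
    "d1": "large language models for retrieval",
    "d2": "sparse term based retrieval methods",
    "d3": "neural models capture semantic nuances",
}

# Inverted index: term -> {doc_id: term frequency}
index: defaultdict = defaultdict(Counter)
for doc_id, text in docs.items():
    for term in text.split():
        index[term][doc_id] += 1

def search(query: str, k: int = 2) -> list[tuple[str, int]]:
    scores: Counter = Counter()
    for term in query.split():            # match each query term...
        for doc_id, tf in index[term].items():
            scores[doc_id] += tf          # ...accumulating a crude relevance score
    return scores.most_common(k)          # rank: most pertinent results first

print(search("retrieval models"))         # [('d1', 2), ('d2', 1)]
```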
2308.07201 | 11 | [Figure 2 diagram: three panels of debater agents (e.g., Alice) exchanging messages over N rounds; caption below.]
Figure 2: The overall schematic diagram of our proposed three different kinds of communication strategy. The direction of the arrows represents the flow of information, meaning that what this person says will be appended to the chat history of the person pointed to by the arrow. Full algorithm description of the above communication strategies can be found in Appendix B.
Unlike previous work like Du et al. (2023), we do not explicitly ask the debater agents to reach a consensus at the end of the debate. In situations where the response format relies on direct comparison, we derive the final results from the majority vote among various annotators. Conversely, if the response format requires a direct score, we calculate the average score obtained from multiple annotators. This methodological approach ensures the impartiality and balance of our evaluation process.
# 3 EXPERIMENTS
We evaluate ChatEval on two benchmarks, FairEval and Topical-Chat, which represent the categories of open-ended question answering and dialogue response generation, respectively.
IMPLEMENTATION DETAILS | 2308.07201#11 | ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate | Text evaluation has historically posed significant challenges, often
demanding substantial labor and time cost. With the emergence of large language
models (LLMs), researchers have explored LLMs' potential as alternatives for
human evaluation. While these single-agent-based approaches show promise,
experimental results suggest that further advancements are needed to bridge the
gap between their current effectiveness and human-level evaluation quality.
Recognizing that best practices of human evaluation processes often involve
multiple human annotators collaborating in the evaluation, we resort to a
multi-agent debate framework, moving beyond single-agent prompting strategies.
The multi-agent-based approach enables a group of LLMs to synergize with an
array of intelligent counterparts, harnessing their distinct capabilities and
expertise to enhance efficiency and effectiveness in handling intricate tasks.
In this paper, we construct a multi-agent referee team called ChatEval to
autonomously discuss and evaluate the quality of generated responses from
different models on open-ended questions and traditional natural language
generation (NLG) tasks. Our analysis shows that ChatEval transcends mere
textual scoring, offering a human-mimicking evaluation process for reliable
assessments. Our code is available at https://github.com/chanchimin/ChatEval. | http://arxiv.org/pdf/2308.07201 | Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu | cs.CL | null | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2303.04048"
},
{
"id": "2004.04696"
},
{
"id": "1810.04805"
},
{
"id": "2005.00456"
},
{
"id": "2302.04166"
},
{
"id": "2304.02554"
},
{
"id": "2307.02762"
},
{
"id": "2305.19118"
},
{
"id": "2110.08207"
},
{
"id": "2303.15078"
},
{
"id": "2303.17760"
},
{
"id": "2307.04738"
},
{
"id": "2306.04181"
},
{
"id": "2306.05087"
},
{
"id": "2210.07197"
},
{
"id": "1904.09675"
},
{
"id": "2307.07924"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2305.14325"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2307.05300"
},
{
"id": "1909.02622"
},
{
"id": "2304.03442"
},
{
"id": "2109.06835"
},
{
"id": "2109.01652"
},
{
"id": "2305.13091"
},
{
"id": "2305.16960"
}
] |
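The ChatEval chunk above states the aggregation rule used in place of forced consensus. A minimal sketch of that rule (illustrative, not the authors' code): majority vote for comparative verdicts, arithmetic mean for direct scores.

```python
from collections import Counter
from statistics import mean

def aggregate_comparison(verdicts: list[str]) -> str:
    """Majority vote for comparative formats, e.g. ['A', 'B', 'A'] -> 'A'."""
    return Counter(verdicts).most_common(1)[0][0]

def aggregate_scores(scores: list[float]) -> float:
    """Average for direct-score formats, e.g. [4, 5, 3] -> 4.0."""
    return mean(scores)

print(aggregate_comparison(["A", "B", "A"]))  # A
print(aggregate_scores([4, 5, 3]))            # 4
```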