arxiv:1911.00536

DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation

Published on Nov 1, 2019
Authors:
Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan

Abstract

We present a large, tunable neural conversational response generation model, DialoGPT (dialogue generative pre-trained transformer). Trained on 147M conversation-like exchanges extracted from Reddit comment chains spanning 2005 through 2017, DialoGPT extends the Hugging Face PyTorch transformer to attain performance close to human levels in both automatic and human evaluation in single-turn dialogue settings. We show that conversational systems that leverage DialoGPT generate more relevant, contentful, and context-consistent responses than strong baseline systems. The pre-trained model and training pipeline are publicly released to facilitate research into neural response generation and the development of more intelligent open-domain dialogue systems.
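Since the pre-trained model is publicly released, it can be loaded through the Hugging Face transformers library. Below is a minimal sketch of single-turn response generation, assuming the publicly released microsoft/DialoGPT-medium checkpoint on the Hugging Face Hub and default greedy decoding; the example prompt is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the released DialoGPT checkpoint (assumed: microsoft/DialoGPT-medium).
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# DialoGPT models a dialogue as one long token sequence, with turns
# separated by the end-of-sequence token.
user_input = "Does money buy happiness?"
input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")

# Generate a continuation of the sequence past the EOS separator.
output_ids = model.generate(
    input_ids,
    max_length=200,
    pad_token_id=tokenizer.eos_token_id,
)

# Keep only the newly generated tokens, i.e. the model's turn.
response = tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```

For multi-turn dialogue, the same pattern applies: append each new user utterance (plus the EOS token) to the accumulated history of token IDs and generate from the concatenated sequence.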

Models citing this paper 22

Datasets citing this paper 0

Spaces citing this paper 1,159

Collections including this paper 0
