DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation
Abstract
We present a large, tunable neural conversational response generation model, DialoGPT (dialogue generative pre-trained transformer). Trained on 147M conversation-like exchanges extracted from Reddit comment chains spanning 2005 through 2017, DialoGPT extends the Hugging Face PyTorch transformer to attain performance close to human, in terms of both automatic and human evaluation, in single-turn dialogue settings. We show that conversational systems that leverage DialoGPT generate more relevant, contentful, and context-consistent responses than strong baseline systems. The pre-trained model and training pipeline are publicly released to facilitate research into neural response generation and the development of more intelligent open-domain dialogue systems.
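Since the abstract notes that the pre-trained model is publicly released and built on the Hugging Face PyTorch transformer stack, a minimal sketch of single-turn response generation follows, assuming the publicly hosted "microsoft/DialoGPT-medium" checkpoint and the standard transformers generation API (decoding settings here are illustrative, not the paper's exact configuration).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the released checkpoint; "microsoft/DialoGPT-medium" is one of the
# published model sizes (small and large variants also exist on the Hub).
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Encode a single-turn prompt; DialoGPT separates dialogue turns with the
# end-of-sequence token.
prompt = "Does money buy happiness?"
input_ids = tokenizer.encode(prompt + tokenizer.eos_token, return_tensors="pt")

# Sample a response; top-k sampling is an assumed decoding choice here.
output_ids = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens, i.e. the model's response.
response = tokenizer.decode(
    output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True
)
print(response)
```

Multi-turn dialogue follows the same pattern by concatenating previous turns, each terminated by the EOS token, into the context passed to generate.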