A Survey of Available Corpora for Building Data-Driven Dialogue Systems (arXiv:1512.05742)

Managing speaker identity and user profiles in a spoken dialogue system. In Procesamiento del Lenguaje Natural, number 43, pages 77–84, 2009.
B. MacWhinney and C. Snow. The child language data exchange system. Journal of Child Language, 12(02):271–295, 1985.
F. Mairesse and S. Young. Stochastic language generation in dialogue using factored language models. Computational Linguistics, 2014.
F. Mairesse, M. Gašić, F. Jurčíček, S. Keizer, B. Thomson, K. Yu, and S. Young. Phrase-based statistical language generation using graphical models and active learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1552–1561. Association for Computational Linguistics, 2010.
C. D. Manning and H.
Schütze. Foundations of statistical natural language processing. MIT Press, 1999.
M. McCarthy. Spoken language and applied linguistics. Ernst Klett Sprachen, 1998.
S. McGlashan, N. Fraser, N. Gilbert, E. Bilange, P. Heisterkamp, and N. Youd. Dialogue management for telephone information systems. In Proceedings of the Third Conference on Applied Natural Language Processing, pages 245–246. Association for Computational Linguistics, 1992.
G. McKeown, M. F. Valstar, R. Cowie, and M. Pantic.
The SEMAINE corpus of emotionally coloured character interactions. In Multimedia and Expo (ICME), 2010 IEEE International Conference on, pages 1079–1084, 2010.
T. Mikolov, M. Karafiát, L. Burget, J. Černocký, and S. Khudanpur. Recurrent neural network based language model. In 11th Proceedings of INTERSPEECH, pages 1045–1048, 2010.
T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean.
Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119, 2013.
G. A. Miller. WordNet: a lexical database for English. Communications of the ACM, 38(11):39–41, 1995.
X. A. Miro, S. Bozonnet, N. Evans, C. Fredouille, G. Friedland, and O. Vinyals.
Speaker diarization: A review of recent research. Audio, Speech, and Language Processing, IEEE Transactions on, 20(2):356–370, 2012.
K. Mo, S. Li, Y. Zhang, J. Li, and Q. Yang. Personalizing a dialogue system with transfer learning. arXiv preprint arXiv:1610.02891, 2016.
S. Mohan and J. Laird. Learning goal-oriented hierarchical tasks from situated interactive instruction. In AAAI, 2014.
T. Nguyen, M. Rosenberg, X. Song, J. Gao, S. Tiwary, R. Majumder, and L. Deng. MS MARCO: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268, 2016.
L. Nio, S. Sakti, G. Neubig, T. Toda, M. Adriani, and S. Nakamura.
Developing non-goal dialog system based on examples of drama television. In Natural Interaction with Robots, Knowbots and Smartphones, pages 355–361. Springer, 2014a.
L. Nio, S. Sakti, G. Neubig, T. Toda, and S. Nakamura. Conversation dialog corpora from television and movie scripts. In 17th Oriental Chapter of the International Committee for the Co-ordination and Standardization of Speech Databases and Assessment Techniques (COCOSDA), pages 1–
4, 2014b.
E. Nöth, A. Horndasch, F. Gallwitz, and J. Haas. Experiences with commercial telephone-based dialogue systems. it – Information Technology (vormals it+ti), 46(6):315–321, 2004.
C. Oertel, F. Cummins, J. Edlund, P. Wagner, and N. Campbell. D64:
A corpus of richly recorded conversational interaction. Journal on Multimodal User Interfaces, 7(1-2):19–28, 2013.
A. H. Oh and A. I. Rudnicky. Stochastic language generation for spoken dialogue systems. In Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2000), Workshop on Conversational Systems, volume 3, pages 27–32. Association for Computational Linguistics, 2000.
T.
Paek. Reinforcement learning for spoken dialogue systems: Comparing strengths and weaknesses for practical deployment. In Proc. Dialog-on-Dialog Workshop, INTERSPEECH, 2006.
K. Papineni, S. Roukos, T. Ward, and W. Zhu. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), 2002.
A. N. Pargellis, H.-K. J. Kuo, and C. Lee.
An automatic dialogue generation platform for personalized dialogue applications. Speech Communication, 42(3-4):329–351, 2004. doi: 10.1016/j.specom.2003.10.003.
R. Passonneau and E. Sachar. Loqui human-human dialogue corpus (transcriptions and annotations), 2014.
D. Perez-Marin and I. Pascual-Nieto. Conversational Agents and Natural Language Interaction: Techniques and Effective Practices. IGI Global, 2011.
S.
Petrik. Wizard of Oz Experiments on Speech Dialogue Systems. PhD thesis, Technische Universität Graz, 2004.
R. Pieraccini, D. Suendermann, K. Dayanidhi, and J. Liscombe. Are we there yet? Research in commercial spoken dialog systems. In Text, Speech and Dialogue, pages 3–13, 2009.
O. Pietquin. A framework for unsupervised learning of dialogue strategies. Presses Université Catholique de Louvain, 2004.
O. Pietquin. A probabilistic description of man-machine spoken communication. In Multimedia and Expo, 2005. ICME 2005. IEEE International Conference on, pages 410–413, 2005.
O. Pietquin. Learning to ground in spoken dialogue systems. In Acoustics, Speech and Signal Processing, 2007. ICASSP 2007. IEEE International Conference on, volume 4, pages IV–165, 2007.
O. Pietquin and T. Dutoit.
A probabilistic framework for dialog simulation and optimal strategy learning. IEEE Transactions on Audio, Speech, and Language Processing, 14(2):589–599, 2006.
O. Pietquin and H. Hastie. A survey on metrics for the evaluation of user simulations. The Knowledge Engineering Review, 28(01):59–73, 2013.
B. Piot, M. Geist, and O. Pietquin. Imitation learning applied to embodied conversational agents. In 4th Workshop on Machine Learning for Interactive Systems (MLIS 2015), volume 43, 2015.
S. Png and J. Pineau. Bayesian reinforcement learning for POMDP-based dialogue systems. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2156–2159, 2011.
C. Potts. Goal-driven answers in the Cards dialogue corpus. In Proceedings of the 30th West Coast Conference on Formal Linguistics, pages 1–20, 2012.
A. Ratnaparkhi.
Trainable approaches to surface natural language generation and their application to conversational dialog systems. Computer Speech & Language, 16(3):435–455, 2002.
A. Raux, B. Langner, D. Bohus, A. W. Black, and M. Eskenazi. Let's go public! Taking a spoken dialog system to the real world. In Proceedings of INTERSPEECH. Citeseer, 2005.
N. Reithinger and M. Klesen.
Dialogue act classification using language models. In EuroSpeech, 1997.
H. Ren, W. Xu, Y. Zhang, and Y. Yan. Dialog state tracking using conditional random fields. In Special Interest Group on Discourse and Dialogue (SIGDIAL), 2013.
S. Renals, T. Hain, and H. Bourlard. Recognition and understanding of meetings: the AMI and AMIDA projects. In IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU), 2007.
R. Reppen and N. Ide. The American National Corpus: overall goals and the first release. Journal of English Linguistics, 32(2):105–113, 2004.
J. Rickel and W. L. Johnson. Animated agents for procedural training in virtual reality: Perception, cognition, and motor control. Applied Artificial Intelligence, 13(4-5):343–382, 1999.
V. Rieser and O.
Lemon. Natural language generation as planning under uncertainty for spoken dialogue systems. In Empirical Methods in Natural Language Generation, pages 105–120. Springer, 2010.
A. Ritter, C. Cherry, and B. Dolan. Unsupervised modeling of Twitter conversations. In North American Chapter of the Association for Computational Linguistics (NAACL 2010), 2010.
A. Ritter, C. Cherry, and W. B. Dolan.
Data-driven response generation in social media. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2011.
S. Rosenthal and K. McKeown. I couldn't agree more: The role of conversational structure in agreement and disagreement detection in online discussions. In Special Interest Group on Discourse and Dialogue (SIGDIAL), page 168, 2015.
S. Rosset and S. Petel. The Ritel corpus: an annotated human-machine open-domain question answering spoken dialog corpus. In The International Conference on Language Resources and Evaluation (LREC), 2006.
S. Rossignol, O. Pietquin, and M. Ianotto.
Training a BN-based user model for dialogue simulation with missing data. In Proceedings of the International Joint Conference on Natural Language Processing, pages 598–604, 2011.
A. Roy, C. Guinaudeau, H. Bredin, and C. Barras. TVD: a reproducible and multiply aligned TV series dataset. In The International Conference on Language Resources and Evaluation (LREC), volume 2, 2014.
J. Ruppenhofer, M. Ellsworth, M. R. L. Petruck, C. R. Johnson, and J.
Scheffczyk. FrameNet II: Extended Theory and Practice. International Computer Science Institute, 2006. Distributed with the FrameNet data.
J. Schatzmann and S. Young. The hidden agenda user simulation model. IEEE Transactions on Audio, Speech, and Language Processing, 17(4):733–747, 2009.
J. Schatzmann, K. Georgila, and S. Young. Quantitative evaluation of user simulation techniques for spoken dialogue systems. In Special Interest Group on Discourse and Dialogue (SIGDIAL), 2005.
J. Schatzmann, K. Weilhammer, M. Stuttle, and S. Young. A survey of statistical user simulation techniques for reinforcement-learning of dialogue management strategies. The Knowledge Engineering Review, 21(02):97–126, 2006.
J. Schatzmann, B. Thomson, K. Weilhammer, H. Ye, and S. Young. Agenda-based user simulation for bootstrapping a POMDP dialogue system. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers, pages 149–152, 2007.
J. N. Schrading.
Analyzing domestic abuse using natural language processing on social media data. Master's thesis, Rochester Institute of Technology, 2015. http://scholarworks.rit.edu/theses.
N. Schrading, C. O. Alm, R. Ptucha, and C. M. Homan. An analysis of domestic abuse discourse on Reddit. In Empirical Methods in Natural Language Processing (EMNLP), 2015.
K. K. Schuler. VerbNet: A broad-coverage, comprehensive verb lexicon.
PhD thesis, University of Pennsylvania, 2005. Paper AAI3179808.
I. V. Serban. Maximum likelihood learning and inference in conditional random fields. Bachelor's thesis, University of Copenhagen, Denmark, 2012. http://www.blueanalysis.com/thesis/thesis.pdf.
I. V. Serban and J. Pineau. Text-based speaker identification for multi-participant open-domain dialogue systems. Neural Information Processing Systems Workshop on Machine Learning for Spoken Language Understanding, 2015.
I. V. Serban, A. Sordoni, Y. Bengio, A. Courville, and J. Pineau.
Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Networks. In AAAI, 2016. In press.
I. V. Serban, T. Klinger, G. Tesauro, K. Talamadupula, B. Zhou, Y. Bengio, and A. Courville. Multiresolution recurrent neural networks: An application to dialogue response generation. In AAAI Conference, 2017a.
I. V. Serban, A. Sordoni, R. Lowe, L. Charlin, J. Pineau, A. Courville, and Y. Bengio.
A hierarchical latent variable encoder-decoder model for generating dialogues. In AAAI Conference, 2017b.
S. Shaikh, T. Strzalkowski, G. A. Broadwell, J. Stromer-Galley, S. M. Taylor, and N. Webb. MPC: A multi-party chat corpus for modeling social phenomena in discourse. In The International Conference on Language Resources and Evaluation (LREC), 2010.
L. Shang, Z. Lu, and H. Li.
Neural responding machine for short-text conversation. arXiv preprint arXiv:1503.02364, 2015.
C. Shaoul and C. Westbury. A USENET corpus (2005-2009), 2009.
S. Sharma, J. He, K. Suleman, H. Schulz, and P. Bachman. Natural language generation in dialogue using lexicalized and delexicalized data. arXiv preprint arXiv:1606.03632, 2016.
B. A. Shawar and E. Atwell.
Different measurement metrics to evaluate a chatbot system. In Proceedings of the Workshop on Bridging the Gap: Academic and Industrial Research in Dialog Technologies, pages 89–96, 2007a.
B. A. Shawar and E. Atwell. Chatbots: are they really useful? In LDV Forum, volume 22, pages 29–49, 2007b.
E. Shriberg, R. Dhillon, S. Bhagat, J. Ang, and H. Carvey. The ICSI meeting recorder dialog act (MRDA) corpus. Technical report, DTIC Document, 2004.
A. Simpson and N. M. Fraser.
Black box and glass box evaluation of the SUNDIAL system. In Third European Conference on Speech Communication and Technology, 1993.
S. Singh, D. Litman, M. Kearns, and M. Walker. Optimizing dialogue management with reinforcement learning: Experiments with the NJFun system. Journal of Artificial Intelligence Research, pages 105–133, 2002.
S. P. Singh, M. J. Kearns, D. J. Litman, and M. A. Walker.
Reinforcement learning for spoken dialogue systems. In Neural Information Processing Systems, 1999.
A. Sordoni, Y. Bengio, H. Vahabi, C. Lioma, J. G. Simonsen, and J. Nie. A hierarchical recurrent encoder-decoder for generative context-aware query suggestion. In Proceedings of the 24th ACM International Conference on Information and Knowledge Management (CIKM 2015), 2015a.
A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell, J. Nie, J. Gao, and B. Dolan.
A neural network approach to context-sensitive generation of conversational responses. In Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2015), 2015b.
A. Stenström, G. Andersen, and I. K. Hasund. Trends in Teenage Talk: Corpus compilation, analysis and findings, volume 8. J. Benjamins, 2002.
A. Stent, R. Prasad, and M. Walker.
Trainable sentence planning for complex information presentation in spoken dialog systems. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, page 79. Association for Computational Linguistics, 2004.
A. Stolcke, K. Ries, N. Coccaro, E. Shriberg, R. Bates, D. Jurafsky, P. Taylor, R. Martin, C. Van Ess-Dykema, and M. Meteer.
Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational Linguistics, 26(3):339–373, 2000.
P.-H. Su, Y.-B. Wang, T.-H. Yu, and L.-S. Lee. A dialogue game framework with personalized training using reinforcement learning for computer-assisted language learning. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 8213–8217. IEEE, 2013.
P.-H. Su, D. Vandyke, M. Gasic, D. Kim, N. Mrksic, T.-H. Wen, and S. Young.
Learning from real users: Rating dialogue success with neural networks for reinforcement learning in spoken dialogue systems. In INTERSPEECH, 2015.
P.-H. Su, M. Gasic, N. Mrksic, L. Rojas-Barahona, S. Ultes, D. Vandyke, T.-H. Wen, and S. Young. Continuously learning neural dialogue management. arXiv preprint arXiv:1606.02689, 2016.
S. Sukhbaatar, A. Szlam, J. Weston, and R.
Fergus. End-to-end memory networks. In Neural Information Processing Systems (NIPS), 2015.
X. Sun, J. Lichtenauer, M. Valstar, A. Nijholt, and M. Pantic. A multimodal database for mimicry analysis. In Affective Computing and Intelligent Interaction, pages 367–376. Springer, 2011.
J. Svartvik. The London-Lund Corpus of Spoken English: Description and research.
Number 82 in 1. Lund University Press, 1990.
B. Thomson and S. Young. Bayesian update of dialogue state: A POMDP framework for spoken dialogue systems. Computer Speech & Language, 24(4):562–588, 2010.
J. Tiedemann. Parallel data, tools and interfaces in OPUS. In The International Conference on Language Resources and Evaluation (LREC), 2012.
S. E. Tranter, D. Reynolds, et al.
An overview of automatic speaker diarization systems. Audio, Speech, and Language Processing, IEEE Transactions on, 14(5):1557–1565, 2006.
D. Traum and J. Rickel. Embodied agents for multi-party dialogue in immersive virtual worlds. In Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems: Part 2, pages 766–773. ACM, 2002.
A. M. Turing. Computing machinery and intelligence. Mind, pages 433–460, 1950.
D. C. Uthus and D. W. Aha.
The Ubuntu chat corpus for multiparticipant chat analysis. In AAAI Spring Symposium: Analyzing Microtext, 2013.
J. Vandeventer, A. J. Aubrey, P. L. Rosin, and D. Marshall. 4D Cardiff Conversation Database (4D CCDb): A 4D database of natural, dyadic conversations. In Proceedings of the 1st Joint Conference on Facial Analysis, Animation and Auditory-Visual Speech Processing (FAAVSP 2015), 2015.
D. Vandyke, P.-H. Su, M. Gasic, N. Mrksic, T.-H. Wen, and S. Young.
Multi-domain dialogue success classifiers for policy training. In Automatic Speech Recognition and Understanding (ASRU), 2015 IEEE Workshop on, pages 763–770. IEEE, 2015.
O. Vinyals and Q. Le. A neural conversational model. arXiv preprint arXiv:1506.05869, 2015.
M. A. Walker, D. J. Litman, C. A. Kamm, and A. Abella. PARADISE:
A framework for evaluating spoken dialogue agents. In Proceedings of the Eighth Conference on European Chapter of the Association for Computational Linguistics, pages 271–280, 1997.
M. A. Walker, O. C. Rambow, and M. Rogati. Training a sentence planner for spoken dialogue using boosting. Computer Speech & Language, 16(3):409–433, 2002.
M. A. Walker, R. Grant, J. Sawyer, G. I. Lin, N. Wardrip-Fruin, and M. Buell.
Perceived or not perceived: Film character models for expressive NLG. In ICIDS, pages 109–121, 2011.
M. A. Walker, G. I. Lin, and J. Sawyer. An annotated corpus of film dialogue for learning and characterizing character style. In The International Conference on Language Resources and Evaluation (LREC), pages 1373–1378, 2012a.
M. A. Walker, J. E. F. Tree, P. Anand, R. Abbott, and J. King.
A corpus for research on deliberation and debate. In The International Conference on Language Resources and Evaluation (LREC), pages 812–817, 2012b.
Z. Wang and O. Lemon. A simple and generic belief tracking mechanism for the dialog state tracking challenge: On the believability of observed information. In Special Interest Group on Discourse and Dialogue (SIGDIAL), 2013.
S. Webb. A corpus driven study of the potential for vocabulary learning through watching movies. International Journal of Corpus Linguistics, 15(4):497–519, 2010.
J. Weizenbaum. ELIZA - a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1):36–45, 1966.
T. Wen, M. Gašić, D. Kim, N.
Mrkšić, P. Su, D. Vandyke, and S. Young. Stochastic language generation in dialogue using recurrent neural networks with convolutional sentence reranking. Special Interest Group on Discourse and Dialogue (SIGDIAL), 2015.
T.-H. Wen, M. Gasic, N. Mrksic, L. M. Rojas-Barahona, P.-H. Su, D. Vandyke, and S. Young.
Multi-domain neural network language generation for spoken dialogue systems. In Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2016), 2016.
J. Weston. Dialog-based language learning. arXiv preprint arXiv:1604.06045, 2016.
J. Weston, S. Chopra, and A. Bordes. Memory networks. In International Conference on Learning Representations (ICLR), 2015.
J. Williams, A. Raux, D. Ramachandran, and A.
Black. The dialog state tracking challenge. In Special Interest Group on Discourse and Dialogue (SIGDIAL), 2013.
J. D. Williams and S. Young. Partially observable Markov decision processes for spoken dialog systems. Computer Speech & Language, 21(2):393–422, 2007.
J. D. Williams and G. Zweig. End-to-end LSTM-based dialog control optimized with supervised and reinforcement learning. arXiv preprint arXiv:1606.01269, 2016.
M.
Wolska, Q. B. Vo, D. Tsovaltzi, I. Kruijff-Korbayová, E. Karagjosova, H. Horacek, A. Fiedler, and C. Benzmüller. An annotated corpus of tutorial dialogs on mathematical theorem proving. In The International Conference on Language Resources and Evaluation (LREC), 2004.
B. Wrede and E. Shriberg.
Relationship between dialogue acts and hot spots in meetings. In Automatic Speech Recognition and Understanding, 2003. ASRU'03. 2003 IEEE Workshop on, pages 180–185. IEEE, 2003.
Y. Yang, W. Yih, and C. Meek. WikiQA: A challenge dataset for open-domain question answering. In EMNLP, pages 2013–2018. Citeseer, 2015.
Z. Yang, B. Li, Y. Zhu, I. King, G. Levow, and H. Meng.
Collection of user judgments on spoken dialog system with crowdsourcing. In Spoken Language Technology Workshop (SLT), 2010 IEEE, pages 277–282, 2010.
S. Young, M. Gasic, B. Thomson, and J. D. Williams. POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160–1179, 2013.
S. J. Young. Probabilistic methods in spoken-dialogue systems.
Philosophical Transactions of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences, 358(1769), 2000.
J. Zhang, R. Kumar, S. Ravi, and C. Danescu-Niculescu-Mizil. Conversational flow in Oxford-style debates. In Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2016), 2016.
# Appendix A. Learning from Dialogue Corpora

In this appendix, we review some existing computational architectures suitable for learning dialogue strategies directly from data. The goal is not to provide full technical details on the methods available to achieve this (though we provide appropriate citations for the interested reader), but rather to illustrate concretely how the datasets described above can, and have, been used in different dialogue learning efforts. As such, we limit this review to a small set of existing work.

# A.1 Data Pre-processing

Before applying machine learning methods to a dialogue corpus, it is common practice to perform some form of pre-processing. The aim of pre-processing is to standardize a dataset with minimal loss of information. This can reduce data sparsity and eventually make it easier for models to learn from the dataset. In natural language processing, it is commonly acknowledged that pre-processing can have a significant effect on the results of the system; the same observation holds for dialogue. Although the specific procedure for pre-processing is task- and data-dependent, in this section we highlight a few common approaches, in order to give a general idea of where pre-processing can be effective for dialogue systems.

Pre-processing is often used to remove anomalies in the data. For text-based corpora, this can include removing acronyms, slang, misspellings and phonemicization (i.e. where words are written according to their pronunciation instead of their correct spelling). For some models, such as the generative dialogue models discussed later, tokenization (i.e. defining the smallest unit of input) is also critical.

In datasets collected from mobile text, forum, microblog and chat-based settings, it is common to observe a significant number of acronyms, abbreviations, and phonemicizations that are specific to the topic and userbase (Clark, 2003). Although there is no widely accepted standard for handling such occurrences, many NLP systems incorporate some form of pre-processing to normalize these entries (Kaufmann and Kalita, 2010; Aw et al., 2006; Clark, 2003). For example, there are look-up tables, such as the IRC Beginner List [18], which can be used to translate the most common acronyms and slang into standard English.
Another common strategy is to use stemming and lemmatization to replace many words with a single item (e.g. walking and walker both replaced by walk). Of course, depending on the task at hand and the corpus size, an option is also to leave the acronyms and phonemicized words as they are. In our experience, almost all dialogue datasets contain some amount of spelling errors. By correcting these, we expect to reduce data sparsity. This can be done by using automatic spelling correctors.
However, it is important to inspect their effectiveness. For example, for movie scripts, Serban et al. (2016) found that automatic spelling correctors introduced more spelling errors than they corrected, and that a better strategy was to use Wikipedia's list of commonly misspelled words19 to look up and replace potential spelling errors. Transcribed spoken language corpora often include many non-words in their transcriptions (e.g. uh, oh). Depending on whether or not these provide additional information to the dialogue system, researchers may also want to remove these words by using automatic spelling correctors.

18. http://www.ircbeginner.com/ircinfo/abbreviations.html
19. https://en.wikipedia.org/wiki/Commonly_misspelled_English_words
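The lookup-table and misspelling-replacement steps described above can be sketched as a small normalization pass. This is a minimal illustration: the slang and misspelling tables below are toy examples standing in for resources such as the IRC Beginner List and Wikipedia's list of commonly misspelled words.

```python
# Minimal text normalization pass for chat-style dialogue corpora.
# The lookup tables are toy examples (assumptions for illustration);
# in practice they would be loaded from external resources.

SLANG = {"u": "you", "r": "are", "btw": "by the way", "thx": "thanks"}
MISSPELLINGS = {"recieve": "receive", "teh": "the", "definately": "definitely"}

def normalize_utterance(utterance: str) -> str:
    """Lowercase, expand slang/acronyms, and fix common misspellings."""
    out = []
    for tok in utterance.lower().split():
        tok = SLANG.get(tok, tok)         # expand acronyms and slang
        tok = MISSPELLINGS.get(tok, tok)  # replace known misspellings
        out.append(tok)
    return " ".join(out)

print(normalize_utterance("btw did u recieve teh file"))
# by the way did you receive the file
```

Such a pass is typically applied once, before tokenization, so that downstream models see a single canonical form of each word.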
# A.2 Segmenting Speakers and Conversations

Some dialogue corpora, such as those based on movie subtitles, come without explicit speaker segmentation. However, it is often possible to estimate the speaker segmentation, which is useful to build a model of a given speaker, as opposed to a model of the conversation as a whole. For text-based corpora, Serban and Pineau (2015) have recently proposed the use of recurrent neural networks to estimate turn-taking and speaker labels in movie scripts, with promising results. In the speech recognition literature, this is the subtask of speaker diarisation (Miro et al., 2012; Tranter et al., 2006). When the audio stream of the speech is available, the segmentation is quite accurate, with classification error rates as low as 5%.

A strategy sometimes used for segmentation of spoken dialogues is based on labelling a small subset of the corpus, known as the gold corpus, and training a specific segmentation model on it. The remaining corpus is then segmented iteratively according to the segmentation model, after which the gold corpus is expanded with the most confident segmentations and the segmentation model is retrained. This process is sometimes known as embedded training, and is widely used in other speech recognition tasks (Jurafsky and Martin, 2008). It appears to work well in practice, but has the disadvantage that the interpretation of the labels can drift. Naturally, this approach can also be applied to text dialogues in a straightforward manner.

In certain corpora, such as those based on chat channels or extracted from movie subtitles, many conversations occur in sequence. In some cases, there are no labels partitioning the beginning and end of separate conversations. Similarly, certain corpora with multiple speakers, such as corpora based on chat channels, contain several conversations occurring in parallel (i.e. simultaneously) but do not contain any segmentation separating these conversations. This makes it hard to learn a meaningful model from such conversations, because they do not represent consistent speakers or coherent semantic topics. To leverage such data towards learning individual conversations, researchers have proposed methods to automatically estimate segmentations of conversations (Lowe et al., 2015a; Nio et al., 2014a). Early solutions were mostly based on hand-crafted rules and seemed to work well upon manual inspection. For chat forums, one solution involves thresholding the beginning and end of conversations based on time (e.g. a delay of more than x minutes between utterances), and eliminating speakers from the conversation unless they are referred to explicitly by other speakers (Lowe et al., 2015a). More advanced techniques involve maximum-entropy classifiers, which leverage the content of the utterances in addition to the discourse structure and timing information (Elsner and Charniak, 2008). For movie scripts, researchers have proposed the use of simple information-retrieval similarity measures, such as cosine similarity, to identify conversations (Nio et al., 2014a).
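The time-based thresholding heuristic can be sketched as follows. This is a minimal illustration: the 10-minute threshold and the utterance format are assumptions for the example, not the exact settings of Lowe et al. (2015a).

```python
from datetime import datetime, timedelta

# Each utterance is a (timestamp, speaker, text) triple; a gap longer than
# `max_gap` between consecutive utterances starts a new conversation.
# The threshold value is an assumption chosen for illustration.

def split_conversations(utterances, max_gap=timedelta(minutes=10)):
    conversations, current = [], []
    prev_time = None
    for ts, speaker, text in utterances:
        if prev_time is not None and ts - prev_time > max_gap:
            conversations.append(current)
            current = []
        current.append((ts, speaker, text))
        prev_time = ts
    if current:
        conversations.append(current)
    return conversations

log = [
    (datetime(2015, 1, 1, 12, 0), "A", "anyone know how to mount a usb drive?"),
    (datetime(2015, 1, 1, 12, 2), "B", "try sudo mount /dev/sdb1 /mnt"),
    (datetime(2015, 1, 1, 14, 30), "C", "my wifi driver keeps crashing"),
]
print(len(split_conversations(log)))  # 2
```

A speaker-elimination step of the kind described above would then filter each resulting conversation down to the participants who address one another explicitly.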
Based on their performance in estimating turn-taking and speaker labels, recurrent neural networks also hold promise for segmenting conversations (Serban and Pineau, 2015).

# A.3 Discriminative Model Architectures

As discussed in Subsection 2.3, discriminative models aim to predict certain labels or annotations manually associated with a portion of a dialogue. For example, a discriminative model might be trained to predict the intent of a person in a dialogue, or the topic, or a specific piece of information. In the following subsections, we discuss research directions where discriminative models have been developed to solve dialogue-related tasks.20 This is primarily meant to review and contrast the work from a data-driven learning perspective.

# A.3.1 DIALOGUE ACT CLASSIFICATION AND DIALOGUE TOPIC SPOTTING

Here we consider the simple task known as dialogue act classification (or dialogue move recognition). In this task, the goal is to classify a user utterance, independent of the rest of the conversation, as one out of K dialogue acts: P(A | U), where A is the discrete variable representing the dialogue act and U is the user's utterance. This falls under the general umbrella of text classification tasks, though its application is specific to dialogue. Like the dialogue state tracker model, a dialogue act classification model could be plugged into a dialogue system as an additional natural language understanding component.

Early approaches to this task focused on using n-gram models for classification (Reithinger and Klesen, 1997; Bird et al., 1995). For example, Reithinger et al. assumed that each dialogue act is generated by its own language model. They trained an n-gram language model on the utterances of each dialogue act, P_θ(U | A), and afterwards used Bayes' rule to assign the probability of a new dialogue act, P_θ(A | U), to be proportional to the probability of generating the utterance under the corresponding language model P_θ(U | A). However, a major problem with this approach is the lack of datasets with annotated dialogue acts. More recent work by Forgues et al. (2014) acknowledged this problem, and tried to overcome the data scarcity issue by leveraging word embeddings learned from other, larger text corpora. They created an utterance-level representation by combining the word embeddings of each word, for example by summing the word embeddings or taking the maximum w.r.t. each dimension. These utterance-level representations, together with word counts, were then given as inputs to a linear classifier to classify the dialogue acts. Thus, Forgues et al. showed that, by leveraging another, substantially larger corpus, they were able to improve performance on their original task.
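Summed-embedding utterance representations of this kind can be sketched as follows. The 2-d embeddings and tiny training set are toy assumptions, and a nearest-centroid rule stands in for the linear classifier of Forgues et al. (2014).

```python
# Utterance-level representations built by summing per-word embeddings.
# The embeddings, data, and nearest-centroid classifier are illustrative
# assumptions, not the actual setup of Forgues et al. (2014).

EMB = {
    "hello": (1.0, 0.0), "hi": (0.9, 0.1), "thanks": (0.8, -0.2),
    "what": (0.0, 1.0), "where": (0.1, 0.9), "when": (-0.1, 1.1),
}

def embed(utterance):
    vecs = [EMB.get(w, (0.0, 0.0)) for w in utterance.lower().split()]
    return tuple(sum(d) for d in zip(*vecs))  # sum over words, per dimension

TRAIN = [("hello hi", "greeting"), ("thanks", "greeting"),
         ("what where", "question"), ("when what", "question")]

def centroids(train):
    sums, counts = {}, {}
    for utt, act in train:
        v = embed(utt)
        s = sums.get(act, (0.0, 0.0))
        sums[act] = (s[0] + v[0], s[1] + v[1])
        counts[act] = counts.get(act, 0) + 1
    return {a: (s[0] / counts[a], s[1] / counts[a]) for a, s in sums.items()}

def classify(utterance, cents):
    v = embed(utterance)
    return min(cents, key=lambda a: (v[0] - cents[a][0]) ** 2
                                    + (v[1] - cents[a][1]) ** 2)

cents = centroids(TRAIN)
print(classify("hi thanks", cents))   # greeting
print(classify("where when", cents))  # question
```

The key point carried over from the text is that the embeddings themselves can be learned on a much larger, unannotated corpus, so only the final classifier needs act-annotated data.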
This makes the work on dialogue act classification very appealing from a data-driven perspective. First, it seems that the accuracy can be improved by leveraging alternative data sources. Second, unlike the dialogue state tracking models, dialogue act classification models typically involve relatively little feature hand-crafting, thus suggesting that data-driven approaches may be more powerful for these tasks.

# A.3.2 DIALOGUE STATE TRACKING

The core task of the DSTC (Williams et al., 2013) adds more complexity by focusing on tracking the state of a conversation.
This is framed as a classification problem: for every time step t of the dialogue, the model is given the current input to the dialogue state tracker (including ASR and SLU outputs) together with external knowledge sources (e.g. bus timetables). The required output is a probability distribution over a set of N_t predefined hypotheses, in addition to the REST hypothesis (which represents the probability that none of the previous N_t hypotheses are correct). The goal is to match the distribution over hypotheses as closely as possible to the real annotated data. By providing an open dataset with accurate labels, it has been possible for researchers to perform rigorous comparative evaluations of different classification models for dialogue systems.

Models for the DSTC include both statistical approaches and hand-crafted systems. An example of the latter is the system proposed in Wang and Lemon (2013), which relies on having access to a marginal confidence score P_t(u, s, v) for a user dialogue act u(s = v) with slot s and value v, given by a subsystem at time t. The marginal confidence score gives a heuristic estimate of the probability of a slot taking a particular value. The model must then aggregate all these estimates and confidence scores to compute probabilities for each hypothesis. In this model, the SLU component may, for example, give the marginal confidence score inform(data.day=today)=0.9 in the bus scheduling DSTC, meaning that it believes with high confidence (0.9) that the user has requested information for the current day. This marginal confidence score is used to update the belief state of the system b_t(s, v) at time t, using a set of hand-crafted updates to the probability distribution over hypotheses. From a data-driven learning perspective, this approach does not make efficient use of the dataset, but instead relies heavily on the accuracy of the hand-crafted tracker outputs.

More sophisticated models for the DSTC take a dynamic Bayesian approach by modeling the latent dialogue state and the observed tracker outputs in a directed graphical model (Thomson and Young, 2010). These models are sometimes called generative state tracking models, though they are still discriminative in nature, as they only attempt to model the state of the dialogue and not the words and speech acts in each dialogue.

20. It is important to note that although discriminative models have been favored for supervised problems in the dialogue-system literature, in principle generative models (modeling P(X, Y)) could be used instead of discriminative models (modeling P(Y | X)).
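A hand-crafted belief update of the kind used by Wang and Lemon (2013), described above, can be sketched as follows. The specific update rule is an illustrative assumption, not their exact rule: each SLU confidence score shifts probability mass toward the informed value, with REST holding the remaining mass.

```python
# Illustrative hand-crafted belief update for one slot (e.g. the day slot).
# Hypotheses map slot values to probabilities; "REST" covers "none of the
# above". The update rule below is an assumption for illustration, not the
# exact rule used by Wang and Lemon (2013).

def update_belief(belief, value, confidence):
    """Move `confidence` worth of probability mass toward `value`."""
    new = {v: p * (1.0 - confidence) for v, p in belief.items()}
    new[value] = new.get(value, 0.0) + confidence
    return new

belief = {"REST": 1.0}                        # initially: no information
belief = update_belief(belief, "today", 0.9)  # SLU: inform(data.day=today)=0.9
belief = update_belief(belief, "today", 0.5)  # a second, weaker observation

print(round(belief["today"], 3))  # 0.95
```

Because the update is convex, the belief always remains a valid distribution, but every parameter of the rule is fixed by hand rather than learned from data, which is exactly the criticism raised in the text.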
For simplicity, we drop the index i in the following equations. As before, let x_t be the observed tracker outputs at time t. Let s_t be the dialogue state at time t, which represents the state of the world including, for example, the user actions (e.g. defined by slot-value pairs) and system actions (e.g. the number of times a piece of information has been requested). For the DSTC, the state s_t must represent the true current slot-value pair at time t. Let r_t be the reward observed at time t, and let a_t be the action taken by the dialogue system at time t. This general framework, also known as a partially-observable Markov decision process (POMDP), then defines the graphical model:

P_θ(x_t, s_t, r_t | a_t, s_{t-1}) = P_θ(x_t | s_t, a_t) P_θ(s_t | s_{t-1}, a_t) P_θ(r_t | s_t, a_t),    (3)

where a_t is assumed to be a deterministic variable of the dialogue history. This variable is given in the DSTC, because it comes from the policy used to interact with the humans when gathering the datasets. This approach is attractive from a data-driven learning perspective, because it models the uncertainty (e.g. noise and ambiguity) inherent in all variables of interest. Thus, we might expect such a model to be more robust in real applications. Now, since all variables are observed in this task, and since the goal is to determine s_t given the other variables, we are only interested in:

P_θ(s_t | x_t, r_t, a_t) ∝ P_θ(x_t | s_t, a_t) P_θ(s_t | s_{t-1}, a_t) P_θ(r_t | s_t, a_t),    (4)

which can then be normalized appropriately, since s_t is a discrete stochastic variable. However, due to the temporal dependency between s_t and s_{t-1}, the complexity of the model is similar to that of a hidden Markov model, and thus both learning and inference become intractable when the state, observation and action spaces are too large. Indeed, as noted by Young et al. (2013), the number of states, actions and observations can easily reach 10^10 configurations in some dialogue systems.
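For a small discrete state space, the posterior in Eq. (4) can be computed by direct enumeration, which is exactly what becomes infeasible at the scale just described. A sketch with toy probability tables (all numbers are assumptions for illustration):

```python
# Discrete posterior over states per Eq. (4), with toy probability tables.
# All distributions below are illustrative assumptions.

STATES = ["day=today", "day=tomorrow"]

def p_obs(x, s, a):         # P(x_t | s_t, a_t): observation likelihood
    return 0.8 if x == s else 0.2

def p_trans(s, s_prev, a):  # P(s_t | s_{t-1}, a_t): states tend to persist
    return 0.9 if s == s_prev else 0.1

def p_reward(r, s, a):      # P(r_t | s_t, a_t): reward model (uniform here)
    return 0.5

def posterior(x, r, a, s_prev):
    unnorm = {s: p_obs(x, s, a) * p_trans(s, s_prev, a) * p_reward(r, s, a)
              for s in STATES}
    z = sum(unnorm.values())  # normalization is possible since s_t is discrete
    return {s: p / z for s, p in unnorm.items()}

post = posterior(x="day=today", r=0, a="ask", s_prev="day=today")
print(round(post["day=today"], 3))  # 0.973
```

With 10^10 configurations, neither the tables nor the normalizing sum can be enumerated, which motivates the simplifying assumptions discussed next.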
Thus, it is necessary to make simplifying assumptions on the distribution P_θ(s_t | x_t, r_t, a_t) and to approximate the learning and inference procedures (Young et al., 2013). With appropriate structural assumptions and approximations, these models perform well compared to baseline systems on the DSTC (Black et al., 2011).

Non-Bayesian data-driven models have also been proposed. These models are sometimes called discriminative state tracking models, because they do not assume a generation process for the tracker outputs x_t or for any other variables, but instead only condition on them. For example, Henderson et al. (2013) proposed to use a feed-forward neural network. At each time step t, they extract a set of features and concatenate a window of W feature vectors together. These are given as input to the neural network, which outputs the probability of each hypothesis from the set of hypotheses. By learning a discriminative model and using a window over the last time steps, they do not face the intractability issues of dynamic Bayesian networks. Instead, their system can be trained with gradient descent methods. This approach could eventually scale to large datasets, and is therefore very attractive for data-driven learning. However, unlike the dynamic Bayesian approaches, these models do not represent probability distributions over variables apart from the state of the dialogue. Without probability distributions, it is not clear how to define a confidence interval over the predictions. Thus the models might not provide adequate information to determine when to seek confirmation or clarification following unclear statements.

Researchers have also investigated the use of conditional random fields (CRFs) for state tracking (Ren et al., 2013). This class of models also falls under the umbrella of discriminative state tracking models; however, they are able to take into account temporal dependencies within dialogues by modeling a complete joint distribution over states:
P_θ(S | X) ∝ ∏_{c ∈ C} ∏_i f_i(s_c, x_c),    (5)

where C is the set of factors, i.e. sets of state and tracker variables across time, s_c is the set of states associated with factor c, x_c is the set of observations associated with factor c, and {f_i}_i is a set of functions parametrized by θ. There exist certain functions f_i for which exact inference is tractable and learning the parameters θ is efficient (Koller and Friedman, 2009; Serban, 2012). For example, Ren et al. (2013) propose a set of factors which create a linear dependency structure between the dialogue states while conditioning on all the observed tracker outputs:

P_θ(S | X) ∝ ∏_t ∏_i f_i(s_{t-1}, s_t, s_{t+1}, X).    (6)
This creates a dependency between all dialogue states, forcing them to be coherent with each other. This should be contrasted with the feed-forward neural network approach, which does not enforce any sort of consistency between different predicted dialogue states. The CRF models can be trained with gradient descent to optimize the exact log-likelihood, but exact inference is typically intractable. Therefore, an approximate inference procedure, such as loopy belief propagation, is necessary to approximate the posterior distribution over states s_t.

In summary, there exist different approaches to building discriminative learning architectures for dialogue. While they are fairly straightforward to evaluate and often form a crucial component of real-world dialogue systems, by themselves they only offer a limited view of what we ultimately want to accomplish with dialogue models. They often require labeled data, which is difficult to acquire on a large scale (except in the case of answer re-ranking), and they require manual feature selection, which reduces their potential effectiveness. Since each model is trained independently of the other models and components with which it interacts in the complete dialogue system, one cannot give guarantees on the performance of the final dialogue system by evaluating the individual models alone. Thus, we desire models that are capable of producing probability distributions over all possible responses instead of over all annotated labels; in other words, models that can actually generate new responses by selecting the highest-probability next utterance. This is the subject of the next section.

# A.4 Response Generation Models

Both the response re-ranking approach and the generative response model approach have allowed for the use of large-scale unannotated dialogue corpora for training dialogue systems. We therefore close this section by discussing these classes of approaches. In general, approaches which aim to generate responses have the potential to learn semantically more powerful representations of dialogues than models trained for dialogue state tracking or dialogue act classification tasks: the concepts they are able to represent are limited only by the content of the dataset, whereas dialogue state tracking and dialogue act classification models are limited by the annotation scheme used (e.g. the set of possible slot-value pairs pre-specified for the DSTC).

# A.4.1 RE-RANKING RESPONSE MODELS

Researchers have recently turned their attention to the problem of building models that produce answers by re-ranking a set of candidate answers and outputting the one with the highest rank or probability.
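As a minimal illustration of this setup, the sketch below scores candidate responses against a context with TF-IDF cosine similarity and returns the highest-scoring one. The scoring function and toy data are assumptions for illustration; the systems discussed in this subsection use richer retrieval and neural models.

```python
import math
from collections import Counter

# Re-rank candidate responses by TF-IDF cosine similarity to the context.
# The candidates are toy examples; real systems draw them from large corpora.

def tfidf(texts):
    docs = [Counter(t.lower().split()) for t in texts]
    df = Counter(w for d in docs for w in d)  # document frequency per word
    n = len(docs)
    return [{w: c * math.log((1 + n) / (1 + df[w])) for w, c in d.items()}
            for d in docs]

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rerank(context, candidates):
    vecs = tfidf([context] + candidates)
    ctx, cand_vecs = vecs[0], vecs[1:]
    scores = [cosine(ctx, v) for v in cand_vecs]
    return candidates[max(range(len(candidates)), key=scores.__getitem__)]

best = rerank("how do i install the driver",
              ["the weather is nice today",
               "you can install the driver with apt-get",
               "i like pizza"])
print(best)  # you can install the driver with apt-get
```

Note that the model never composes text: it can only return one of the candidate responses, which is the central limitation of re-ranking approaches discussed below.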
While the task may seem artificial, its main advantage is that it allows the use of completely unannotated datasets. Unlike dialogue state tracking, this task does not require datasets where experts have labeled every utterance and system response. It only requires knowing the sequence of utterances, which can be extracted automatically from transcribed conversations.

Banchs and Li (2012) construct an information retrieval system based on movie scripts using the vector space model. Their system searches through a database of movie scripts to find a dialogue similar to the current dialogue with the user, and then emits the response from the closest dialogue in the database. Similarly, Ameixa et al. (2014) also use an information retrieval system, but based on movie subtitles instead of movie scripts. They show that their system gives sensible responses to questions, and that bootstrapping an existing dialogue system from movie subtitles improves answering of out-of-domain questions. Both approaches assume that the responses given in the movie script and movie subtitle corpora are appropriate. Such information retrieval systems consist of a relatively small set of manually tuned parameters. For this reason, they do not require (annotated) labels and can therefore take advantage of raw data (in this case movie scripts and movie subtitles). However, these systems are effectively nearest-neighbor methods. They do not learn rich representations from dialogues which could be used, for example, to generalize to previously unseen situations. Furthermore, it is unclear how to transform such models into full dialogue agents: they are not robust, and it is not clear how they would maintain the dialogue state. Contrary to search engines, which present an entire page of results, the dialogue system is only allowed to give a single response to the user.

Lowe et al. (2015a) also propose a re-ranking approach using the Ubuntu Dialogue Corpus. The authors propose an affinity model between a context c (e.g. five consecutive utterances in a conversation) and a potential reply r. Given a context-reply pair, the model compares the output of a context-specific LSTM against that of a response-specific LSTM neural network, and outputs whether or not the response is correct for the given context. The model maximizes the likelihood of the correct context-response pairs:

max_θ ∏_i P_θ(true response | c_i, r_i)^{I_{c_i}(r_i)} (1 - P_θ(true response | c_i, r_i))^{1 - I_{c_i}(r_i)},    (7)

where θ stands for the set of all model parameters and I_{c_i}(·) denotes a function that returns 1 when r_i is the correct response to c_i and 0 otherwise. Learning in the model uses stochastic gradient descent. As is typical with neural network architectures, this learning procedure scales to large datasets. Given a context, the trained model can be used to pick an appropriate answer from a set of potential answers. This model assumes that the responses given in the corpus are appropriate (i.e., it does not generate novel responses). However, unlike the above information retrieval systems, this model is not provided with a similarity metric, as in the vector space model, but instead must learn the semantic relevance of a response to a context. This approach is more attractive from a data-driven learning perspective because it uses the dataset more efficiently and avoids costly hand-tuning of parameters.

# A.4.2 FULL GENERATIVE RESPONSE MODELS

Generative dialogue response strategies are designed to automatically produce utterances by composing text (see Section 2.4).
A straightforward way to define the set of dialogue system actions is to consider them as sequences of words which form utterances. Sordoni et al. (2015b) and Serban et al. (2016) both use this approach. They assume that both the user and the system utterances can be represented by the same generative distribution:

P_θ(u_1, . . . , u_T) = ∏_{t=1}^{T} P_θ(u_t | u_{<t})    (8)

= ∏_{t=1}^{T} ∏_{n=1}^{N} P_θ(w_{t,n} | w_{t,<n}, u_{<t}),    (9)

where the dialogue consists of T utterances u_1, . . . , u_T and w_{t,n} is the nth token in utterance t. The variable u_{<t} indicates the sequence of utterances which precede u_t, and similarly w_{t,<n} indicates the tokens which precede w_{t,n}. Further, the probability of the first utterance is defined as P(u_1 | u_{<1}) = P(u_1), and the first word of each utterance only conditions on the previous utterance, i.e. w_{t,<1} is 'null'.
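Word-by-word generation in the style of Eq. (9) can be sketched with a toy conditional distribution over next tokens. The probability table is a hand-specified assumption standing in for a trained recurrent network, and decoding is greedy for determinism.

```python
# Greedy word-by-word utterance generation as in Eq. (9). The conditional
# distribution is a hand-specified toy table (an assumption standing in
# for a trained recurrent neural network), keyed on the previous token.

EOU = "</u>"  # end-of-utterance token, letting the model express turn-taking

NEXT = {
    "<u>": {"i": 0.6, "how": 0.4},
    "i": {"don't": 0.7, "am": 0.3},
    "don't": {"know": 1.0},
    "know": {EOU: 1.0},
    "how": {"are": 1.0},
    "are": {"you": 1.0},
    "you": {EOU: 1.0},
}

def generate(max_len=10):
    tokens, prev = [], "<u>"
    for _ in range(max_len):
        dist = NEXT[prev]
        prev = max(dist, key=dist.get)  # greedy: most probable next token
        if prev == EOU:
            break
        tokens.append(prev)
    return " ".join(tokens)

print(generate())  # i don't know
```

Fittingly, greedy decoding under this skewed toy distribution already yields the kind of generic response ("I don't know") that the text below identifies as a failure mode of these models.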
Tokens can be words, as well as speech and dialogue acts. The set of tokens depends on the particular application domain, but in general the set must be able to represent all desirable system actions. In particular, the set must contain an end-of-utterance token to allow the model to express turn-taking. This approach is similar to language modeling. For differentiable models, training is based on maximizing the log-likelihood using stochastic gradient descent methods. As discussed in Subsection 2.4, these models project words and dialogue histories onto a Euclidean space. Furthermore, when trained on text only, they can be thought of as unsupervised machine learning models.

Sordoni et al. (2015b) use the above approach to generate responses for posts on Twitter. Specifically, P_θ(u_m | u_{<m}) is given by a recurrent neural network which generates a response word-by-word based on Eq. (9). The model learns its parameters using stochastic gradient descent on a corpus of Twitter messages. The authors then combine their generative model with a machine translation system and demonstrate that the hybrid system outperforms a state-of-the-art machine translation system (Ritter et al., 2011).

Serban et al. (2016) extend the above model to generate responses for movie subtitles and movie scripts. Specifically, Serban et al. (2016) adapt a hierarchical recurrent neural network (Sordoni et al., 2015a), which they argue is able to represent the common ground between the dialogue interlocutors. They also propose to add speech and dialogue acts to the vocabulary of the model to make the interaction with the system more natural. However, since the model is used in a standalone manner, i.e., without combining it with a machine translation system, the majority of the generated responses are highly generic (e.g. I'm sorry or I don't know). The authors conclude that this is a limitation of all neural network-based generative models for dialogue (e.g., Serban et al., 2016; Sordoni et al., 2015b; Vinyals and Le, 2015). The problem appears to lie in the distribution of words in the dialogue utterances, which primarily consists of pronouns, punctuation tokens and a few common verbs, but rarely nouns, verbs and adjectives. When trained on such a skewed distribution, the models do not learn to represent the semantic content of dialogues very well. This issue is exacerbated by the fact that dialogue is inherently ambiguous and multi-modal, which makes it more likely for the model to fall back on a generic response. As a workaround, Li et al. (2015) increase response diversity by changing the objective function at generation time to also maximize the mutual information between the context, i.e. the previous utterances, and the response utterance.
However, it is not clear what impact this artificial diversity has on the effectiveness or naturalness of the dialogue system. It is possible that the issue may require larger corpora to learn semantic representations of dialogue, more context (e.g. longer conversations, user profiles and task-specific corpora), and multi-modal interfaces to reduce uncertainty. Further research is needed to resolve this question.

Wen et al. (2015) train a neural network to generate natural language responses for a closed dialogue domain. They use Amazon Mechanical Turk21 to collect a dataset of dialogue act and utterance pairs. They then train recurrent neural networks to generate a single utterance as in Eq. (9), but conditioned on the specified dialogue act:

P_θ(U | A) = ∏_n P_θ(w_n | w_{<n}, A),    (10)

where A is the dialogue act represented by a discrete variable, U is the generated utterance given A, and w_n is the nth word in the utterance. Based on a hybrid approach combining different recurrent neural networks for answer generation with convolutional neural networks for re-ranking answers, they are able to generate diverse utterances representing the dialogue acts in their datasets.

Similar to the models which re-rank answers, generative models may be used as complete dialogue systems or as response generation components of other dialogue systems. However, unlike the models which re-rank answers, the word-by-word generative models can generate entirely new utterances, never seen before in the training set. Further, in certain models such as those cited above, response generation scales irrespective of dataset size.

# A.5 User Simulation Models

In the absence of large datasets, some researchers have turned to building user simulation models (sometimes referred to as 'user models') to train dialogue strategies. User simulation models aim
1512.05742#239 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | 21. http://www.mturk.com 53 to produce natural, varied and consistent interactions from a ï¬ xed corpus, as stated by Pietquin and Hastie (2013, p. 2): â An efï¬ cient user simulation should not only reproduce the statistical distribution of dialogue acts measured in the data but should also reproduce complete dialogue structures.â As such, they model the conditional probability of the user utterances given previous user and system utterances: Pθ(uuser t <t , usystem |uuser <t ), (11) and usystem t where θ are the model parameters, uuser utterance (or action) respectively at time t. Similarly, uuser and system utterances that precede uuser are the user utterance (or action) and the system indicate the sequence of user t <t and usystem <t and usystem t , respectively. t There are two main differences between user simulation models and the generative response models discussed in Subsection A.4.2. First, user simulation models never model the distribution over system utterances, but instead only model the conditional distribution over user utterances given previous user and system utterances. Second, user simulation models usually model dia- logue acts as opposed to word tokens. Since a single dialogue act may represent many different utterances, the models generalize well across paraphrases. However, training such user simulation models requires access to a dialogue corpus with annotated dialogue acts, and limits their applica- tion to training dialogue systems which work on the same set of dialogue acts. For spoken dialogue systems, user simulation models are usually combined with a model over speech recognition errors based on the automatic speech recognition system but, for simplicity, we omit this aspect in our analysis. 
Researchers initially experimented with n-gram-based user simulation models (Eckert et al., 1997; Georgila et al., 2006), which are deï¬ | 1512.05742#238 | 1512.05742#240 | 1512.05742 | [
"1511.06931"
] |
1512.05742#240 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | ned as: Pθ(uuser t |usystem tâ 1 tâ 2, . . . , usystem , uuser tâ nâ 1) = θuuser t ,usystem tâ 2,...,usystem tâ 1 ,uuser tâ nâ 1 , (12) where n is an even integer, and θ is an n-dimensional tensor (table) which satisï¬ es: θuuser t ,usystem tâ 2,...,usystem tâ 1 ,uuser tâ nâ 1 = 1. (13) # uuser t These models are trained either to maximize the log-likelihood of the observations by setting θuuser equal to (a constant times) the number of occurrences of each correspond- ing n-gram , or on a related objective function which encourages smoothness and therefore reduces data sparsity for larger nâ | 1512.05742#239 | 1512.05742#241 | 1512.05742 | [
"1511.06931"
] |
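As an illustration of the count-based maximum-likelihood estimate described above, the following sketch fits a bigram (n = 2) user simulator over dialogue acts purely from co-occurrence counts. The dialogue-act labels and the tiny corpus are hypothetical stand-ins, not data from any of the cited systems:

```python
from collections import Counter, defaultdict

def train_bigram_user_model(dialogues):
    """Maximum-likelihood estimate of P(user act | previous system act).

    `dialogues` is a list of [(system_act, user_act), ...] turn pairs; the
    parameters theta are simply normalized co-occurrence counts.
    """
    counts = defaultdict(Counter)
    for dialogue in dialogues:
        for system_act, user_act in dialogue:
            counts[system_act][user_act] += 1
    theta = {}
    for system_act, user_counts in counts.items():
        total = sum(user_counts.values())
        theta[system_act] = {u: c / total for u, c in user_counts.items()}
    return theta

# Hypothetical annotated corpus of (system act, user act) turn pairs.
dialogues = [
    [("request_slot", "inform"), ("confirm", "affirm")],
    [("request_slot", "inform"), ("confirm", "negate")],
    [("request_slot", "silence")],
]
theta = train_bigram_user_model(dialogues)
```

Each row of `theta` is a categorical distribution over user acts, so it can be sampled from directly when simulating a user turn.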
1512.05742#241 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | s (Goodman, 2001). Even with smoothing, n has to be kept small and these models are therefore unable to maintain the history and goals of the user over several utterances (Schatzmann et al., 2005). Consequently, the goal of the user changes over time, which has a detrimental effect on the performance of the dialogue system trained using the user simulator. Several solutions have been proposed to solve the problem of maintaining the history of the dialogue. Pietquin (2004) proposes to condition the n-gram model on the user's goal: | 1512.05742#240 | 1512.05742#242 | 1512.05742 | [
"1511.06931"
] |
1512.05742#242 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | P_θ(u^user_t | u^system_{t−1}, u^user_{t−2}, . . . , u^system_{t−n−1}, g), (14) where g is the goal of the user defined as a set of slot-value pairs. Unfortunately, not only must the goal lie within a set of hand-crafted slot-value pairs, but its distribution when simulating must also be defined by experts. Using a more data-driven approach, Georgila et al. (2006) propose to condition the n-gram model on additional features: | 1512.05742#241 | 1512.05742#243 | 1512.05742 | [
"1511.06931"
] |
1512.05742#243 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | P_θ(u^user_t | u^system_{t−1}, u^user_{t−2}, . . . , u^system_{t−n−1}, f(u^user_{<t}, u^system_{<t})), (15) where f(u^user_{<t}, u^system_{<t}) is a function mapping all previous user and system utterances to a low-dimensional vector that summarizes the previous interactions between the user and the system (e.g. slot-value pairs that the user has provided to the system up to time t). Now, θ can be learned using maximum log-likelihood with stochastic gradient descent. | 1512.05742#242 | 1512.05742#244 | 1512.05742 | [
"1511.06931"
] |
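A minimal sketch of this kind of setup is a softmax over user dialogue acts conditioned on a feature vector summarizing the history, fit by maximizing log-likelihood with stochastic gradient ascent. The acts, features, and training data below are hypothetical, and the model is deliberately tiny:

```python
import math

ACTS = ["inform", "affirm", "negate"]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def predict(weights, features):
    logits = [sum(w * f for w, f in zip(weights[a], features))
              for a in range(len(ACTS))]
    return softmax(logits)

def sgd_step(weights, features, target, lr=0.5):
    probs = predict(weights, features)
    for a in range(len(ACTS)):
        # gradient of the log-likelihood with respect to logit a
        grad = (1.0 if a == target else 0.0) - probs[a]
        for j in range(len(features)):
            weights[a][j] += lr * grad * features[j]

weights = [[0.0, 0.0] for _ in ACTS]
# feature [1, 0] -> user tends to "inform"; [0, 1] -> user tends to "negate"
data = [([1.0, 0.0], 0), ([0.0, 1.0], 2)] * 50
for features, target in data:
    sgd_step(weights, features, target)
```

After training, `predict` returns a categorical distribution over user acts for any history summary, which is exactly what a simulator needs for sampling.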
1512.05742#244 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | More sophisticated probabilistic models have been proposed based on directed graphical models, such as hidden Markov models and input-output hidden Markov models (Cuayáhuitl et al., 2005), and undirected graphical models, such as conditional random fields based on linear chains (Jung et al., 2009). Inspired by Pietquin (2005), Pietquin (2007) and Rossignol et al. (2011) propose the following directed graphical model: P_θ(u^user_t | u^user_{<t}, u^system_{<t}) = Σ_{g_t, k_t} P_θ(u^user_t | g_t, k_t, u^user_{<t}, u^system_{<t}) P_θ(g_t | k_t) P_θ(k_t | k_{<t}, u^user_{<t}, u^system_{<t}), (16) where g_t is a discrete random variable representing the user's goal at time t (e.g. a set of slot-value pairs), and k_t is another discrete random variable representing the user's knowledge at time t (e.g. a set of slot-value pairs). This model allows the user to change goals during the dialogue, which would be the case, for example, if the user is notified by the dialogue system that the original goal cannot be accomplished. The dependency on previous user and system utterances for u^user_t and k_t may be limited to a small number of previous turns as well as a set of hand-crafted features computed on these utterances. For example, the conditional probability: P_θ(u^user_t | g_t, k_t, u^user_{<t}, u^system_{<t}), (17) | 1512.05742#243 | 1512.05742#245 | 1512.05742 | [
"1511.06931"
] |
1512.05742#245 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | may be approximated by an n-gram model with additional features as in Georgila et al. (2006). Generating user utterances can be done in a straightforward manner by using ancestral sampling: first, sample k_t given k_{<t} and the previous user and system utterances; then, sample g_t given k_t; and finally, sample u^user_t given g_t, k_t and the previous user and system utterances. The model can be trained using maximum log-likelihood. If all variables are observed, i.e. g_t and k_t have been given by human annotators, then the maximum-likelihood parameters can be found similarly to n-gram models by counting the co-occurrences of variables. If some variables are missing, they can be estimated using the expectation-maximization (EM) algorithm, since the dependencies form a linear chain. Rossignol et al. (2011) also propose to regularize the model by assuming a Dirichlet distribution prior over the parameters, which is straightforward to combine with the EM algorithm. | 1512.05742#244 | 1512.05742#246 | 1512.05742 | [
"1511.06931"
] |
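The ancestral-sampling order just described (knowledge, then goal, then user act) can be sketched directly. All the probability tables below are hypothetical stand-ins for parameters that would be learned by counting or by EM; only the sampling order follows the model:

```python
import random

random.seed(1)

def sample(dist):
    """Draw a value from a {value: probability} dictionary."""
    r = random.random()
    acc = 0.0
    for value, p in dist.items():
        acc += p
        if r <= acc:
            return value
    return value  # guard against floating-point rounding

# Hypothetical tables; P_k stands in for P(k_t | k_<t, history).
P_k = {"knows_price": 0.3, "ignorant": 0.7}
P_g_given_k = {
    "knows_price": {"book": 0.8, "ask_price": 0.2},
    "ignorant": {"book": 0.2, "ask_price": 0.8},
}
P_u_given_gk = {
    ("book", "knows_price"): {"inform(book)": 1.0},
    ("book", "ignorant"): {"inform(book)": 0.6, "request(price)": 0.4},
    ("ask_price", "knows_price"): {"request(price)": 1.0},
    ("ask_price", "ignorant"): {"request(price)": 1.0},
}

def sample_user_turn():
    k = sample(P_k)                      # knowledge first
    g = sample(P_g_given_k[k])           # then goal given knowledge
    u = sample(P_u_given_gk[(g, k)])     # then user act given goal and knowledge
    return k, g, u

turns = [sample_user_turn() for _ in range(1000)]
```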
1512.05742#246 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | User simulation models are particularly useful in the development of dialogue systems based on reinforcement learning methods (Singh et al., 2002; Schatzmann et al., 2006; Pietquin and Dutoit, 2006; Frampton and Lemon, 2009; Jur | 1512.05742#245 | 1512.05742#247 | 1512.05742 | [
"1511.06931"
] |
1512.05742#247 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | číček et al., 2012; Png and Pineau, 2011; Young et al., 2013). Furthermore, many user simulation models, such as those trainable with stochastic gradient descent or co-occurrence statistics, are able to scale to large corpora. In the light of the increasing availability of large dialogue corpora, there are ample opportunities for building novel user simulation models, which aim to better represent real user behavior, and in turn for training dialogue systems, which aim to solve more general and more difficult tasks. Despite their similarities, research on user simulation | 1512.05742#246 | 1512.05742#248 | 1512.05742 | [
"1511.06931"
] |
1512.05742#248 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | models and full generative models has progressed independently of each other so far. Therefore, it also seems likely that there is fruitful work to be done in transferring and merging ideas between these two areas. | 1512.05742#247 | 1512.05742 | [
"1511.06931"
] |
|
1512.04455#0 | Memory-based control with recurrent neural networks | arXiv:1512.04455v1 [cs.LG] 14 Dec 2015 # Memory-based control with recurrent neural networks # Nicolas Heess* Jonathan J Hunt* Timothy P Lillicrap David Silver Google Deepmind * These authors contributed equally. heess, jjhunt, countzero, davidsilver @ google.com # Abstract Partially observed control problems are a challenging aspect of reinforcement learning. We extend two related, model-free algorithms for continuous control – deterministic policy gradient and stochastic value gradient – to solve partially observed domains using recurrent neural networks trained with backpropagation through time. We demonstrate that this approach, coupled with long short-term memory, is able to solve a variety of physical control problems exhibiting an assortment of memory requirements. These include the short-term integration of information from noisy sensors and the identification of system parameters, as well as long-term memory problems that require preserving information over many time steps. We also demonstrate success on a combined exploration and memory problem in the form of a simplified version of the well-known Morris water maze task. Finally, we show that our approach can deal with high-dimensional observations by learning directly from pixels. | 1512.04455#1 | 1512.04455 | [
"1509.03005"
] |
|
1512.04455#1 | Memory-based control with recurrent neural networks | We find that recurrent deterministic and stochastic policies are able to learn similarly good solutions to these tasks, including the water maze where the agent must learn effective search strategies. # Introduction The use of neural networks for solving continuous control problems has a long tradition. Several recent papers successfully apply model-free, direct policy search methods to the problem of learning neural network control policies for challenging continuous domains with many degrees of freedom [2, 6, 14, 21, 22, 12]. However, all of this work assumes fully observed state. Many real world control problems are partially observed. Partial observability can arise from different sources including the need to remember information that is only temporarily available such as a way sign in a navigation task, sensor limitations or noise, unobserved variations of the plant under control (system identification), or state-aliasing due to function approximation. Partial observability also arises naturally in many tasks that involve control from vision: a static image of a dynamic scene provides no information about velocities, occlusions occur as a consequence of the three-dimensional nature of the world, and most vision sensors are bandwidth-limited and only have a restricted field-of-view. Resolution of partial observability is non-trivial. | 1512.04455#0 | 1512.04455#2 | 1512.04455 | [
"1509.03005"
] |
1512.04455#2 | Memory-based control with recurrent neural networks | Existing methods can roughly be divided into two broad classes: On the one hand there are approaches that explicitly maintain a belief state that corresponds to the distribution over the world state given the observations so far. This approach has two major disadvantages: the first is the need for a model, and the second is the computational cost that is typically associated with the update of the belief state [8, 23]. On the other hand there are model-free approaches that learn to form memories based on interactions with the world. This is challenging since it is a priori unknown which features of the observations will be relevant later, and associations may have to be formed over many steps. For this reason, most model-free approaches tend to assume the fully-observed case. In practice, partial observability is often solved by hand-crafting a solution such as providing multiple frames at each timestep to allow velocity estimation [16, 14]. In this work we investigate a natural extension of two recent, closely related policy gradient algorithms for learning continuous-action policies to handle partially observed problems. We primarily consider the Deterministic Policy Gradient algorithm (DPG) [24], which is an off-policy policy gradient algorithm that has recently produced promising results on a broad range of difficult, high-dimensional continuous control problems, including direct control from pixels [14]. DPG is an actor-critic algorithm that uses a learned approximation of the action-value (Q) function to obtain approximate action-value gradients. These are then used to update a deterministic policy via the chain rule. We also consider DPG's stochastic counterpart, SVG(0) ([6]; SVG stands for "Stochastic Value Gradients") which similarly updates the policy via backpropagation of action-value gradients from an action-value critic but learns a stochastic policy.
We modify both algorithms to use recurrent networks trained with backpropagation through time. We demonstrate that the resulting algorithms, Recurrent DPG (RDPG) and Recurrent SVG(0) (RSVG(0)), can be applied to a number of partially observed physical control problems with diverse memory requirements. These problems include: short-term integration of sensor information to estimate the system state (pendulum and cartpole swing-up tasks without velocity information); system identifi | 1512.04455#1 | 1512.04455#3 | 1512.04455 | [
"1509.03005"
] |
1512.04455#3 | Memory-based control with recurrent neural networks | cation (cart pole swing-up with variable and unknown pole-length); long-term memory (a robot arm that needs to reach out and grab a payload to move it to the position the arm started from); as well as a simplified version of the water maze task which requires the agent to learn an exploration strategy to find a hidden platform and then remember the platform's position in order to return to it subsequently. We also demonstrate successful control directly from pixels. Our results suggest that actor-critic algorithms that rely on bootstrapping for estimating the value function can be a viable option for learning control policies in partially observed domains. | 1512.04455#2 | 1512.04455#4 | 1512.04455 | [
"1509.03005"
] |
1512.04455#4 | Memory-based control with recurrent neural networks | We further find that, at least in the setup considered here, there is little performance difference between stochastic and deterministic policies, despite the former being typically presumed to be preferable in partially observed domains. # 2 Background We model our environment as a discrete-time, partially-observed Markov Decision process (POMDP). A POMDP is described by a set of environment states S and a set of actions A, an initial state distribution p_0(s_0), a transition function p(s_{t+1}|s_t, a_t) and a reward function r(s_t, a_t). This underlying MDP is partially observed when the agent is unable to observe the state s_t directly and instead receives observations from the set O which are conditioned on the underlying state p(o_t|s_t). The agent only indirectly observes the underlying state of the MDP through the observations. An optimal agent may, in principle, require access to the entire history h_t = (o_1, a_1, o_2, a_2, . . . , a_{t−1}, o_t). The goal of the agent is thus to learn a policy π(h_t) which maps from the history to a distribution over actions P(A) which maximizes the expected discounted reward (below we consider both stochastic and deterministic policies). For stochastic policies we want to maximise J = E_τ[ Σ_{t=1}^∞ γ^{t−1} r(s_t, a_t) ], (1) where the trajectories τ = (s_1, o_1, a_1, s_2, . . . ) are drawn from the trajectory distribution induced by the policy π: p(s_1)p(o_1|s_1)π(a_1|h_1)p(s_2|s_1, a_1)p(o_2|s_2)π(a_2|h_2) . . . and where h_t is defined as above. For deterministic policies we replace π with a deterministic function μ which maps directly from states S to actions A and we replace a_t ∼ π(·|h_t) with a_t = μ(h_t). In the algorithms below we make use of the action-value function Q^π.
For a fully observed MDP, when we have access to s, the action-value function is defined as the expected future discounted reward when in state s_t the agent takes action a_t and thereafter follows policy π | 1512.04455#3 | 1512.04455#5 | 1512.04455 | [
"1509.03005"
] |
1512.04455#5 | Memory-based control with recurrent neural networks | . Since we are interested in the partially observed case where the agent does not have access to s we instead define Q^π in terms of h: Q^π(h_t, a_t) = E_{s_t|h_t}[r(s_t, a_t)] + E_{τ_{>t}|h_t, a_t}[ Σ_{i=1}^∞ γ^i r(s_{t+i}, a_{t+i}) ], (2) where τ_{>t} = (s_{t+1}, o_{t+1}, a_{t+1}, . . . ) is the future trajectory and the two expectations are taken with respect to the conditionals p(s_t|h_t) and p(τ_{>t}|h_t, a_t) of the trajectory distribution associated with π. Note that this is equivalent to defining Q^π in terms of the belief state since h is a sufficient statistic. Obviously, for most POMDPs of interest, it is not tractable to condition on the entire sequence of observations. A central challenge is to learn how to summarize the past in a scalable way. # 3 Algorithms # 3.1 Recurrent DPG We extend the Deterministic Policy Gradient (DPG) algorithm for MDPs introduced in [24] to deal with partially observed domains and pixels. The core idea of the DPG algorithm for the fully observed case is that for a deterministic policy μ^θ with parameters θ, and given access to the true action-value function associated with the current policy Q^μ, the policy can be updated by backpropagation: | 1512.04455#4 | 1512.04455#6 | 1512.04455 | [
"1509.03005"
] |
1512.04455#6 | Memory-based control with recurrent neural networks | ∂J(θ)/∂θ = E_{s∼ρ^μ}[ ∂Q^μ(s, a)/∂a |_{a=μ^θ(s)} · ∂μ^θ(s)/∂θ ], (3) where the expectation is taken with respect to the (discounted) state visitation distribution ρ^μ induced by the current policy μ^θ [24]. Similar ideas had previously been exploited in NFQCA [4] and in the ADP [13] community. In practice the exact action-value function Q^μ is replaced by an approximate (critic) Q^ω with parameters ω that is differentiable in a and which can be learned e.g. with Q-learning. In order to ensure the applicability of our approach to large observation spaces (e.g. from pixels), we use neural networks for all function approximators. These networks, with convolutional layers, have proven effective at many sensory processing tasks [11, 18], and have been demonstrated to be effective for scaling reinforcement learning to large state spaces [14, 16]. [14] proposed modifications to DPG necessary in order to learn effectively with deep neural networks, which we make use of here (cf. sections 3.1.1, 3.1.2). Under partial observability the optimal policy and the associated action-value function are both functions of the entire preceding observation-action history h_t. The primary change we introduce is the use of recurrent neural networks, rather than feedforward networks, in order to allow the network to learn to preserve (limited) information about the past which is needed in order to solve the POMDP. Thus, writing μ(h) and Q(h, a) rather than μ(s) and Q(s, a) we obtain the following policy update: ∂J(θ)/∂θ = E_τ[ Σ_t ∂Q^μ(h_t, a)/∂a |_{a=μ^θ(h_t)} · ∂μ^θ(h_t)/∂θ ], (4) where we have written the expectation now explicitly over entire trajectories τ = (s_1, o_1, a_1, s_2, o_2, a_2, . . . ) which are drawn from the trajectory distribution induced by the current policy and h_t = (o_1, a_1, . . . , o_{t−1}, a_{t−1}, o_t) is the observation-action trajectory prefix at time step t, both as introduced above^1. | 1512.04455#5 | 1512.04455#7 | 1512.04455 | [
"1509.03005"
] |
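A numeric illustration of the chain-rule update in Eq. (3) on a one-dimensional toy problem: take Q(s, a) = −(a − 2s)² and a linear policy μ_θ(s) = θ·s, so that ∂J/∂θ = ∂Q/∂a|_{a=μ(s)} · ∂μ/∂θ = −2(θs − 2s)·s, and gradient ascent should drive θ toward 2. This toy Q and policy are our own; the paper uses learned neural networks for both:

```python
def dq_da(s, a):
    # Gradient of the toy critic Q(s, a) = -(a - 2s)^2 with respect to a.
    return -2.0 * (a - 2.0 * s)

def dmu_dtheta(s):
    # Gradient of the linear policy mu_theta(s) = theta * s with respect to theta.
    return s

theta = 0.0
states = [0.5, 1.0, 1.5]  # stand-in for samples from the visitation distribution
lr = 0.1
for _ in range(200):
    grad = sum(dq_da(s, theta * s) * dmu_dtheta(s) for s in states) / len(states)
    theta += lr * grad  # gradient ascent on J(theta)
```

The same chain-rule structure appears in Eq. (4), with histories h_t in place of states and BPTT supplying the gradients through the recurrent networks.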
1512.04455#7 | Memory-based control with recurrent neural networks | In practice, as in the fully observed case, we replace Q^μ by a learned approximation Q^ω (which is also a recurrent network with parameters ω). Thus, rather than directly conditioning on the entire observation history, we effectively train recurrent neural networks to summarize this history in their recurrent state using backpropagation through time (BPTT). For ^1 A discount factor γ^t appears implicitly in the update which is absorbed in the discounted state-visitation distribution in eq. 3. In practice we ignore this term, as is often done in policy gradient implementations (e.g. [26]). | 1512.04455#6 | 1512.04455#8 | 1512.04455 | [
"1509.03005"
] |
1512.04455#8 | Memory-based control with recurrent neural networks | long episodes or continuing tasks it is possible to use truncated BPTT, although we do not use this here. The full algorithm is given below (Algorithm 1). RDPG is an algorithm for learning deterministic policies. As discussed in the literature [25, 20] it is possible to construct examples where deterministic policies perform poorly under partial observability. In RDPG the policy is conditioned on the entire history but since we are using function approximation state aliasing may still occur, especially early in learning. We therefore also investigate a recurrent version of the stochastic counterpart to DPG: SVG(0) [6] (DPG can be seen as the deterministic limit of SVG(0)). In addition to learning stochastic policies, SVG(0) also admits on-policy learning whereas DPG is inherently off-policy (see below). Similar to DPG, SVG(0) updates the policy by backpropagation ∂ | 1512.04455#7 | 1512.04455#9 | 1512.04455 | [
"1509.03005"
] |
1512.04455#9 | Memory-based control with recurrent neural networks | Q/∂a from the action-value function, but does so for stochastic policies. This is enabled through a "re-parameterization" (e.g. [10, 19]) of the stochastic policy: The stochastic policy is represented in terms of a fixed, independent noise source and a parameterized deterministic function that transforms a draw from that noise source, i.e., in our case, a = π^θ(h, ν) with ν ∼ β(·) where β is some fixed distribution. For instance, a Gaussian policy π^θ(a|h) = N(a|μ^θ(h), σ²) can be re-parameterized as follows: a = π^θ(h, ν) = μ^θ(h) + σν where ν ∼ N(·|0, 1). See [6] for more details. The stochastic policy is updated as follows: ∂J(θ)/∂θ = E_τ[ Σ_t ∂Q^π(h_t, a)/∂a |_{a=π^θ(h_t, ν_t)} · ∂π^θ(h_t, ν_t)/∂θ ], (5) with τ drawn from the trajectory distribution which is conditioned on IID draws of ν_t from β at each time step. The full algorithm is provided in the supplementary material (Algorithm 2). # 3.1.1 Off-policy learning and experience replay DPG is typically used in an off-policy setting due to the fact that the policy is deterministic but exploration is needed in order to learn the gradient of Q with respect to the actions. | 1512.04455#8 | 1512.04455#10 | 1512.04455 | [
"1509.03005"
] |
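The re-parameterization above can be sketched concretely for the Gaussian case: with the noise ν held fixed, the sampled action is a deterministic, differentiable function of θ, so ∂a/∂θ = ∂μ^θ/∂θ regardless of the noise draw. The linear μ below is a hypothetical stand-in for the recurrent policy network:

```python
import random

random.seed(0)
SIGMA = 0.1

def mu(theta, h):
    # Hypothetical deterministic mean function of a scalar history feature h.
    return theta * h

def sample_action(theta, h):
    # a = mu_theta(h) + sigma * nu, with nu ~ N(0, 1) drawn independently of theta.
    nu = random.gauss(0.0, 1.0)
    return mu(theta, h) + SIGMA * nu, nu

theta, h = 1.5, 2.0
a1, nu = sample_action(theta, h)
# Finite-difference check: with nu fixed, da/dtheta equals dmu/dtheta = h.
a2 = mu(theta + 1e-6, h) + SIGMA * nu
da_dtheta = (a2 - a1) / 1e-6
```

This is what makes the update in Eq. (5) possible: the gradient flows through the deterministic transform while the stochasticity is isolated in the parameter-free noise source.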
1512.04455#10 | Memory-based control with recurrent neural networks | Furthermore, in practice, data efficiency and stability can also be greatly improved by using experience replay (e.g. [4, 5, 14, 16, 6]) and we use the same approach here (see Algorithms 1, 2). Thus, during learning we store experienced trajectories in a database and then replace the expectation in eq. (4) with trajectories sampled from the database. One consequence of this is a bias in the state distribution in eqs. (3, 5) which no longer corresponds to the state distribution induced by the current policy. With function approximation this can lead to a bias in the learned policy, although this is typically ignored in practice. RDPG and RSVG(0) may similarly be affected; in fact, since policies (and Q) are not just a function of the state but of an entire action-observation history (eq. 4), the bias might be more severe. One potential advantage of (R)SVG(0) in this context is that it allows on-policy learning, although we do not explore this possibility here. We found that off-policy learning with experience replay remained effective in the partially observed case. | 1512.04455#9 | 1512.04455#11 | 1512.04455 | [
"1509.03005"
] |
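Because the recurrent networks need whole histories for BPTT, the replay database described above stores complete episodes rather than individual transitions. A minimal episode replay buffer along these lines (the capacity handling and FIFO eviction are our own simplifications, not a detail from the paper):

```python
import random

random.seed(3)

class EpisodeReplay:
    def __init__(self, capacity):
        self.capacity = capacity
        self.episodes = []

    def add(self, episode):
        # episode: list of (o_t, a_t, r_t) tuples for one trajectory
        if len(self.episodes) >= self.capacity:
            self.episodes.pop(0)  # evict the oldest episode
        self.episodes.append(episode)

    def sample(self, n):
        # uniform minibatch of whole episodes, from which histories
        # h_t = (o_1, a_1, ..., o_t) can be reconstructed for BPTT
        return random.sample(self.episodes, n)

buf = EpisodeReplay(capacity=100)
for ep in range(150):
    buf.add([(f"o{ep}", f"a{ep}", float(ep))])

batch = buf.sample(8)
```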
1512.04455#11 | Memory-based control with recurrent neural networks | # 3.1.2 Target networks A second algorithmic feature that has been found to greatly improve the stability of neural-network based reinforcement learning algorithms that rely on bootstrapping for learning value functions is the use of target networks [4, 14, 16, 6]: The algorithm maintains two copies of the value function Q and of the policy π each, with parameters θ and θ′, and ω and ω′ respectively. θ and ω are the parameters that are being updated by the algorithm; θ′ and ω′ track them with some delay and are used to compute the "target values" for the Q function update. Different authors have explored different approaches to updating θ′ and ω′. In this work we use "soft updates" as in [14] (see Algorithms 1 and 2 below). | 1512.04455#10 | 1512.04455#12 | 1512.04455 | [
"1509.03005"
] |
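The "soft update" rule used here, θ′ ← τθ + (1 − τ)θ′ (and likewise for ω′), amounts to an exponential moving average of the online parameters. A sketch with plain lists standing in for network weights (τ = 0.01 is an assumed value for illustration):

```python
TAU = 0.01

def soft_update(target, source, tau=TAU):
    # target' = tau * source + (1 - tau) * target, element-wise
    return [tau * s + (1.0 - tau) * t for t, s in zip(target, source)]

theta = [1.0, -2.0]        # online parameters (updated by the learner)
theta_prime = [0.0, 0.0]   # target parameters (track the online ones slowly)
for _ in range(500):
    theta_prime = soft_update(theta_prime, theta)
```

With a small τ the target parameters lag the online ones, which keeps the bootstrap targets y_t changing slowly and stabilizes the critic updates.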
1512.04455#12 | Memory-based control with recurrent neural networks | # Algorithm 1 RDPG algorithm Initialize critic network Q^ω(a_t, h_t) and actor μ^θ(h_t) with parameters ω and θ. Initialize target networks Q^{ω′} and μ^{θ′} with weights ω′ ← ω, θ′ ← θ. Initialize replay buffer R. for episodes = 1, M do: initialize empty history h_0; for t = 1, T do: receive observation o_t; h_t ← (h_{t−1}, a_{t−1}, o_t) (append observation and previous action to history); select action a_t = μ^θ(h_t) + ε (with ε: exploration noise); end for. Store the sequence (o_1, a_1, r_1, . . . , o_T, a_T, r_T) in R. Sample a minibatch of N episodes (o^i_1, a^i_1, r^i_1, . . . , o^i_T, a^i_T, r^i_T), i = 1, . . . , N, from R. Construct histories h^i_t = (o^i_1, a^i_1, . . . , a^i_{t−1}, o^i_t). Compute target values for each sample episode (y^i_1, . . . , y^i_T) using the recurrent target networks: y^i_t = r^i_t + γ Q^{ω′}(h^i_{t+1}, μ^{θ′}(h^i_{t+1})) | 1512.04455#11 | 1512.04455#13 | 1512.04455 | [
"1509.03005"
] |
1512.04455#13 | Memory-based control with recurrent neural networks | Compute the critic update (using BPTT): Δω = (1/NT) Σ_i Σ_t (y^i_t − Q^ω(h^i_t, a^i_t)) ∂Q^ω(h^i_t, a^i_t)/∂ω. Compute the actor update (using BPTT): Δθ = (1/NT) Σ_i Σ_t ∂Q^ω(h^i_t, μ^θ(h^i_t))/∂a · ∂μ^θ(h^i_t)/∂θ. Update actor and critic using Adam [9]. Update the target networks: ω′ ← τω + (1 − τ)ω′, θ′ ← τθ + (1 − τ)θ′. end for # 4 Results We tested our algorithms on a variety of partially observed environments, covering different types of memory problems. Videos of the learned policies for all the domains are included in our supplementary videos^2; we encourage viewing them as these may provide a better intuition for the environments. All physical control problems except the simulated water maze (section 4.3) were simulated in MuJoCo [28]. We tested both standard recurrent networks as well as LSTM networks. | 1512.04455#12 | 1512.04455#14 | 1512.04455 | [
"1509.03005"
] |
1512.04455#14 | Memory-based control with recurrent neural networks | # 4.1 Sensor integration and system identification Physical control problems with noisy sensors are one of the paradigm examples of partially-observed environments. A large amount of research has focused on how to efficiently integrate noisy sensory information over multiple timesteps in order to derive accurate estimates of the system state, or to estimate derivatives of important properties of the system [27]. Here, we consider two simple, standard control problems often used in reinforcement learning, the under-actuated pendulum and cartpole swing-up. We modify these standard benchmark tasks such that in both cases the agent receives no direct information about the velocity of any of the components, i.e. for the pendulum swing-up task the observation comprises only the angle of the pendulum, and ^2 Video of all the learned policies is available at https://youtu.be/V4_vb1D5NNQ | 1512.04455#13 | 1512.04455#15 | 1512.04455 | [
"1509.03005"
] |
1512.04455#15 | Memory-based control with recurrent neural networks | Figure (1) (a) The reward curve for the partially-observed pendulum task. Both RDPG and RSVG(0) are able to learn policies which bring the pendulum to an upright position. (b) The reward curve for the cartpole with no velocity information and varying cartpole lengths. RDPG, with LSTM, is able to reliably learn a good solution for this task; a purely feedforward agent (DDPG), which will not be able to estimate velocities nor to infer the pole length, is not able to solve the problem. | 1512.04455#14 | 1512.04455#16 | 1512.04455 | [
"1509.03005"
] |
1512.04455#16 | Memory-based control with recurrent neural networks | Figure 2: Reward curves for the (a) hidden target reacher task, and (b) return to start gripper task. In both cases the RDPG agents with LSTMs are able to find good policies whereas the feedforward agents fail on the memory component. (In both cases the feedforward agents perform clearly better than random, which is expected from the setup of the tasks: For instance, as can be seen in the video, the gripper without memory is still able to grab the payload and move it to a "default" position.) Example frames from the 3 joint reaching task (c) and the gripper task (d). | 1512.04455#15 | 1512.04455#17 | 1512.04455 | [
"1509.03005"
] |