A Survey of Available Corpora for Building Data-Driven Dialogue Systems (arXiv:1512.05742)

New South" city at the beginning of the 21st century, that could be used as a resource for linguistic analysis. It was originally released as one of several collections in the New South Voices corpus, which otherwise contained mostly oral histories. Information on speaker age and gender in the CNCC is included in the header for each transcript.

# 4.2.2 CONSTRAINED SPOKEN CORPORA

Next, we discuss domains in which conversations only occur about a particular topic, or are intended to solve a specific task. Not only is the topic of the conversation specified beforehand, but participants are discouraged from deviating off-topic. As a result, these corpora are slightly less general than their spontaneous counterparts; however, they may be useful for building goal-oriented dialogue systems. As discussed in Subsection 3.3, this may also make the conversations less natural. We can further subdivide this category into the types of topics they cover: path-finding or planning tasks, persuasion tasks or debates, Q&A or information retrieval tasks, and miscellaneous topics.
Collaborative Path-Finding or Planning Tasks

Several corpora focus on task planning or path-finding through the collaboration of two interlocutors. In these corpora, typically one person acts as the decision maker and the other acts as the observer. A well-known example of such a dataset is the HCRC Map Task Corpus (Anderson et al., 1991), which consists of unscripted, task-oriented dialogues that have been digitally recorded and transcribed. The corpus uses the Map Task (Brown et al., 1984), where participants must collaborate verbally to reproduce a route from one participant's map on the map of another participant. The corpus is fairly small, but it controls for the familiarity between speakers, eye contact between speakers, matching between landmarks on the participants' maps, opportunities for contrastive stress, and phonological characteristics of landmark names. By adding these controls, the dataset attempts to focus solely on the dialogue and human speech involved in the planning process.

The Walking Around Corpus (Brennan et al., 2013) consists of 36 dialogues between people communicating over mobile telephone. The dialogues have two parts: first, a "stationary partner" is asked to direct a "mobile partner" to find 18 destinations on a medium-sized university campus. The stationary partner is equipped with a map marked with the target destinations accompanied by photos of the locations, while the mobile partner is given a GPS navigation system and a camera to take photos. In the second part, the participants are asked to interact in person in order to duplicate the photos taken by the mobile partner. The goal of the dataset is to provide a testbed for natural lexical entrainment, and to be used as a resource for pedestrian navigation applications.

The TRAINS 93 Dialogues Corpus (Heeman and Allen, 1995) consists of recordings of two interlocutors interacting to solve various planning tasks for scheduling train routes and arranging railroad freight. One user acts the role of a planning assistant system and the other acts as the coordinator. This was not done in a Wizard-of-Oz fashion, and as such it is not considered a human-machine corpus. 34 different interlocutors were asked to complete 20 different tasks such as: "Determine the maximum number of boxcars of oranges that you could get to Bath by 7 AM tomorrow morning. It is now 12 midnight." The person playing the role of the planning assistant was provided with access to the information needed to solve the task. Also included in the dataset is the information available to both users, the length of each dialogue, and the speaker and "system" interlocutor identities.

The Verbmobil Corpus (Burger et al., 2000) is a multilingual corpus consisting of English, German, and Japanese dialogues collected for the purposes of training and testing the Verbmobil project system. The system was designed for speech-to-speech machine translation tasks. Dialogues were recorded in a variety of conditions and settings with room microphones, telephones, or close microphones, and were subsequently transcribed. Users were tasked with planning and scheduling an appointment throughout the course of the dialogue. Note that while there have been several versions of the Verbmobil corpora released, we refer to the entire collection here as described in (Burger et al., 2000). Dialogue acts were annotated in a subset of the corpus (1,505 mixed dialogues in German, English, and Japanese); 76,210 acts were annotated with 32 possible categories of dialogue acts (Alexandersson et al., 2000).4

4. Note, this information and further facts about the Verbmobil project and corpus can be found here: http://verbmobil.dfki.de/facts.html

Persuasion and Debates

Another theme recurring among constrained spoken corpora is the appearance of persuasion or debate tasks. These can involve general debates on a topic, or tasking a specific interlocutor to try to convince another interlocutor of some opinion or topic. Generally, these datasets record the outcome of how convinced the audience is of the argument at the end of the dialogue or debate.

The Green Persuasive Dataset (Douglas-Cowie et al., 2007) was recorded in 2007 to provide data for the HUMAINE project, whose goal is to develop interfaces that can register and respond to emotion. In the dataset, a persuader with strong pro-environmental ("pro-green") feelings tries to convince persuadees to consider adopting more green lifestyles; these interactions are in the form of dialogues.
It contains 8 long dialogues, totalling about 30 minutes each. Since the persuadees often either disagree or agree strongly with the persuader's points, this would be a good corpus for studying social signs of (dis)agreement between two people.

The MAHNOB Mimicry Database (Sun et al., 2011) contains 11 hours of recordings, split over 54 sessions between 60 people engaged either in a socio-political discussion or negotiating a tenancy agreement. This dataset consists of a set of fully synchronised audio-visual recordings of natural dyadic (one-on-one) interactions. It is one of several dialogue corpora that provide multi-modal data for analyzing human behaviour during conversations. Such corpora often consist of auditory, visual, and written transcriptions of the dialogues; here, only audio-visual recordings are provided. The purpose of the dataset was to analyze mimicry (i.e., when one participant mimics the verbal and nonverbal expressions of their counterpart). The authors provide some benchmark video classification models to this effect.

The Intelligence Squared Debate Dataset (Zhang et al., 2016) covers the "Intelligence Squared" Oxford-style debates taking place between 2006 and 2015. The topics of the debates vary across the dataset, but are constrained within the context of each debate. Speakers are labeled and the full transcript of each debate is provided. Furthermore, the outcome of the debate is provided (how many of the audience members were for or against the given proposal, before and after the debate).

QA or Information Retrieval

There are several corpora which feature direct question-and-answering sessions. These may involve general QA, such as in a press conference, or more task-specific lines of questioning intended to retrieve a specific set of information.

The Corpus of Professional Spoken American English (CPSAE) (Barlow, 2000) was constructed using a selection of transcripts of interactions occurring in professional settings. The corpus contains two million words involving over 400 speakers, recorded between 1994 and 1998. The CPSAE has two main components.
The first is a collection of transcripts (0.9 million words) of White House press conferences, which contains almost exclusively question and answer sessions, with some policy statements by politicians. The second component consists of transcripts (1.1 million words) of faculty meetings and committee meetings related to national tests that involve statements, discussions, and questions. The creation of the corpus was motivated by the desire to understand and model more formal uses of the English language.

As previously mentioned, the Dialog State Tracking Challenge (DSTC) consists of a series of datasets evaluated using a "state tracking" or "slot filling" metric. While the first 3 installments of this challenge had conversations between a human participant and a computer, DSTC4 (Kim et al., 2015) contains dialogues between humans. In particular, this dataset has 35 conversations with 21 hours of interactions between tourists and tour guides over Skype, discussing information on hotels, flights, and car rentals. Due to the small size of the dataset, researchers were encouraged to use transfer learning from other datasets in the DSTC in order to improve state tracking performance. This same training set is used for DSTC5 (Kim et al., 2016) as well. However, the goal of DSTC5 is to study multi-lingual speech-act prediction, and therefore it combines the DSTC4 dialogues with a set of equivalent Chinese dialogues; evaluation is done on a holdout set of Chinese dialogues.

Miscellaneous

Lastly, there are several corpora which do not fall into any of the aforementioned categories, involving a range of tasks and situations.

The IDIAP Wolf Corpus (Hung and Chittaranjan, 2010) is an audio-visual corpus containing natural conversational data of volunteers who took part in an adversarial role-playing game called "Werewolf". Four groups of 8-12 people were recorded using headset microphones and synchronised video cameras, resulting in over 7 hours of conversational data. The novelty of this dataset is that the roles of other players are unknown to game participants, and some of the roles are deceptive in nature. Thus, there is a significant amount of lying that occurs during the game. Although specific instances of lying are not annotated, each speaker is labeled with their role in the game. In a dialogue setting, this could be useful for analyzing the differences in language when deception is being used.
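The slot-filling "state tracking" evaluation used in the DSTC corpora described above can be illustrated with a minimal sketch. The slot names and the last-value-wins update rule below are invented for illustration and are not the actual DSTC schema:

```python
# Minimal dialogue-state tracker: the state is a dict of slot -> value,
# updated after each user turn. Slot names here are hypothetical.
def update_state(state, turn_slots):
    """Merge the slots mentioned in one turn into the running dialogue state."""
    new_state = dict(state)
    new_state.update(turn_slots)
    return new_state

def track(turns):
    """Run the tracker over a whole dialogue and return the final state."""
    state = {}
    for turn_slots in turns:
        state = update_state(state, turn_slots)
    return state

dialogue = [
    {"destination": "Singapore"},
    {"hotel_price": "moderate"},
    {"destination": "Sentosa"},   # later turns override earlier values
]
final = track(dialogue)
# final == {"destination": "Sentosa", "hotel_price": "moderate"}
```

A tracker is then scored by comparing its predicted state to the annotated state at each turn.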
The SEMAINE Corpus (McKeown et al., 2010) consists of 100 "emotionally coloured" conversations. Participants held conversations with an operator who adopted various roles designed to evoke emotional reactions. These conversations were recorded with synchronous video and audio devices. Importantly, the operators' responses were stock phrases that were independent of the content of the user's utterances, and only dependent on the user's emotional state. This corpus motivates building dialogue systems with affective and emotional intelligence abilities, since the corpus does not exhibit the natural language understanding that normally occurs between human interlocutors.

The Loqui Human-Human Dialogue Corpus (Passonneau and Sachar, 2014) consists of annotated transcriptions of telephone interactions between patrons and librarians at New York City's Andrew Heiskell Braille & Talking Book Library in 2006. It stands out as it has annotated discussion topics, question-answer pair links (adjacency pairs), dialogue acts, and frames (discourse units). Similarly, the ICSI Meeting Recorder Dialog Act (MRDA) Corpus (Shriberg et al., 2004) has annotated dialogue acts, question-answer pair links (adjacency pairs), and dialogue hot spots.5 It consists of transcribed recordings of 75 ICSI meetings on several classes of topics including: the ICSI meeting recorder project itself, automatic speech recognition, natural language processing and neural theories of language, and discussions with the annotators for the project.

5. For more information on dialogue hot spots and how they relate to dialogue acts, see (Wrede and Shriberg, 2003).

# 4.2.3 SCRIPTED CORPORA

A final category of spoken dialogue consists of conversations that have been pre-scripted for the purpose of being spoken later. We refer to datasets containing such conversations as "scripted corpora". As discussed in Subsection 3.4, these datasets are distinct from spontaneous human-human conversations, as they inevitably contain fewer "filler" words and expressions that are common in spoken dialogue. However, they should not be confused with human-human written dialogues, as they are intended to sound like natural spoken conversations when read aloud by the participants. Furthermore, these scripted dialogues are required to be dramatic, as they are generally sourced from movies or TV shows.

There exist multiple scripted corpora based on movies and TV series. These can be sub-divided into two categories: corpora that provide the actual scripts (i.e. the movie script or TV series script), where each utterance is tagged with the appropriate speaker, and those that only contain subtitles, where consecutive utterances are not divided or labeled in any way. It is always preferable to have the speaker labels, but there is significantly more unlabeled subtitle data available, and both sources of information can be leveraged to build a dialogue system.

The Movie DiC Corpus (Banchs, 2012) is an example of the former case: it contains about 130,000 dialogues and 6 million words from movie scripts extracted from the Internet Movie Script Data Collection,6 carefully selected to cover a wide range of genres. These dialogues also come with context descriptions, as written in the script. One derivation based on this corpus is the Movie Triples Dataset (Serban et al., 2016). There are also the American Film Scripts Corpus and the Film Scripts Online Corpus, which together form the Film Scripts Online Series Corpus, which can be purchased.7 The latter consists of a mix of British and American film scripts, while the former consists of solely American films. The majority of these datasets consist mostly of raw scripts, which are not guaranteed to portray conversations between only two people.

The dataset collected by Nio et al. (2014b), which we refer to as the Filtered Movie Script Corpus, takes over 1 million utterance-response pairs from web-based script resources and filters them down to 86,000 such pairs. The filtering method limits the extracted utterances to X-Y-X triples, where X is spoken by the same actor and the X utterances share some semantic similarity. These triples are then decomposed into X-Y and Y-X pairs.
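The triple-decomposition step just described can be sketched as follows. The semantic-similarity check is stubbed out here, since Nio et al.'s actual criterion is not reproduced in this survey:

```python
def decompose_triples(triples, similar=lambda a, b: True):
    """Turn X-Y-X triples (both X turns spoken by the same actor) into
    utterance-response pairs, keeping only triples whose X utterances pass
    a similarity check. The `similar` stub stands in for the semantic
    similarity filter used by Nio et al. (2014b)."""
    pairs = []
    for x1, y, x2 in triples:
        if similar(x1, x2):
            pairs.append((x1, y))  # X-Y pair
            pairs.append((y, x2))  # Y-X pair
    return pairs

triples = [("Where were you?", "Out walking.", "Where, though?")]
print(decompose_triples(triples))
# [('Where were you?', 'Out walking.'), ('Out walking.', 'Where, though?')]
```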
Such filtering largely removes conversations with more than two speakers, which could be useful in some applications. In particular, the filtering method helps to retain semantic context in the dialogue and keeps the back-and-forth conversational flow that is desired in training many dialogue systems.

The Cornell Movie-Dialogue Corpus (Danescu-Niculescu-Mizil and Lee, 2011) also has short conversations extracted from movie scripts. The distinguishing feature of this dataset is the amount of metadata available for each conversation: this includes movie metadata such as genre, release year, and IMDB rating, as well as character metadata such as gender and position in the movie credits. Although this corpus contains 220,000 dialogue excerpts, it only contains 300,000 utterances; thus, many of the excerpts consist of single utterances.

The Corpus of American Soap Operas (Davies, 2012b) contains 100 million words in more than 22,000 transcripts of ten American soap operas from 2001 to 2012. Because it is based on soap operas, it is qualitatively different from the Movie DiC Corpus, which contains movies in the action and horror genres. The corpus was collected to provide insights into colloquial American speech, as the vocabulary usage is quite different from the British National Corpus (Davies, 2012a). Unfortunately, this corpus does not come with speaker labels.

Another corpus consisting of dialogues from TV shows is the TVD Corpus (Roy et al., 2014). This dataset consists of 191 transcripts from the comedy show The Big Bang Theory and the drama show Game of Thrones, along with crowd-sourced text descriptions (brief episode summaries, longer episode outlines) and various types of metadata (speakers, shots, scenes). Text alignment algorithms are used to link descriptions and metadata to the appropriate sections of each script. For example, one might align an event description with all the utterances associated with that event in order to develop algorithms for locating specific events from raw dialogue, such as "person X tries to convince person Y".
Some work has been done to analyze character style from movie scripts. This is aided by a dataset collected by Walker et al. (2012a) that we refer to as the Character Style from Film Corpus. This corpus was collected from the IMSDb archive, and is annotated for linguistic structures and character archetypes. Features, such as the sentiment behind the utterances, are automatically extracted and used to derive models of the characters in order to generate new utterances similar in style to those spoken by the character. Thus, this dataset could be useful for building dialogue personalization models.

6. http://www.imsdb.com
7. http://alexanderstreet.com/products/film-scripts-online-series

There are two primary movie subtitle datasets: the OpenSubtitles Corpus (Tiedemann, 2012) and the SubTle Corpus (Ameixa and Coheur, 2013). Both corpora are based on the OpenSubtitles website.8 The OpenSubtitles dataset is a giant collection of movie subtitles, containing over 1 billion words, whereas the SubTle Corpus has been pre-processed in order to extract interaction-response pairs that can help dialogue systems deal with out-of-domain (OOD) interactions.

The Corpus of English Dialogues 1560-1760 (CED) (Kytö and Walker, 2006) compiles dialogues from the mid-16th century until the mid-18th century. The sources vary from real trial transcripts to fiction dialogues. Due to the scripted nature of fictional dialogues and the fact that the majority of the corpus consists of fictional dialogue, we classify it here as such. The corpus is composed as follows: trial proceedings (285,660 words), witness depositions (172,940 words), drama comedy works (238,590 words), didactic works (236,640 words), prose fiction (223,890 words), and miscellaneous (25,970 words).

# 4.3 Human-Human Written Corpora

We proceed to survey corpora of conversations between humans in written form. As before, we sub-divide this section into spontaneous and constrained corpora, depending on whether there are restrictions on the topic of conversation. However, we make a further distinction between forum, micro-blogging, and chat corpora.
Forum corpora consist of conversations on forum-based websites such as Reddit,9 where users can make posts, and other users can make comments or replies to said post. In some cases, comments can be nested indefinitely, as users make replies to previous replies. Utterances in forum corpora tend to be longer, and there is no restriction on the number of participants in a discussion. On the other hand, conversations on micro-blogging websites such as Twitter10 tend to have very short utterances, as there is an upper bound on the number of characters permitted in each message. As a result, these tend to exhibit highly colloquial language with many abbreviations. The identifying feature of chat corpora is that the conversations take place in real time between users. Thus, these conversations share more similarities with spoken dialogue between humans, such as common grounding phenomena.

8. http://www.opensubtitles.org
9. http://www.reddit.com
10. http://www.twitter.com

# 4.3.1 SPONTANEOUS WRITTEN CORPORA

We begin with written corpora where the topic of conversation is not pre-specified. Such is the case for the NPS Internet Chatroom Conversations Corpus (Forsyth and Martell, 2007), which consists of 10,567 English utterances gathered from age-specific chat rooms of various online chat services from October and November of 2006. Each utterance was annotated with part-of-speech and dialogue act information; the correctness of this was verified manually. The NPS Internet Chatroom Conversations Corpus was one of the first corpora of computer-mediated communication (CMC), and it was intended for various NLP applications such as conversation thread topic detection, author profiling, entity identification, and social network analysis.

Several corpora of spontaneous micro-blogging conversations have been collected, such as the Twitter Corpus from Ritter et al. (2010), which contains 1.3 million post-reply pairs extracted from Twitter. The corpus was originally constructed to aid in the production of unsupervised approaches to modeling dialogue acts. Larger Twitter corpora have also been collected. The Twitter Triples Corpus (Sordoni et al., 2015b) is one such example, with a described original dataset of 127 million context-message-response triples, but only a small labeled subset of this corpus has been released.
Specifically, the released labeled subset contains 4,232 pairs that scored an average of greater than 4 on the Likert scale from crowdsourced evaluators for quality of the response to the context-message pair. Similarly, large micro-blogging corpora such as the Sina Weibo Corpus (Shang et al., 2015), which contains 4.5 million post-reply pairs, have been collected; however, this corpus has not yet been made publicly available. We do not include the Sina Weibo Corpus (and its derivatives) in the tables in this section, as they are not primarily in English.

The Usenet Corpus (Shaoul and Westbury, 2009) is a gigantic collection of public Usenet postings11 containing over 7 billion words from October 2005 to January 2011. Usenet was a distributed discussion system established in 1980 where participants could post articles to one of 47,860 "newsgroup" categories. It is seen as the precursor to many current Internet forums. The corpus derived from these posts has been used for research in collaborative filtering (Konstan et al., 1997) and role detection (Fisher et al., 2006).

The NUS SMS Corpus (Chen and Kan, 2013) consists of conversations carried out over mobile phone SMS messages between two users. The original purpose of the dataset was to improve predictive text entry, aided by video and timing analysis of users entering their messages, at a time when mobile phones still mapped multiple letters to a single number; however, it could equally be used for analysis of informal dialogue. Unfortunately, the corpus does not consist of dialogues, but rather single SMS messages. SMS messages are similar in style to Twitter messages, in that they use many abbreviations and acronyms.
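The kind of quality filter applied to the Twitter Triples Corpus above, keeping only triples whose responses average above 4 on a crowdsourced Likert scale, amounts to a simple threshold on the mean rating. The data structure below is illustrative, not the corpus's actual release format:

```python
def filter_by_likert(examples, threshold=4.0):
    """Keep context-message-response triples whose mean crowdsourced Likert
    rating strictly exceeds the threshold. `examples` is a list of
    (triple, ratings) pairs; this structure is a hypothetical layout."""
    kept = []
    for triple, ratings in examples:
        if sum(ratings) / len(ratings) > threshold:
            kept.append(triple)
    return kept

data = [
    (("ctx", "msg", "good reply"), [5, 4, 5]),   # mean ~4.67 -> kept
    (("ctx", "msg", "weak reply"), [3, 4, 4]),   # mean ~3.67 -> dropped
]
print(filter_by_likert(data))   # [('ctx', 'msg', 'good reply')]
```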
Currently, one of the most popular forum-based websites is Reddit,12 where users can create discussions and post comments in various sub-forums called "subreddits". Each subreddit addresses its own particular topic. Over 1.7 billion of these comments have been collected in the Reddit Corpus.13 Each comment is labeled with the author, score (rating from other users), and position in the comment tree; the position is important as it determines which comment is being replied to. Although researchers have not yet investigated dialogue problems using this Reddit discussion corpus, the sheer size of the dataset renders it an interesting candidate for transfer learning. Additionally, researchers have used smaller collections of Reddit discussions for broad discourse classification (Schrading et al., 2015).

Some more curated versions of the Reddit dataset have been collected. The Reddit Domestic Abuse Corpus (Schrading et al., 2015) consists of Reddit posts and comments taken either from subreddits specific to domestic abuse, or from subreddits representing casual conversations, advice, and general anxiety or anger. The motivation is to build classifiers that can detect occurrences of domestic abuse in other areas, which could provide insights into the prevalence and consequences of these situations. These conversations have been pre-processed with lower-casing, lemmatizing, and removal of stopwords, and semantic role labels are provided.

11. http://www.usenet.net
12. http://www.reddit.com
13. https://www.reddit.com/r/datasets/comments/3bxlg7/i_have_every_publicly_available_reddit_comment/

# 4.3.2 CONSTRAINED WRITTEN CORPORA

There are also several written corpora where users are limited in terms of topics of conversation. For example, the Settlers of Catan Corpus (Afantenos et al., 2012) contains logs of 40 games of "Settlers of Catan", with about 80,000 total labeled utterances. The game is played with up to 4 players, and is predicated on trading certain goods between players. The goal of the game is to be the first player to achieve a pre-specified number of points. Therefore, the game is adversarial in nature, and can be used to analyze situations of strategic conversation where the agents have diverging motives.

Another corpus that deals with game playing is the Cards Corpus (Djalali et al., 2012), which consists of 1,266 transcripts of conversations between players playing a game in the "Cards world". This world is a simple 2-D environment where players collaborate to collect cards. The goal of the game is to collect six cards of a particular suit (cards in the environment are only visible to a player when they are near the location of that player), or to determine that this goal is impossible in the environment. The catch is that each player can only hold 3 cards, so players must collaborate in order to achieve the goal. Further, each player's location is hidden from the other player, and there are a fixed number of non-chatting moves. Thus, players must use the chat to formulate a plan, rather than exhaustively exploring the environment themselves.
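Since each comment in the Reddit Corpus described above records its position in the comment tree (i.e. which comment it replies to), a comment dump can be unrolled into linear context-response chains. A minimal sketch, with field names assumed rather than taken from the dump's actual schema:

```python
def reply_chain(comments, leaf_id):
    """Walk parent links from a leaf comment up to the root, returning the
    exchange in chronological order. `comments` maps a comment id to a dict
    with (assumed) 'parent_id' and 'body' fields."""
    chain = []
    node = comments.get(leaf_id)
    while node is not None:
        chain.append(node["body"])
        node = comments.get(node["parent_id"])
    return list(reversed(chain))

comments = {
    "c1": {"parent_id": None, "body": "Original post"},
    "c2": {"parent_id": "c1", "body": "First reply"},
    "c3": {"parent_id": "c2", "body": "Reply to the reply"},
}
print(reply_chain(comments, "c3"))
# ['Original post', 'First reply', 'Reply to the reply']
```

Each root-to-leaf path in the tree yields one such chain, which is the usual way tree-structured forum data is turned into dialogue training examples.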
The dataset has been further annotated by Potts (2012) to collect all locative question-answer pairs (i.e. all questions of the form "Where are you?").

The Agreement by Create Debaters Corpus (Rosenthal and McKeown, 2015), the Agreement in Wikipedia Talk Pages Corpus (Andreas et al., 2012), and the Internet Argument Corpus (Abbott et al., 2016) all cover dialogues with annotations measuring levels of agreement or disagreement in responses to posts in various media. The Agreement by Create Debaters Corpus and the Agreement in Wikipedia Talk Pages Corpus are both formatted in the same way: post-reply pairs are annotated with whether they are in agreement or disagreement, as well as the type of agreement where applicable (e.g. paraphrasing). The difference between the two corpora is the source: the former is collected from Create Debate forums and the latter from a mix of Wikipedia Discussion pages and LiveJournal postings. The Internet Argument Corpus (IAC) (Walker et al., 2012b) is a forum-based corpus with 390,000 posts on 11,000 discussion topics. Each topic is controversial in nature, including subjects such as evolution, gay marriage, and climate change; users participate by sharing their opinions on one of these topics. Post-reply pairs have been labeled as being either in agreement or disagreement, and sarcasm ratings are given to each post.
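Extracting locative question-answer pairs of the kind Potts (2012) annotated could be approximated by pattern-matching adjacent turns. The pattern below is a crude stand-in for the actual annotation procedure, which is not described in this survey:

```python
import re

# Crude stand-in for a locative-question detector: flag turns that start
# with "where" and end in a question mark, and pair each with the
# immediately following turn as its candidate answer.
LOCATIVE = re.compile(r"^\s*where\b.*\?\s*$", re.IGNORECASE)

def locative_qa_pairs(turns):
    pairs = []
    for q, a in zip(turns, turns[1:]):
        if LOCATIVE.match(q):
            pairs.append((q, a))
    return pairs

chat = ["Where are you?", "Top left corner.", "Got a heart card?", "No."]
print(locative_qa_pairs(chat))
# [('Where are you?', 'Top left corner.')]
```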
Another source of constrained text-based corpora are chat-room environments. Such a set-up forms the basis of the MPC Corpus (Shaikh et al., 2010), which consists of 14 multi-party dialogue sessions of approximately 90 minutes each. In some cases, discussion topics were constrained to be about certain political stances, or mock committees for choosing job candidates. An interesting feature is that different participants are given different roles (leader, disruptor, and consensus builder), with only a general outline of their goals in the conversation. Thus, this dataset could be used to model social phenomena such as agenda control, influence, and leadership in online interactions.
1512.05742#90 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | 28 The largest written corpus with a constrained topic is the recently released Ubuntu Dialogue Corpus (Lowe et al., 2015a), which has almost 1 million dialogues of 3 turns or more, and 100 It is related to the former Ubuntu Chat Corpus (Uthus and Aha, 2013). Both million words. corpora were scraped from the Ubuntu IRC channel logs.14 On this channel, users can log in and ask a question about a problem they are having with Ubuntu; these questions are answered by other users. Although the chat room allows everyone to chat with each other in a multi-party setting, the Ubuntu Dialogue Corpus uses a series of heuristics to disentangle it into dyadic dialogue. The technical nature and size of this corpus lends itself particularly well to applications in technical support. Other corpora have been extracted from IRC chat logs. The IRC Corpus (Elsner and Charniak, 2008) contains approximately 50 hours of chat, with an estimated 20,000 utterances from the Linux channel on IRC, complete with the posting times. Therefore, this dataset consists of similarly technical conversations to the Ubuntu Corpus, with the occasional social chat. The purpose of this dataset was to investigate approaches for conversation disentanglement; given a multi-party chat room, one attempts to recover the individual conversations of which it is composed. For this purpose, there are approximately 1,500 utterances with annotated ground-truth conversations. More recent efforts have combined traditional conversational corpora with question answering and recommendation datasets in order to facilitate the construction of goal-driven dialogue systems. Such is the case for the Movie Dialog Dataset (Dodge et al., 2015). There are four tasks that the authors propose as a prerequisite for a working dialogue system: question answering, recommenda- tion, question answering with recommendation, and casual conversation. 
The Movie Dialog dataset consists of four sub-datasets used for training models to complete these tasks: a QA dataset from the Open Movie Database (OMDb)15 of 116k examples with accompanying movie and actor metadata in the form of knowledge triples; a recommendation dataset from MovieLens16 with 110k users and 1M questions; a combined recommendation and QA dataset with 1M conversations of 6 turns each; and a discussion dataset from Reddit's movie subreddit. | 1512.05742#89 | 1512.05742#91 | 1512.05742 | [
"1511.06931"
] |
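The disentangling step described above can be illustrated with a minimal sketch. The addressee rule below (treating an utterance of the form "name: message" as a reply to that user) is a simplifying assumption for illustration, not the actual heuristics used to build the Ubuntu Dialogue Corpus:

```python
# Minimal sketch of name-mention disentanglement for a multi-party IRC log.
# Assumption (hypothetical rule): an utterance written as "bob: ..." replies
# to user bob; utterances without an explicit addressee are skipped.

def disentangle(log):
    """Group (speaker, text) utterances into dyadic threads keyed by user pair."""
    threads = {}
    for speaker, text in log:
        if ":" in text:
            addressee, _, rest = text.partition(":")
            key = tuple(sorted((speaker, addressee.strip())))
            threads.setdefault(key, []).append((speaker, rest.strip()))
    return threads

log = [
    ("alice", "bob: my wifi driver crashes after the update"),
    ("carol", "dave: which mirror are you using?"),
    ("bob", "alice: try reinstalling the driver package"),
]
threads = disentangle(log)
```

On the toy log, the alice/bob exchange is grouped into one dyadic thread and the carol/dave turn into another; real disentanglement must also handle utterances with no explicit addressee, which is where most extraction errors arise.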
1512.05742#91 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | The former is evaluated using recall metrics in a manner similar to Lowe et al. (2015a). It should be noted that, other than the Reddit dataset, the dialogues in the sub-datasets are simulated QA pairs, where each response corresponds to a list of entities from the knowledge base. # 5. Discussion We conclude by discussing a number of general issues related to the development and evaluation of data-driven dialogue systems. We also discuss alternative sources of information, user personalization, and automatic evaluation methods. # 5.1 Challenges of Learning from Large Datasets Recently, several large-scale dialogue datasets have been proposed in order to train data-driven dialogue systems; the Twitter Corpus (Ritter et al., 2010) and the Ubuntu Dialogue Corpus (Lowe et al., 2015a) are two examples. | 1512.05742#90 | 1512.05742#92 | 1512.05742 | [
"1511.06931"
] |
1512.05742#92 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | In this section, we discuss the benefits and drawbacks of these datasets based on our experience using them for building data-driven models. Unlike the previous section, we now focus explicitly on aspects of high relevance for using these datasets for learning dialogue strategies. (Footnotes: 14. http://irclogs.ubuntu.com 15. http://en.omdb.org 16. http://movielens.org) # 5.1.1 THE TWITTER CORPUS The Twitter Corpus consists of a series of conversations extracted from tweets. While the dataset is large and general-purpose, the micro-blogging nature of the source material leads to several drawbacks for building conversational dialogue agents. However, some of these drawbacks do not apply if the end goal is to build an agent that interacts with users on the Twitter platform. The Twitter Corpus has an enormous amount of typos, slang, and abbreviations. Due to the 140-character limit, tweets are often very short and compressed. In addition, users frequently use Twitter-specific devices such as hashtags. Unless one is building a dialogue agent specifically for Twitter, it is often not desirable to have a chatbot use hashtags and excessive abbreviations, as this is not reflective of how humans converse in other environments. This also results in a significant increase in the word vocabulary required for dialogue systems trained at the word level. As such, it is not surprising that character-level models have shown promising results on Twitter (Dhingra et al., 2016). Twitter conversations often contain various kinds of verbal role-playing and imaginative actions similar to stage directions in theater plays (e.g. instead of writing " | 1512.05742#91 | 1512.05742#93 | 1512.05742 | [
"1511.06931"
] |
1512.05742#93 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | goodbye", a user might write "*waves goodbye and leaves*"). These conversations are very different from the majority of text-based chats. Therefore, dialogue models trained on this dataset are often able to provide interesting and accurate responses to contexts involving role-playing and imaginative actions (Serban et al., 2017b). Another challenge posed by Twitter is that Twitter conversations often refer to recent public events outside the conversation. In order to learn effective responses for such conversations, a dialogue agent must infer the news event under discussion by referencing some form of external knowledge base. | 1512.05742#92 | 1512.05742#94 | 1512.05742 | [
"1511.06931"
] |
1512.05742#94 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | This would appear to be a particularly difficult task. # 5.1.2 THE UBUNTU DIALOGUE CORPUS The Ubuntu Dialogue Corpus is one of the largest, publicly available datasets containing technical support dialogues. Due to the commercial importance of such systems, the dataset has attracted significant attention.17 Thus, the Ubuntu Dialogue Corpus presents the opportunity for anyone to train large-scale data-driven technical support dialogue systems. Despite this, there are several problems when training data-driven dialogue models on the Ubuntu Dialogue Corpus due to the nature of the data. First, since the corpus comes from a multi-party IRC channel, it needs to be disentangled into separate dialogues. This disentanglement process is noisy, and errors inevitably arise. The most frequent error is when a missing utterance in the dialogue is not picked up by the extraction procedure (e.g. an utterance from the original multi-party chat was not added to the disentangled dialogue). As a result, for a substantial amount of conversations, it is diffi | 1512.05742#93 | 1512.05742#95 | 1512.05742 | [
"1511.06931"
] |
1512.05742#95 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | cult to follow the topic. In particular, this means that some of the Next Utterance Classification (NUC) examples, where models must select the correct next response from a list of candidates, are either difficult or impossible for models to predict. (Footnote 17: Most of the largest technical support datasets are based on commercial technical support channels, which are proprietary and never released to the public for privacy reasons.) Another problem arises from the lack of annotations and labels. Since users try to solve their technical problems, it is perhaps best to build models under a goal-driven dialogue framework, where a dialogue system has to maximize the probability that it will solve the user's problem at the end of the conversation. | 1512.05742#94 | 1512.05742#96 | 1512.05742 | [
"1511.06931"
] |
1512.05742#96 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | However, there are no reward labels available. Thus, it is difficult to model the dataset in a goal-driven dialogue framework. Future work may alleviate this by constructing automatic methods of determining whether a user in a particular conversation solved their problem. A particular challenge of the Ubuntu Dialogue Corpus is the large number of out-of-vocabulary words, including many technical words related to the Ubuntu operating system, such as commands, software packages, websites, etc. Since these words occur rarely in the dataset, it is difficult to learn their meaning directly from the dataset; for example, it is difficult to obtain meaningful distributed, real-valued vector representations for neural network-based dialogue models. This is further exacerbated by the large number of users who use different nomenclature, acronyms, and speaking styles, and the many typos in the dataset. Thus, the linguistic diversity of the corpus is large. | 1512.05742#95 | 1512.05742#97 | 1512.05742 | [
"1511.06931"
] |
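The out-of-vocabulary problem described above can be quantified with a simple coverage measurement. The sketch below is generic; the corpus snippets and the vocabulary cutoff are toy assumptions, not statistics of the Ubuntu Dialogue Corpus:

```python
# Hedged sketch: out-of-vocabulary (OOV) rate of a held-out text under a
# truncated word vocabulary built from training data.
from collections import Counter

def oov_rate(train_tokens, test_tokens, vocab_size):
    """Fraction of test tokens outside the vocab_size most frequent train words."""
    counts = Counter(train_tokens)
    vocab = {word for word, _ in counts.most_common(vocab_size)}
    unknown = sum(1 for token in test_tokens if token not in vocab)
    return unknown / len(test_tokens)

# Toy data for illustration only.
train = "sudo apt get install the driver then reboot the machine".split()
test = "reboot the machine after you install grub2".split()
rate = oov_rate(train, test, vocab_size=5)
```

Tracking this rate for different vocabulary sizes shows how quickly rare technical terms (package names, commands, typos) fall outside a word-level model's vocabulary.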
1512.05742#97 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | A final challenge of the dataset is the necessity for additional knowledge related to Ubuntu in order to accurately generate or predict the next response in a conversation. We hypothesize that this knowledge is crucial for a system trained on the Ubuntu Dialogue Corpus to be effective in practice, as often solutions to technical problems change over time as new versions of the operating system become available. Thus, an effective dialogue system must learn to combine up-to-date technical information with an understanding of natural language dialogue in order to solve the users' problems. We will discuss the use of external knowledge in more detail in Section 5.5. While these challenges make it difficult to build data-driven dialogue systems, they also present an important research opportunity. Current data-driven dialogue systems perform rather poorly in terms of generating utterances that are coherent and on-topic (Serban et al., 2017a). | 1512.05742#96 | 1512.05742#98 | 1512.05742 | [
"1511.06931"
] |
1512.05742#98 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | As such, there is significant room for improvement on these models. # 5.2 Transfer Learning Between Datasets While it is not always feasible to obtain large corpora for every new application, the use of other related datasets can effectively bootstrap the learning process. In several branches of machine learning, and in particular in deep learning, the use of related datasets in pre-training the model is an effective method of scaling up to complex environments (Erhan et al., 2010; Kumar et al., 2015). To build open-domain dialogue systems, it is arguably necessary to move beyond domain-specific datasets. Instead, like humans, dialogue systems may have to be trained on multiple data sources for solving multiple tasks. To leverage statistical efficiency, it may be necessary to first use unsupervised learning (as opposed to supervised learning or offline reinforcement learning, which typically only provide a sparse scalar feedback signal for each phrase or sequence of phrases) and then fine-tune models based on human feedback. Researchers have already proposed various ways of applying transfer learning to build data-driven dialogue systems, ranging from learning separate sub-components of the dialogue system (e.g. intent and dialogue act classification) to learning the entire dialogue system (e.g. in an unsupervised or reinforcement learning framework) using transfer learning (Fabbrizio et al., 2004; Forgues et al., 2014; Serban and Pineau, 2015; Serban et al., 2016; Lowe et al., 2015a; Vandyke et al., 2015; Wen et al., 2016; Gašić et al., 2016; Mo et al., 2016; Genevay and Laroche, 2016; Chen et al., 2016) | 1512.05742#97 | 1512.05742#99 | 1512.05742 | [
"1511.06931"
] |
1512.05742#99 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | # 5.3 Topic-oriented & Goal-driven Datasets Tables 1–5 list the topics of available datasets. Several of the human-human datasets are denoted as having casual or unrestricted topics. In contrast, most human-machine datasets focus on specific, narrow topics. It is useful to keep this distinction between restricted and unrestricted topics in mind, as goal-driven dialogue systems, which typically have a well-defined measure of performance related to task completion, are usually developed in the former setting. In some cases, the line between these two types of datasets blurs. For example, in the case of conversations occurring between players of an online game (Afantenos et al., 2012), the outcome of the game is determined by how participants play in the game environment, not by their conversation. In this case, some conversations may have a direct impact on a player's performance in the game, some conversations may be related to the game but irrelevant to the goal (e.g. commentary on past events), and some conversations may be completely unrelated to the game. | 1512.05742#98 | 1512.05742#100 | 1512.05742 | [
"1511.06931"
] |
1512.05742#100 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | # 5.4 Incorporating longer memories Recently, significant progress has been made towards incorporating a form of external memory into various neural-network architectures for sequence modeling. Models such as Memory Networks (Weston et al., 2015; Sukhbaatar et al., 2015) and Neural Turing Machines (NTM) (Graves et al., 2014) store some part of their input in a memory, which is then reasoned over in order to perform a variety of sequence-to-sequence tasks. These vary from simple problems, such as sequence copying, to more complex problems, such as question answering and machine translation. Although none of these models are explicitly designed to address dialogue problems, the extension by Kumar et al. (2015) to Dynamic Memory Networks specifically differentiates between episodic and semantic memory. In this case, the episodic memory is the same as the memory used in the traditional Memory Networks paper that is extracted from the input, while the semantic memory refers to knowledge sources that are fi | 1512.05742#99 | 1512.05742#101 | 1512.05742 | [
"1511.06931"
] |
1512.05742#101 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | xed for all inputs. The model is shown to work for a variety of NLP tasks, and it is not difficult to envision an application to dialogue utterance generation where the semantic memory is the desired external knowledge source. # 5.5 Incorporating External Knowledge Another interesting research direction is the incorporation of external knowledge sources in order to inform the response to be generated. Using external information is of great importance to dialogue systems, particularly in the goal-driven setting. Even non-goal-driven dialogue systems designed to simply entertain the user could benefit from leveraging external information, such as current news articles or movie reviews, in order to better converse about real-world events. This may be particularly useful in data-sparse domains, where there is not enough dialogue training data to reliably learn a response that is appropriate for each input utterance, or in domains that evolve quickly over time. # 5.5.1 STRUCTURED EXTERNAL KNOWLEDGE In traditional goal-driven dialogue systems (Levin and Pieraccini, 1997), where the goal is to provide information to the user, there is already extensive use of external knowledge sources. | 1512.05742#100 | 1512.05742#102 | 1512.05742 | [
"1511.06931"
] |
1512.05742#102 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | For example, in the Let's Go! dialogue system (Raux et al., 2005), the user requests information about various bus arrival and departure times. Thus, a critical input to the model is the actual bus schedule, which is used in order to generate the system's utterances. Another example is the dialogue system described by Nöth et al. (2004), which helps users find movie information by utilizing movie showtimes from different cinemas. Such examples are abundant both in the literature and in practice. Although these models make use of external knowledge, the knowledge sources in these cases are highly structured and are only used to place hard constraints on the possible states of an utterance to be generated. They are essentially contained in relational databases or structured ontologies, and are only used to provide a deterministic mapping from the dialogue states extracted from an input user utterance to the dialogue system state or the generated response. Complementary to domain-specific databases and ontologies are the general natural language processing databases and tools. These include lexical databases such as WordNet (Miller, 1995), which contains lexical relationships between words for over a hundred thousand words, VerbNet (Schuler, 2005), which contains lexical relations between verbs, and FrameNet (Ruppenhofer et al., 2006), which contains "word senses" for over ten thousand words along with examples of each word sense.
In addition, there exist several natural language processing tools such as part-of-speech taggers, word category classifiers, word embedding models, named entity recognition models, co-reference resolution models, semantic role labeling models, semantic similarity models and sentiment analysis models (Manning and Schütze, 1999; Jurafsky and Martin, 2008; Mikolov et al., 2013; Gurevych and Strube, 2004; Lin and Walker, 2011b) that may be used by the Natural Language Interpreter to extract meaning from human utterances. Since these tools are typically built upon texts and annotations created by humans, using them inside a dialogue system can be interpreted as a form of structured transfer learning, where the relationships or labels learned from the original natural language processing corpus provide additional information to the dialogue system and improve generalization of the system. | 1512.05742#101 | 1512.05742#103 | 1512.05742 | [
"1511.06931"
] |
1512.05742#103 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | # 5.5.2 UNSTRUCTURED EXTERNAL KNOWLEDGE Complementary sources of information can be found in unstructured knowledge sources, such as online encyclopedias (Wikipedia (Denoyer and Gallinari, 2007)), as well as domain-specific sources (Lowe et al., 2015b). It is beyond the scope of this paper to review all possible ways that these unstructured knowledge sources have been or could be used in conjunction with a data-driven dialogue system. However, we note that this is likely to be a fruitful research area. # 5.6 Personalized dialogue agents When conversing, humans often adapt to their interlocutor to facilitate understanding, and thus improve conversational efficiency and satisfaction. Attaining human-level performance with dialogue agents may well require personalization, i.e. models that are aware of and capable of adapting to their interlocutor. Such capabilities could increase the effectiveness and naturalness of generated dialogues (Lucas et al., 2009; Su et al., 2013). We see personalization of dialogue systems as an important task, which so far has not received much attention. There have been initial efforts on user-specific models which could be adapted to work in combination with the dialogue models presented in this survey (Lucas et al., 2009; Lin and Walker, 2011a; Pargellis et al., 2004). There has also been interesting work on character modeling in movies (Walker et al., 2011; Li et al., 2016; Mo et al., 2016). | 1512.05742#102 | 1512.05742#104 | 1512.05742 | [
"1511.06931"
] |
1512.05742#104 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | There is significant potential to learn user models as part of dialogue models. The large datasets presented in this paper, some of which provide multiple dialogues per user, may enable the development of such models. # 5.7 Evaluation metrics One of the most challenging aspects of constructing dialogue systems lies in their evaluation. While the end goal is to deploy the dialogue system in an application setting and receive real human feedback, getting to this stage is time-consuming and expensive. Often it is also necessary to optimize performance on a pseudo-performance metric prior to release. This is particularly true if a dialogue model has many hyper-parameters to be optimized: it is infeasible to run user experiments for every parameter setting in a grid search. Although crowdsourcing platforms, such as Amazon Mechanical Turk, can be used for some user testing (Jurčíček et al., 2011), evaluations using paid subjects can also lead to biased results (Young et al., 2013). Ideally, we would have some automated metrics for calculating a score for each model, and only involve human evaluators once the best model has been chosen with reasonable confi | 1512.05742#103 | 1512.05742#105 | 1512.05742 | [
"1511.06931"
] |
1512.05742#105 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | dence. The evaluation problem also arises for non-goal-driven dialogue systems. Here, researchers have focused mainly on the output of the response generation module. Evaluation of such non-goal-driven dialogue systems can be traced back to the Turing test (Turing, 1950), where human judges communicate with both computer programs and other humans over a chat terminal without knowing each other's true identity. The goal of the judges was to identify the humans and computer programs under the assumption that a program indistinguishable from a real human being must be intelligent. | 1512.05742#104 | 1512.05742#106 | 1512.05742 | [
"1511.06931"
] |
1512.05742#106 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | However, this setup has been criticized extensively, with numerous researchers proposing alternative evaluation procedures (Cohen, 2005). More recently, researchers have turned to analyzing the collected dialogues produced after they are finished (Galley et al., 2015; Pietquin and Hastie, 2013; Shawar and Atwell, 2007a; Schatzmann et al., 2005). Even when human evaluators are available, it is often difficult to choose a set of informative and consistent criteria that can be used to judge an utterance generated by a dialogue system. For example, one might ask the evaluator to rate the utterance on vague notions such as "appropriateness" and "naturalness" | 1512.05742#105 | 1512.05742#107 | 1512.05742 | [
"1511.06931"
] |
1512.05742#107 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | , or to try to differentiate between utterances generated by the system and those generated by actual humans (Vinyals and Le, 2015). Schatzmann et al. (2005) suggest two aspects that need to be evaluated for all response generation systems (as well as user simulation models): 1) if the model can generate human-like output, and 2) if the model can reproduce the variety of user behaviour found in the corpus. But we lack a definitive framework for such evaluations. We complete this discussion by summarizing different approaches to the automatic evaluation problem as they relate to these objectives. # 5.7.1 AUTOMATIC EVALUATION METRICS FOR GOAL-DRIVEN DIALOGUE SYSTEMS User evaluation of goal-driven dialogue systems typically focuses on goal-related performance criteria, such as goal completion rate, dialogue length, and user satisfaction (Walker et al., 1997; Schatzmann et al., 2005). These were originally evaluated by human users interacting with the dialogue system, but more recently researchers have also begun to use third-party annotators for evaluating recorded dialogues (Yang et al., 2010). Due to their simplicity, the vast majority of hand-crafted task-oriented dialogue systems have been solely evaluated in this way. However, when using machine learning algorithms to train on large-scale corpora, automatic optimization criteria are required. The challenge with evaluating goal-driven dialogue systems without human intervention is that the process necessarily requires multiple steps: it is difficult to determine if a task has been solved from a single utterance-response pair from a conversation. Thus, simulated data is often generated by a user simulator (Eckert et al., 1997; Schatzmann et al., 2007; Jung et al., 2009; Georgila | 1512.05742#106 | 1512.05742#108 | 1512.05742 | [
"1511.06931"
] |
1512.05742#108 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | et al., 2006; Pietquin and Hastie, 2013). Given a sufficiently accurate user simulation model, an interaction between the dialogue system and the user can be simulated, from which it is possible to deduce the desired metrics, such as goal completion rate. Significant effort has been made to render the simulated data as realistic as possible, by modeling user intentions. Evaluation of such simulation methods has already been conducted (Schatzmann et al., 2005). However, generating realistic user simulation models remains an open problem. # 5.7.2 AUTOMATIC EVALUATION METRICS FOR NON-GOAL-DRIVEN DIALOGUE SYSTEMS Evaluation of non-goal-driven dialogue systems, whether by automatic means or user studies, remains a difficult challenge. Word Overlap Metrics. One approach is to borrow evaluation metrics from other NLP tasks such as machine translation, which uses BLEU (Papineni et al., 2002) and METEOR (Banerjee and Lavie, 2005) scores. These metrics have been used to compare responses generated by a learned dialogue strategy to the actual next utterance in the conversation, conditioned on a dialogue context (Sordoni et al., 2015b). While BLEU scores have been shown to correlate with human judgements for machine translation (Papineni et al., 2002), their effectiveness for automatically assessing dialogue response generation is unclear. There are several issues to consider: given the context of a conversation, there often exists a large number of possible responses that " | 1512.05742#107 | 1512.05742#109 | 1512.05742 | [
"1511.06931"
] |
1512.05742#109 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | fit" into the dialogue. Thus, the response generated by a dialogue system could be entirely reasonable, yet it may have no words in common with the actual next utterance. In this case, the BLEU score would be very low, but would not accurately reflect the strength of the model. Indeed, even humans who are tasked with predicting the next utterance of a conversation achieve relatively low BLEU scores (Sordoni et al., 2015b). Although the METEOR metric takes into account synonyms and morphological variants of words in the candidate response, it still suffers from the aforementioned problems. In a sense, these measurements only satisfy one direction of Schatzmann's criteria: high BLEU and METEOR scores imply that the model is generating human-like output, but the model may still not reproduce the variety of user behaviour found in the corpus. Furthermore, such metrics will only accurately reflect the performance of the dialogue system if given a large number of candidate responses for each given context. Next Utterance Classifi | 1512.05742#108 | 1512.05742#110 | 1512.05742 | [
"1511.06931"
] |
1512.05742#110 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | cation. Alternatively, one can narrow the number of possible responses to a small, pre-defined list, and ask the model to select the most appropriate response from this list. The list includes the actual next response of the conversation (the desired prediction), and the other entries (false positives) are sampled from elsewhere in the corpus (Lowe et al., 2016, 2015a). This next utterance classification (NUC) task is derived from the recall and precision metrics for information-retrieval-based approaches. There are several attractive properties of this metric: it is easy to interpret, and the difficulty can be adjusted by changing the number of false responses. | 1512.05742#109 | 1512.05742#111 | 1512.05742 | [
"1511.06931"
] |
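The candidate-list construction just described can be sketched as follows. This is a toy illustration; the number of false responses and the uniform sampling scheme are assumptions for the sketch, not the exact protocol of Lowe et al. (2015a):

```python
# Hedged sketch: building one next-utterance-classification (NUC) example.
# False candidates are sampled uniformly from the responses of other dialogues.
import random

def make_nuc_example(dialogues, i, num_false=9, seed=0):
    """Return (context, candidate list, index of the true response).

    dialogues: list of (context, response) pairs.
    """
    rng = random.Random(seed)
    context, true_response = dialogues[i]
    others = [resp for j, (_, resp) in enumerate(dialogues) if j != i]
    candidates = rng.sample(others, min(num_false, len(others))) + [true_response]
    rng.shuffle(candidates)
    return context, candidates, candidates.index(true_response)

# Toy corpus of (context, response) pairs for illustration.
dialogues = [
    ("how do I mount a usb drive?", "use the mount command"),
    ("wifi keeps dropping", "try a different driver"),
    ("screen is flickering", "update your graphics stack"),
    ("no sound after update", "check the mixer levels"),
]
context, candidates, label = make_nuc_example(dialogues, i=0, num_false=2, seed=42)
```

Increasing `num_false` makes the classification task harder, which is the difficulty knob mentioned above; note that if the corpus contains duplicate responses, a sampled false candidate could coincide with the true one and should be filtered out in practice.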
1512.05742#111 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | However, there are drawbacks. In particular, since the other candidate answers are sampled from elsewhere in the corpus, there is a chance that these also represent reasonable responses given the context. This can be alleviated to some extent by reporting Recall@k measures, i.e. whether the correct response is found in the k responses with the highest rankings according to the model. Although current models evaluated using NUC are trained explicitly to maximize the performance on this metric by minimizing the cross-entropy between context-response pairs (Lowe et al., 2015a; Kadlec et al., 2015), the metric could also be used to evaluate a probabilistic generative model trained to output full utterances. | 1512.05742#110 | 1512.05742#112 | 1512.05742 | [
"1511.06931"
] |
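As a sketch, Recall@k can be computed from per-candidate model scores as follows (the scores below are hypothetical, not outputs of any trained system):

```python
# Hedged sketch: Recall@k for next utterance classification.

def recall_at_k(scores, true_index, k):
    """1 if the true response is among the k highest-scoring candidates, else 0."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return int(true_index in ranked[:k])

# Hypothetical model scores for 5 candidate responses; the true response is
# assumed to sit at index 1.
scores = [0.1, 0.7, 0.05, 0.9, 0.3]
r1 = recall_at_k(scores, true_index=1, k=1)
r2 = recall_at_k(scores, true_index=1, k=2)
```

Averaging this indicator over a test set of NUC examples gives the Recall@k figures reported in the papers cited above; Recall@1 with a 10-candidate list is the standard headline number.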
1512.05742#112 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | Word Perplexity. Another metric proposed to evaluate probabilistic language models (Bengio et al., 2003; Mikolov et al., 2010) that has seen significant recent use for evaluating end-to-end dialogue systems is word perplexity (Pietquin and Hastie, 2013; Serban et al., 2016). Perplexity explicitly measures the probability that the model will generate the ground truth next utterance given some context of the conversation. This is particularly appealing for dialogue, as the distribution over words in the next utterance can be highly multi-modal (i.e. many possible responses). A re-weighted perplexity metric has also been proposed where stop-words, punctuation, and end-of-utterance tokens are removed before evaluating to focus on the semantic content of the phrase (Serban et al., 2016). Both word perplexity and the utterance-level recall and precision outlined above satisfy Schatzmann's evaluation criteria, since scoring high on these would require the model to produce human-like output and to reproduce most types of conversations in the corpus. Response Diversity. Recent non-goal-driven dialogue systems based on neural networks have had problems generating diverse responses (Serban et al., 2016). Li et al. (2015) recently introduced two new metrics, distinct-1 and distinct-2, which respectively measure the number of distinct unigrams and bigrams of the generated responses. Although these fail to satisfy either of Schatzmann's criteria, they may still be useful in combination with other metrics, such as BLEU, NUC or word perplexity. # 6. Conclusion | 1512.05742#111 | 1512.05742#113 | 1512.05742 | [
"1511.06931"
] |
1512.05742#113 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | There is strong evidence that over the next few years, dialogue research will quickly move towards large-scale data-driven model approaches. In particular, as is the case for other language-related applications such as speech recognition, machine translation and information retrieval, these approaches will likely come in the form of end-to-end trainable systems. This paper provides an extensive survey of currently available datasets suitable for research, development, and evaluation of such data-driven dialogue systems. In addition to presenting the datasets, we provide a detailed discussion of several of the issues related to the use of datasets in dialogue system research. Several potential directions are highlighted, such as transfer learning and incorporation of external knowledge, which may lead to scalable solutions for end-to-end training of conversational agents. # Acknowledgements The authors gratefully acknowledge financial support by the Samsung Advanced Institute of Technology (SAIT), the Natural Sciences and Engineering Research Council of Canada (NSERC), the Canada Research Chairs, the Canadian Institute for Advanced Research (CIFAR) and Compute Canada. Early versions of the manuscript benefited greatly from the proofreading of Melanie Lyman-Abramovitch, and later versions were extensively revised by Genevieve Fried and Nicolas Angelard-Gontier. The authors also thank Nissan Pow, Michael Noseworthy, Chia-Wei Liu, Gabriel Forgues, Alessandro Sordoni, Yoshua Bengio and Aaron Courville for helpful discussions. | 1512.05742#112 | 1512.05742#114 | 1512.05742 | [
"1511.06931"
] |
1512.05742#114 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | # References B. Aarts and S. A. Wallis. The diachronic corpus of present-day spoken English (DCPSE), 2006. R. Abbott, B. Ecker, P. Anand, and M. Walker. Internet argument corpus 2.0: An SQL schema for dialogic social media and the corpora to go with it. In Language Resources and Evaluation Conference, LREC 2016, 2016. S. Afantenos, N. Asher, F. Benamara, A. Cadilhac, Cédric Dégremont, P. Denis, M. Guhe, S. Keizer, A. Lascarides, O. Lemon, et al. | 1512.05742#113 | 1512.05742#115 | 1512.05742 | [
"1511.06931"
] |
1512.05742#115 | A Survey of Available Corpora for Building Data-Driven Dialogue Systems | Developing a corpus of strategic conversation in The Settlers of Catan. In SeineDial 2012 - The 16th Workshop on the Semantics and Pragmatics of Dialogue, 2012. Y. Al-Onaizan, U. Germann, U. Hermjakob, K. Knight, P. Koehn, D. M., and K. Yamada. Translating with scarce resources. In AAAI, 2000. J. Alexandersson, R. Engel, M. Kipp, S. Koch, U. Küssner, N. Reithinger, and M. Stede. | 1512.05742#114 | 1512.05742#116 | 1512.05742 | [
"1511.06931"
] |
Modeling negotiation dialogs. In Verbmobil: Foundations of Speech-to-Speech Translation, pages 441–451. Springer, 2000.
D. Ameixa and L. Coheur. From subtitles to human interactions: introducing the subtle corpus. Technical report, 2013.
D. Ameixa, L. Coheur, P. Fialho, and P. Quaresma. Luke, I am your father: dealing with out-of-domain requests by using movies subtitles.
In Intelligent Virtual Agents, pages 13–21, 2014.
A. H. Anderson, M. Bader, E. G. Bard, E. Boyle, G. Doherty, S. Garrod, S. Isard, J. Kowtko, J. McAllister, J. Miller, et al. The HCRC map task corpus. Language and Speech, 34(4):351–366, 1991.
J. Andreas, S. Rosenthal, and K. McKeown.
Annotating agreement and disagreement in threaded discussion. In LREC, pages 818–822. Citeseer, 2012.
L. E. Asri, J. He, and K. Suleman. A sequence-to-sequence model for user simulation in spoken dialogue systems. arXiv preprint arXiv:1607.00070, 2016.
A. J. Aubrey, D. Marshall, P. L. Rosin, J. Vandeventer, D. W. Cunningham, and C. Wallraven. Cardiff conversation database (CCDb): A database of natural dyadic conversations.
In Computer Vision and Pattern Recognition Workshops (CVPRW), IEEE Conference on, pages 277–282, 2013.
H. Aust, M. Oerder, F. Seide, and V. Steinbiss. The philips automatic train timetable information system. Speech Communication, 17(3):249–262, 1995.
A. Aw, M. Zhang, J. Xiao, and J. Su. A phrase-based statistical model for SMS text normalization. In Proceedings of the COLING, pages 33–40, 2006.
R. E. Banchs.
Movie-DiC: a movie dialogue corpus for research and development. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers, 2012.
R. E. Banchs and H. Li. IRIS: a chat-oriented dialogue system based on the vector space model. In Proceedings of the ACL 2012 System Demonstrations, 2012.
S. Banerjee and A. Lavie. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, 2005.
M. Barlow.
Corpus of spoken, professional american-english, 2000.
J. Beare and B. Scott. The spoken corpus of the survey of english dialects: language variation and oral history. In Proceedings of ALLC/ACH, 1999.
Y. Bengio, R. Ducharme, P. Vincent, and C. Janvin. A neural probabilistic language model. The Journal of Machine Learning Research, 3:1137–1155, 2003.
Y. Bengio, I. Goodfellow, and A. Courville. Deep learning.
An MIT Press book in preparation. Draft chapters available at http://www.iro.umontreal.ca/bengioy/dlbook, 2014.
C. Bennett and A. I. Rudnicky. The carnegie mellon communicator corpus, 2002.
D. Biber and E. Finegan. An initial typology of english text types. Corpus Linguistics II: New Studies in the Analysis and Exploitation of Computer Corpora, pages 19–
46, 1986.
D. Biber and E. Finegan. Diachronic relations among speech-based and written registers in english. Variation in English: Multi-Dimensional Studies, pages 66–83, 2001.
S. Bird, S. Browning, R. Moore, and M. Russell. Dialogue move recognition using topic spotting techniques. In Spoken Dialogue Systems – Theories and Applications, 1995.
A. W. Black, S. Burger, A. Conkie, H. Hastie, S. Keizer, O. Lemon, N. Merigaud, G. Parent, G. Schubiner, B. Thomson, et al.
Spoken dialog challenge 2010: Comparison of live and control test results. In Special Interest Group on Discourse and Dialogue (SIGDIAL), 2011.
D. Bohus and A. I. Rudnicky. Sorry, I didn't catch that! In Recent Trends in Discourse and Dialogue, pages 123–154. Springer, 2008.
S. E. Brennan, K. S. Schuhmann, and K. M. Batres. Entrainment on the move and in the lab: The walking around corpus. In Proceedings of the 35th Annual Conference of the Cognitive Science Society, 2013.
G. Brown, A. Anderson, R. Shillcock, and G. Yule. Teaching talk. Cambridge: CUP, 1984.
S. Burger, K. Weilhammer, F. Schiel, and H. G. Tillmann. Verbmobil data collection and annotation. In Verbmobil: Foundations of Speech-to-Speech Translation, pages 537–549. Springer, 2000.
J. E. Cahn and S. E. Brennan.
A psychological model of grounding and repair in dialog. In AAAI Symposium on Psychological Models of Communication in Collaborative Systems, 1999.
A. Canavan and G. Zipperlen. Callfriend american english-non-southern dialect. Linguistic Data Consortium, 10:1, 1996.
A. Canavan, D. Graff, and G. Zipperlen. Callhome american english speech. Linguistic Data Consortium, 1997.
S. K. Card, T. P. Moran, and A. Newell.
The Psychology of Human-Computer Interaction. L. Erlbaum Associates Inc., Hillsdale, NJ, USA, 1983. ISBN 0898592437.
R. Carter. Orders of reality: Cancode, communication, and culture. ELT Journal, 52(1):43–56, 1998.
R. Carter and M. McCarthy. Cambridge grammar of English: a comprehensive guide; spoken and written English grammar and usage. Ernst Klett Sprachen, 2006.
T. L. Chartrand and J. A. Bargh. The chameleon effect: the perception–behavior link and social interaction. Journal of Personality and Social Psychology, 76(6):893, 1999.
T. Chen and M. Kan. Creating a live, public short message service corpus: the NUS SMS corpus. Language Resources and Evaluation, 47(2):299–335, 2013.
Y.-N. Chen, D. Hakkani-Tür, and X. He.
Zero-shot learning of intent embeddings for expansion by convolutional deep structured semantic models. In Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on, pages 6045–6049. IEEE, 2016.
A. Clark. Pre-processing very noisy text. In Proc. of Workshop on Shallow Processing of Large Corpora, pages 12–22, 2003.
H. H. Clark and S. E. Brennan. Grounding in communication. Perspectives on Socially Shared Cognition, 13:127–149, 1991.
P. R. Cohen. If not turing's test, then what?
AI Magazine, 26(4):61, 2005.
K. M. Colby. Modeling a paranoid mind. Behavioral and Brain Sciences, 4:515–534, 1981.
R. M. Cooper. The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6(1):84–107, 1974.
N. Cristianini and J. Shawe-Taylor.
An Introduction to Support Vector Machines: And Other Kernel-based Learning Methods. Cambridge University Press, 2000.
H. Cuayáhuitl, S. Renals, O. Lemon, and H. Shimodaira. Human-computer dialogue simulation using hidden markov models. In Automatic Speech Recognition and Understanding, 2005 IEEE Workshop on, pages 290–295, 2005.
H. Cuayáhuitl, S. Keizer, and O.
Lemon. Strategic dialogue management via deep reinforcement learning. arXiv preprint arXiv:1511.08099, 2015.
C. Danescu-Niculescu-Mizil and L. Lee. Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, ACL, 2011.
L. Daubigney, M. Geist, S. Chandramohan, and O.
Pietquin. A comprehensive reinforcement learning framework for dialogue management optimization. IEEE Journal of Selected Topics in Signal Processing, 6(8):891–902, 2012.
M. Davies. Comparing the corpus of american soap operas, COCA, and the BNC, 2012a.
M. Davies. Corpus of american soap operas, 2012b.
I. de Kok, D. Heylen, and L. Morency.
Speaker-adaptive multimodal prediction model for listener responses. In Proceedings of the 15th ACM on International Conference on Multimodal Interaction, 2013.
L. Deng and X. Li. Machine learning paradigms for speech recognition: An overview. Audio, Speech, and Language Processing, IEEE Transactions on, 21(5):1060–1089, 2013.
L. Denoyer and P. Gallinari. The wikipedia XML corpus. In Comparative Evaluation of XML Information Retrieval Systems, pages 12–
19. Springer, 2007.
B. Dhingra, Z. Zhou, D. Fitzpatrick, M. Muehl, and W. Cohen. Tweet2vec: Character-based distributed representations for social media. arXiv preprint arXiv:1605.03481, 2016.
A. Djalali, S. Lauer, and C. Potts. Corpus evidence for preference-driven interpretation. In Logic, Language and Meaning, pages 150–159. Springer, 2012.
J. Dodge, A. Gane, X. Zhang, A. Bordes, S. Chopra, A. Miller, A. Szlam, and J. Weston.
Evaluating prerequisite qualities for learning end-to-end dialog systems. arXiv preprint arXiv:1511.06931, 2015.
S. Dose. Flipping the script: A corpus of american television series (cats) for corpus-based language learning and teaching. Corpus Linguistics and Variation in English: Focus on Non-native Englishes, 2013.
E. Douglas-Cowie, R. Cowie, I. Sneddon, C. Cox, O. Lowry, M. Mcrorie, J. Martin, L. Devillers, S. Abrilian, A. Batliner, et al. The humaine database: addressing the collection and annotation of naturalistic and induced emotional data. In Affective Computing and Intelligent Interaction, pages 488–
500. Springer, 2007.
W. Eckert, E. Levin, and R. Pieraccini. User modeling for spoken dialogue system evaluation. In Automatic Speech Recognition and Understanding, 1997. Proceedings., 1997 IEEE Workshop on, pages 80–87, 1997.
L. El Asri, H. Schulz, S. Sharma, J. Zumer, J. Harris, E. Fine, R. Mehrotra, and K. Suleman. Frames:
A corpus for adding memory to goal-oriented dialogue systems. Preprint on webpage at http://www.maluuba.com/publications/, 2017.
M. Elsner and E. Charniak. You talking to me? A corpus and algorithm for conversation disentanglement. In Association for Computational Linguistics (ACL), 2008.
D. Erhan, Y. Bengio, A. Courville, P.-A. Manzagol, and P. Vincent.
Why does unsupervised pre-training help deep learning? Journal of Machine Learning Research, 11, 2010.
G. Di Fabbrizio, G. Tur, and D. Hakkani-Tür. Bootstrapping spoken dialog systems with data reuse. In Special Interest Group on Discourse and Dialogue (SIGDIAL), 2004.
M. Fatemi, L. E. Asri, H. Schulz, J. He, and K. Suleman.
Policy networks with two-stage training for dialogue systems. In Special Interest Group on Discourse and Dialogue (SIGDIAL), 2016.
D. Fisher, M. Smith, and H. T. Welser. You are who you talk to: Detecting roles in usenet newsgroups. In Proceedings of the 39th Annual Hawaii International Conference on System Sciences (HICSS'06), volume 3, pages 59b–59b, 2006.
P. Forchini.
Spontaneity reloaded: American face-to-face and movie conversation compared. In Corpus Linguistics, 2009.
P. Forchini. Movie language revisited. Evidence from multi-dimensional analysis and corpora. Peter Lang, 2012.
G. Forgues, J. Pineau, J. Larchevêque, and R. Tremblay. Bootstrapping dialog systems with word embeddings. In Workshop on Modern Machine Learning and Natural Language Processing, Advances in Neural Information Processing Systems (NIPS), 2014.
E. N. Forsyth and C. H. Martell. Lexical and discourse analysis of online chat dialog. In International Conference on Semantic Computing (ICSC), pages 19–26, 2007.
M. Frampton and O. Lemon. Recent research advances in reinforcement learning in spoken dialogue systems. The Knowledge Engineering Review, 24(04):375–408, 2009.
M. Galley, C. Brockett, A. Sordoni, Y. Ji, M. Auli, C. Quirk, M. Mitchell, J. Gao, and B. Dolan. deltaBLEU: A discriminative metric for generation tasks with intrinsically diverse targets. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL, pages 445–450, 2015.
M. Gašić, F. Jurčíček, S. Keizer, F. Mairesse, B. Thomson, K. Yu, and S. Young.
Gaussian processes for fast policy optimisation of POMDP-based dialogue managers. In Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 201–204. Association for Computational Linguistics, 2010.
M. Gašić, F. Jurčíček, B. Thomson, K. Yu, and S. Young. On-line policy optimisation of spoken dialogue systems via live interaction with human subjects. In IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pages 312–
317. IEEE, 2011.
M. Gašić, M. Henderson, B. Thomson, P. Tsiakoulis, and S. Young. Policy optimisation of POMDP-based dialogue systems without state space compression. In Spoken Language Technology Workshop (SLT), 2012 IEEE, pages 31–36. IEEE, 2012.
M. Gašić, C. Breslin, M. Henderson, D. Kim, M. Szummer, B. Thomson, P. Tsiakoulis, and S. Young.
On-line policy optimisation of Bayesian spoken dialogue systems via human interaction. In IEEE International Conference on Acoustics, Speech and Signal Processing, pages 8367–8371, 2013.
M. Gašić, N. Mrkšić, L. M. Rojas-Barahona, P.-H. Su, S. Ultes, D. Vandyke, T.-H. Wen, and S. Young.
Dialogue manager domain adaptation using gaussian process reinforcement learning. Computer Speech & Language, 2016.
A. Genevay and R. Laroche. Transfer learning for user adaptation in spoken dialogue systems. In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, pages 975–983. International Foundation for Autonomous Agents and Multiagent Systems, 2016.
K. Georgila, J. Henderson, and O. Lemon. User simulation for spoken dialogue systems: learning and evaluation. In Proceedings of INTERSPEECH, 2006.
K. Georgila, M. Wolters, J. D. Moore, and R. H. Logie. The MATCH corpus: A corpus of older and younger users' interactions with spoken dialogue systems. Language Resources and Evaluation, 44(3):221–261, 2010.
J. Gibson and A. D. Pick. Perception of another person's looking behavior. The American Journal of Psychology, 76(3):386–394, 1963.
J. J. Godfrey, E. C. Holliman, and J. McDaniel. SWITCHBOARD: Telephone speech corpus for research and development. In International Conference on Acoustics, Speech, and Signal Processing (ICASSP-92), 1992.
I. Goodfellow, A. Courville, and Y. Bengio. Deep learning. Book in preparation for MIT Press, 2015. URL http://goodfeli.github.io/dlbook/.
J. T. Goodman. A bit of progress in language modeling extended version. Machine Learning and Applied Statistics Group Microsoft Research. Technical Report, MSR-TR-2001-72, 2001.
C. Goodwin. Conversational Organization: Interaction Between Speakers and Hearers.
New York: Academic Press, 1981.
A. L. Gorin, G. Riccardi, and J. H. Wright. How may I help you? Speech Communication, 23(1):113–127, 1997.
A. Graves. Sequence transduction with recurrent neural networks. In Proceedings of the 29th International Conference on Machine Learning (ICML), Representation Learning Workshop, 2012.
A. Graves, G.
Wayne, and I. Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014.
S. Greenbaum. Comparing English worldwide: The international corpus of English. Clarendon Press, 1996.
S. Greenbaum and G. Nelson. The international corpus of english (ICE) project. World Englishes, 15(1):3–15, 1996.
Ç. Gülçehre, O. Firat, K. Xu, K. Cho, L. Barrault, H. Lin, F. Bougares, H. Schwenk, and Y. Bengio.
On using monolingual corpora in neural machine translation. CoRR, abs/1503.03535, 2015.
I. Gurevych and M. Strube. Semantic similarity applied to spoken dialogue summarization. In Proceedings of the 20th International Conference on Computational Linguistics, 2004.
V. Haslerud and A. Stenström. The bergen corpus of london teenager language (COLT). Spoken English on Computer. Transcription, Mark-up and Application. London: Longman, pages 235–242, 1995.
P. A. Heeman and J. F. Allen. The TRAINS 93 Dialogues. Technical report, DTIC Document, 1995.
C. T. Hemphill, J. J. Godfrey, and G. R. Doddington. The ATIS spoken language systems pilot corpus. In Proceedings of the DARPA Speech and Natural Language Workshop, pages 96–101, 1990.
M. Henderson, B. Thomson, and S. Young.
Deep neural network approach for the dialog state tracking challenge. In Special Interest Group on Discourse and Dialogue (SIGDIAL), 2013.
M. Henderson, B. Thomson, and J. Williams. Dialog state tracking challenge 2 & 3, 2014a.
M. Henderson, B. Thomson, and J. Williams. The second dialog state tracking challenge. In Special Interest Group on Discourse and Dialogue (SIGDIAL), 2014b.
M. Henderson, B. Thomson, and S. Young. Word-based dialog state tracking with recurrent neural networks. In 15th Special Interest Group on Discourse and Dialogue (SIGDIAL), page 292, 2014c.
G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, et al.
Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6):82–97, 2012.
T. Hiraoka, G. Neubig, K. Yoshino, T. Toda, and S. Nakamura. Active learning for example-based dialog systems. In Proc. Intl. Workshop on Spoken Dialog Systems, Saariselka, Finland, 2016.
H. Hung and G.
Chittaranjan. The IDIAP wolf corpus: exploring group behaviour in a competitive role-playing game. In Proceedings of the International Conference on Multimedia, pages 879–882, 2010.
J. L. Hutchens and M. D. Alder. Introducing MegaHAL. In Proceedings of the Joint Conferences on New Methods in Language Processing and Computational Natural Language Learning, 1998.
Arne J. and Nils D. Talking to a computer is not like talking to your best friend. In Proceedings of The First Scandinavian Conference on Artificial Intelligence, 1988.
S. Jung, C. Lee, K. Kim, M. Jeong, and G. G. Lee. Data-driven user simulation for automated evaluation of spoken dialog systems. Computer Speech & Language, 23(4):479–509, 2009.
D. Jurafsky and J. H. Martin. Speech and language processing, 2nd Edition. Prentice Hall, 2008.
F. Jurčíček, S. Keizer, M. Gašić, F. Mairesse, B. Thomson, K. Yu, and S. Young.
Real user evaluation of spoken dialogue systems using amazon mechanical turk. In Proceedings of INTERSPEECH, volume 11, 2011.
F. Jurčíček, B. Thomson, and S. Young. Reinforcement learning for parameter estimation in statistical spoken dialogue systems. Computer Speech & Language, 26(3):168–192, 2012.
R. Kadlec, M. Schmid, and J. Kleindienst.
Improved deep learning baselines for ubuntu corpus dialogs. Neural Information Processing Systems Workshop on Machine Learning for Spoken Language Understanding, 2015.
M. Kaufmann and J. Kalita. Syntactic normalization of twitter messages. In International Conference on Natural Language Processing, Kharagpur, India, 2010.
S. Kim, L. F. D'Haro, R. E. Banchs, J. Williams, and M. Henderson. Dialog state tracking challenge 4, 2015.
S. Kim, L. F. D'Haro, R. E. Banchs, J. D. Williams, M. Henderson, and K.
Yoshino. The fifth dialog state tracking challenge. In IEEE Spoken Language Technology Workshop (SLT), 2016.
D. Koller and N. Friedman. Probabilistic graphical models: principles and techniques. MIT Press, 2009.
J. A. Konstan, B. N. Miller, D. Maltz, J. L. Herlocker, L. R. Gordon, and J. Riedl. Grouplens: applying collaborative filtering to usenet news. Communications of the ACM, 40(3):77–87, 1997.
A. Kumar, O. Irsoy, J. Su, J. Bradbury, R. English, B. Pierce, P. Ondruska, I. Gulrajani, and R. Socher.
Ask me anything: Dynamic memory networks for natural language processing. Neural Information Processing Systems (NIPS), 2015.
M. Kytö and T. Walker. Guide to A corpus of English dialogues 1560–1760. Acta Universitatis Upsaliensis, 2006.
I. Langkilde and K. Knight. Generation that exploits corpus-based statistical knowledge. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics – Volume 1, pages 704–
710. Association for Computational Linguistics, 1998.
G. Leech. 100 million words of english: the british national corpus (BNC). Language Research, 28(1):1–13, 1992.
E. Levin and R. Pieraccini. A stochastic model of computer-human interaction for learning dialogue strategies. In Eurospeech, volume 97, pages 1883–1886, 1997.
E. Levin, R. Pieraccini, and W. Eckert.
Learning dialogue strategies within the markov decision process framework. In Automatic Speech Recognition and Understanding, 1997. Proceedings., 1997 IEEE Workshop on, pages 72–79. IEEE, 1997.
J. Li, M. Galley, C. Brockett, J. Gao, and B. Dolan. A diversity-promoting objective function for neural conversation models. arXiv preprint arXiv:1510.03055, 2015.
J. Li, M. Galley, C. Brockett, J. Gao, and B. Dolan.
A persona-based neural conversation model. In ACL, pages 994–1003, 2016.
G. Lin and M. Walker. All the world's a stage: Learning character models from film. In AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 2011a.
G. I. Lin and M. A. Walker. All the world's a stage: Learning character models from film. In AIIDE, 2011b.
C. Lord and M. Haith.
The perception of eye contact. Attention, Perception, & Psychophysics, 16(3):413–416, 1974.
R. Lowe, N. Pow, I. Serban, and J. Pineau. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Special Interest Group on Discourse and Dialogue (SIGDIAL), 2015a.
R. Lowe, N. Pow, I. V. Serban, L. Charlin, and J. Pineau.
Incorporating unstructured textual knowledge sources into neural dialogue systems. Neural Information Processing Systems Workshop on Machine Learning for Spoken Language Understanding, 2015b.
R. Lowe, I. V. Serban, M. Noseworthy, L. Charlin, and J. Pineau. On the evaluation of dialogue systems with next utterance classification. In Special Interest Group on Discourse and Dialogue (SIGDIAL), 2016.
J. M. Lucas, F. Fernández, J. Salazar, J. Ferreiros, and R. San Segundo.