Dataset columns (per chunk):
id: stringlengths 12-15
title: stringlengths 8-162
content: stringlengths 1-17.6k
prechunk_id: stringlengths 0-15
postchunk_id: stringlengths 0-15
arxiv_id: stringlengths 10-10
references: listlengths 1-1
1612.07837#15
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
Table 3: Effect of subsequence length on NLL (bits per audio sample) computed on the Blizzard validation set.

Subsequence Length    32      64      128     256     512
NLL Validation        1.575   1.468   1.412   1.391   1.364

Table 4: Test (validation) set NLL (bits per audio sample) for Blizzard. Variants of SampleRNN are provided to compare the contribution of each component to performance.

Model                 NLL Test (Validation)
SampleRNN (2-tier)    1.392 (1.369)
Without Embedding     1.566 (1.539)
Multi-Softmax         1.685 (1.656)
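The NLL values in Tables 3 and 4 are reported in bits per audio sample; a cross-entropy computed in nats converts to bits by dividing by log 2. A small sketch of that conversion (the example value is illustrative, not taken from the paper):

```python
import math

def nats_to_bits_per_sample(nll_nats_per_sample: float) -> float:
    """Convert a per-sample negative log-likelihood from nats to bits."""
    return nll_nats_per_sample / math.log(2)

# Example: a mean cross-entropy of 0.95 nats/sample is about 1.37 bits/sample,
# in the same range as the Blizzard validation numbers in Table 3.
print(nats_to_bits_per_sample(0.95))
```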
1612.07837#14
1612.07837#16
1612.07837
[ "1602.07868" ]
1612.07837#16
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
while having a reasonable number of updates per unit time. Although our model is very similar to WaveNet, the design choices (e.g., the number of convolution filters in each dilated convolution layer; the length of the target sequence trained on simultaneously: one can train with a single target with all samples in the receptive field as input, or with a target sequence of length T and an input of size receptive field + T - 1; the batch size; etc.) might make our implementation different from what the authors did in the original WaveNet model. Hence, we note that although we did our best to reproduce their results exactly, the hyper-parameter choices in our implementation very likely differ from those of the original authors. For our WaveNet implementation, we used 4 dilated convolution blocks, each having 10 dilated convolution layers with dilations 1, 2, 4, 8, up to 512. Hence, our network has a receptive field of 4092 acoustic samples; i.e., the parameters of the multinomial distribution of the sample at time step i are p(x_i) = f_θ(x_{i-1}, x_{i-2}, ..., x_{i-4092}), where θ denotes the model parameters.
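As a sanity check on the quoted receptive field, the sketch below computes the receptive field of a stack of dilated causal convolutions with filter size 2. With 4 blocks of dilations 1, 2, ..., 512 it gives 4093 samples (the current sample plus 4 × 1023 past samples), consistent with the roughly 4092 past samples stated above; the exact count depends on whether the current sample is included.

```python
def receptive_field(filter_size: int, dilations: list) -> int:
    """Receptive field (in samples) of a stack of dilated causal convolutions."""
    # Each layer with filter size k and dilation d extends the field by (k - 1) * d.
    return 1 + sum((filter_size - 1) * d for d in dilations)

blocks = 4
dilations = [2 ** i for i in range(10)] * blocks   # 1, 2, 4, ..., 512, repeated 4 times
print(receptive_field(filter_size=2, dilations=dilations))  # 4093
```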
1612.07837#15
1612.07837#17
1612.07837
[ "1602.07868" ]
1612.07837#17
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
We train on a target sequence length of 1600 and use a batch size of 8. Each dilated convolution filter has size 2, and the number of output channels is 64 for each dilated convolutional layer (128 filters in total due to the gated non-linearity). We trained this model using the Adam optimizer with a fixed global learning rate of 0.001 for the Blizzard dataset and 0.0001 for the Onomatopoeia and Music datasets. We trained these models for about one week on a GeForce GTX TITAN X. We dropped the learning rate in the Blizzard experiment to 0.0001 after around 3 days of training.

3.2 HUMAN EVALUATION

Apart from reporting NLL, we conducted AB preference tests on random samples from four models trained on the Blizzard dataset. For unconditional generation of speech, which at best sounds like mumbling, this type of test is the most suitable. Competing models were the RNN, SampleRNN (2-tier), SampleRNN (3-tier), and our implementation of WaveNet. The remaining models were excluded because the quality of their samples was clearly lower, and also to keep the number of pairwise comparison tests manageable. We will also release the samples that were used in this test. All the samples were set to the same volume. Every user is then shown a set of twenty pairs of samples, one random pair at a time. Each pair had samples from two different models. The human evaluator is asked to listen to the samples and can either choose between the two models or choose not to prefer either of them.
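A minimal PyTorch sketch of one gated dilated causal convolution layer of the kind described (filter size 2, 64 output channels, with tanh/sigmoid gating accounting for the 2 × 64 = 128 filters). This is our reading of the configuration, not the authors' code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedDilatedConv(nn.Module):
    def __init__(self, channels: int = 64, dilation: int = 1, kernel_size: int = 2):
        super().__init__()
        self.dilation = dilation
        self.kernel_size = kernel_size
        # 2 * channels outputs: half for the tanh filter, half for the sigmoid gate.
        self.conv = nn.Conv1d(channels, 2 * channels, kernel_size, dilation=dilation)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Left-pad so the convolution stays causal (no access to future samples).
        pad = (self.kernel_size - 1) * self.dilation
        out = self.conv(F.pad(x, (pad, 0)))
        filt, gate = out.chunk(2, dim=1)
        return torch.tanh(filt) * torch.sigmoid(gate)

x = torch.randn(8, 64, 1600)          # (batch, channels, target sequence length)
y = GatedDilatedConv(dilation=4)(x)   # same length out: (8, 64, 1600)
```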
1612.07837#16
1612.07837#18
1612.07837
[ "1602.07868" ]
1612.07837#18
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
Hence, we have a quantification of preference between every pair of models. We used the online tool made publicly available by Jillings et al. (2015). The results in Fig. 3 clearly show that SampleRNN (3-tier) is the winner by a huge margin in terms of preference by human raters, followed by SampleRNN (2-tier) and then the two other models, which matches the performance comparison in Table 1. The same evaluation was conducted for the Music dataset, except for an additional filtering process of samples. Specific to this dataset only, we observed that a batch of generated samples from the competing models (this time restricted to RNN, SampleRNN (2-tier), and SampleRNN (3-tier)) were either music-like or random noise. For all these models we only considered random samples that were not random noise. Fig. 4 presents the results of the human evaluation on the Music dataset.
1612.07837#17
1612.07837#19
1612.07837
[ "1602.07868" ]
1612.07837#19
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
[Figure 3: six bar charts of pairwise preference percentages (with a No-Preference option) among SampleRNN (3-tier), SampleRNN (2-tier), RNN, and WaveNet; the y-axis is the percentage of listener votes.]

Figure 3:
1612.07837#18
1612.07837#20
1612.07837
[ "1602.07868" ]
1612.07837#20
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
Pairwise comparison of the 4 best models based on the votes from listeners, conducted on samples generated from models trained on the Blizzard dataset.

[Figure 4: three bar charts of pairwise preference percentages (with a No-Preference option) among SampleRNN (3-tier), SampleRNN (2-tier), and RNN; the y-axis is the percentage of listener votes.]

Figure 4: Pairwise comparison of the 3 best models based on the votes from listeners, conducted on samples generated from models trained on the Music dataset.

3.3 QUANTIFYING INFORMATION RETENTION

For the last experiment we are interested in measuring the memory span of the model. We trained our model, SampleRNN (3-tier), with the best hyper-parameters on a dataset of 2 speakers reading audio books, one male and one female, with mean fundamental frequencies of 125.3 Hz and 201.8 Hz, respectively. Each speaker has roughly 10 hours of audio in the dataset, which has been preprocessed similarly to Blizzard. We observed that the model learned to stay consistent, generating samples from the same speaker without having any knowledge of the speaker ID or any other conditioning information. This effect is more apparent here than on the unbalanced Onomatopoeia dataset, which sometimes mixes two different categories of sounds. Another experiment was conducted to test the effect of memory and study the effective memory horizon. We inject 1 second of silence in the middle of the sampling procedure to see whether the model remembers to keep generating from the same speaker. Initially, we let the model generate 2 seconds of audio as it normally does. From 2 to 3 seconds, instead of feeding back the generated sample at each timestep, a silent token (zero amplitude) is fed. From 3 to 5 seconds we again sample normally, feeding back the generated token.
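A sketch of the silence-injection procedure just described, assuming a hypothetical `sample_next(model, history)` helper that draws the next quantized sample and a `SILENT` token standing in for zero amplitude; the actual sampling interface is not specified in the paper.

```python
SAMPLE_RATE = 16000
SILENT = 128  # assumed quantization bin for zero amplitude (e.g. the middle of 256 levels)

def generate_with_silence_gap(model, sample_next, seconds=(2, 1, 2)):
    """Sample normally, feed silence instead of the model's own output, then resume."""
    normal_1, gap, normal_2 = (int(s * SAMPLE_RATE) for s in seconds)
    history, output = [], []
    for t in range(normal_1 + gap + normal_2):
        x = sample_next(model, history)        # model's prediction for this step
        if normal_1 <= t < normal_1 + gap:
            history.append(SILENT)             # feed back silence, not the generated sample
        else:
            history.append(x)                  # feed back the generated sample
        output.append(history[-1])
    return output
```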
1612.07837#19
1612.07837#21
1612.07837
[ "1602.07868" ]
1612.07837#21
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
We performed classification based on the mean fundamental frequency of the speakers for the first and last 2 seconds. In 83% of samples, SampleRNN generated from the same speaker in the two separate segments.

This is in contrast to a model with a fixed past window like WaveNet, where injecting 16000 silent tokens (3.3 times the receptive field size) is equivalent to generating from scratch, which gives a 50% chance (assuming each 2-second segment is coherent and not a mixed sound of two speakers).

# 4 RELATED WORK

Our work is related to earlier work on auto-regressive multi-layer neural networks, starting with Bengio & Bengio (1999), then NADE (Larochelle & Murray, 2011) and more recently PixelRNN (van den Oord et al., 2016). Similar to how they tractably model the joint distribution over units of the data (e.g., words in sentences, pixels in images) through an auto-regressive decomposition, we transform the joint distribution of acoustic samples using Eq. 1. The idea of having parts of the model running at different clock rates is related to multi-scale RNNs (Schmidhuber, 1992; El Hihi & Bengio, 1995; Koutnik et al., 2014; Sordoni et al., 2015; Serban et al., 2016). Chung et al. (2015) also attempt to model raw audio waveforms, in contrast to traditional approaches that use spectral features, as in Tokuda et al. (2013), Bertrand et al. (2008), and Lee et al. (2009). Our work is closely related to WaveNet (Oord et al., 2016), which is why we have made the above comparisons; it is therefore interesting to compare the effect of adding higher-level RNN stages working at a lower resolution. Similar to this work, our models generate one acoustic sample at a time conditioned on all previously generated samples. We also share the preprocessing step of quantizing the acoustics into bins. Unlike that model, we have different modules in our models running at different clock rates.
1612.07837#20
1612.07837#22
1612.07837
[ "1602.07868" ]
1612.07837#22
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
In contrast to WaveNet, we mitigate the problem of long-term dependencies with a hierarchical structure and stateful RNNs; i.e., we always propagate hidden states to the next training sequence, although the gradient of the loss does not take into account the samples in the previous training sequence.

# 5 DISCUSSION AND CONCLUSION

We propose a novel model that can address unconditional audio generation in the raw acoustic domain, which until recently has typically been done with hand-crafted features. We show that a hierarchy of time scales and frequent updates help to overcome the problem of modeling extremely high-resolution temporal data. That allows us, for this particular application, to learn the data manifold directly from audio samples. We show that this model can generalize well and generate samples on three datasets that are different in nature. We also show that the samples generated by this model are preferred by human raters. Success in this application, with a general-purpose solution as proposed here, opens up room for further improvement when specific domain knowledge is applied. This method, although proposed with the audio generation application in mind, can easily be adapted to other tasks that require learning the representation of sequential data with high temporal resolution and long-range complex structure.

# ACKNOWLEDGMENTS

The authors would like to thank João Felipe Santos and Kyle Kastner for insightful comments and discussion. We would like to thank the Theano Development Team (2016)4 and MILA staff. We acknowledge the support of the following agencies for research funding and computing support: NSERC, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR. Jose Sotelo also thanks the Consejo Nacional de Ciencia y Tecnología (CONACyT) as well as the Secretaría de Educación Pública (SEP) for their support. This work was a collaboration with Ubisoft.
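A minimal sketch of the stateful-RNN training described above (truncated backpropagation through time: the hidden state is carried to the next training sequence but detached from the graph so gradients do not flow across the boundary). The model, data, and loss here are placeholders, not the paper's architecture:

```python
import torch
import torch.nn as nn

rnn = nn.GRU(input_size=256, hidden_size=1024, batch_first=True)
readout = nn.Linear(1024, 256)
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-3)

hidden = None
for chunk in torch.randn(20, 8, 128, 256).unbind(0):   # stand-in for consecutive subsequences
    out, hidden = rnn(chunk, hidden)
    loss = readout(out).pow(2).mean()                   # placeholder loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    hidden = hidden.detach()   # keep the state, but stop gradients at the subsequence boundary
```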
1612.07837#21
1612.07837#23
1612.07837
[ "1602.07868" ]
1612.07837#23
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
4 http://deeplearning.net/software/theano/

# REFERENCES

Yoshua Bengio and Samy Bengio. Modeling high-dimensional discrete data with multi-layer neural networks. In NIPS, volume 99, pp. 400-406, 1999.

James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(Feb):281-305, 2012.
1612.07837#22
1612.07837#24
1612.07837
[ "1602.07868" ]
1612.07837#24
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
Alexander Bertrand, Kris Demuynck, Veronique Stouten, et al. Unsupervised learning of auditory filter banks using non-negative matrix factorisation. In 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 4713-4716. IEEE, 2008.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.

Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In Advances in Neural Information Processing Systems, pp. 2980-2988, 2015.

Alexey Dosovitskiy, Jost Springenberg, Maxim Tatarchenko, and Thomas Brox.
1612.07837#23
1612.07837#25
1612.07837
[ "1602.07868" ]
1612.07837#25
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
Learning to generate chairs, tables and cars with convolutional networks. 2016.

Salah El Hihi and Yoshua Bengio. Hierarchical recurrent neural networks for long-term dependencies. In NIPS, volume 400, pp. 409. Citeseer, 1995.

Felix Gers. Long short-term memory in recurrent neural networks. PhD thesis, Universität Hannover, 2001.

Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.
1612.07837#24
1612.07837#26
1612.07837
[ "1602.07868" ]
1612.07837#26
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026-1034, 2015.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Nicholas Jillings, David Moffat, Brecht De Man, and Joshua D. Reiss.
1612.07837#25
1612.07837#27
1612.07837
[ "1602.07868" ]
1612.07837#27
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
Web Audio Evaluation Tool: A browser-based listening test environment. In 12th Sound and Music Computing Conference, July 2015.

Andrej Karpathy. The unreasonable effectiveness of recurrent neural networks. Andrej Karpathy blog, 2015.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Jan Koutnik, Klaus Greff, Faustino Gomez, and Juergen Schmidhuber. A clockwork RNN. arXiv preprint arXiv:1402.3511, 2014.

Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In AISTATS, volume 1, pp. 2, 2011.

Honglak Lee, Peter Pham, Yan Largman, and Andrew Y Ng. Unsupervised feature learning for audio classification using convolutional deep belief networks. In Advances in Neural Information Processing Systems, pp. 1096-1104, 2009.

Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. WaveNet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.
1612.07837#26
1612.07837#28
1612.07837
[ "1602.07868" ]
1612.07837#28
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
Kishore Prahallad, Anandaswarup Vadapalli, Naresh Elluru, G Mantena, B Pulugundla, P Bhaskararao, HA Murthy, S King, V Karaiskos, and AW Black. The Blizzard Challenge 2013 - Indian language task. In Blizzard Challenge Workshop 2013, 2013.

Tim Salimans and Diederik P Kingma.
1612.07837#27
1612.07837#29
1612.07837
[ "1602.07868" ]
1612.07837#29
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
Weight normalization: A simple reparameterization to accelerate training of deep neural networks. arXiv preprint arXiv:1602.07868, 2016.

Jürgen Schmidhuber. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234-242, 1992.

Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI-16), 2016.
1612.07837#28
1612.07837#30
1612.07837
[ "1602.07868" ]
1612.07837#30
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
Hava T Siegelmann. Computation beyond the Turing limit. In Neural Networks and Analog Computation, pp. 153-164. Springer, 1999.

Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and Jian-Yun Nie. A hierarchical recurrent encoder-decoder for generative context-aware query suggestion. In Proceedings of the 24th ACM International Conference on Information and Knowledge Management, pp. 553-
1612.07837#29
1612.07837#31
1612.07837
[ "1602.07868" ]
1612.07837#31
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
562. ACM, 2015.

Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. URL http://arxiv.org/abs/1605.02688.

Keiichi Tokuda, Yoshihiko Nankaku, Tomoki Toda, Heiga Zen, Junichi Yamagishi, and Keiichiro Oura. Speech synthesis based on hidden Markov models.
1612.07837#30
1612.07837#32
1612.07837
[ "1602.07868" ]
1612.07837#32
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
Proceedings of the IEEE, 101(5):1234-1252, 2013.

Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.

Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122, 2015.
1612.07837#31
1612.07837#33
1612.07837
[ "1602.07868" ]
1612.07837#33
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
Wojciech Zaremba. An empirical exploration of recurrent network architectures. 2015.

# APPENDIX A

A MODEL VARIANT: SAMPLERNN-WAVENET HYBRID

The SampleRNN-WaveNet model has two modules operating at two different clock rates. The slower clock-rate module (the frame-level module) sees one frame (each of size FS) at a time, while the faster clock-rate component (the sample-level component) sees one acoustic sample at a time; i.e., the ratio of clock rates for these two modules is the size of a single frame. The number of sequential steps for the frame-level component is FS times lower. We repeat the output of each step of the frame-level component FS times so that the number of time steps in the output of both components matches. The outputs of both modules are concatenated at every time step and further processed by non-linearities, independently for every time step, before generating the final output. In our experiments, we kept the size of a single frame (FS) at 128. We tried two variants of this model: 1. a fully convolutional WaveNet and 2. an RNN-WaveNet. In the fully convolutional WaveNet, both modules described above are implemented using dilated convolutions as described in the original WaveNet model. In the RNN-WaveNet, we use a high-capacity RNN in the frame-level module to model the dependency between frames. The sample-level WaveNet in the RNN-WaveNet has a receptive field of 509 samples from the past.
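A sketch of how the two streams could be aligned and combined as described above: each frame-level output is repeated FS times and concatenated with the per-sample features before a per-timestep non-linearity. The module sizes are assumptions for illustration, not the authors' configuration:

```python
import torch
import torch.nn as nn

FS = 128   # frame size

class CombineStreams(nn.Module):
    def __init__(self, frame_dim=1024, sample_dim=256, hidden=512, levels=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(frame_dim + sample_dim, hidden),
                                 nn.ReLU(),
                                 nn.Linear(hidden, levels))  # logits over quantization bins

    def forward(self, frame_feats, sample_feats):
        # frame_feats: (batch, T // FS, frame_dim); sample_feats: (batch, T, sample_dim)
        upsampled = frame_feats.repeat_interleave(FS, dim=1)   # repeat each frame FS times
        combined = torch.cat([upsampled, sample_feats], dim=-1)
        return self.mlp(combined)                              # applied independently per timestep

frame = torch.randn(4, 16, 1024)           # 16 frames
sample = torch.randn(4, 16 * FS, 256)
logits = CombineStreams()(frame, sample)   # (4, 2048, 256)
```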
1612.07837#32
1612.07837#34
1612.07837
[ "1602.07868" ]
1612.07837#34
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
Although these models are designed with the intention of combining the two models to harness their best features, preliminary experiments show that this variant does not yet meet our expectations, which points to possible future work.
1612.07837#33
1612.07837
[ "1602.07868" ]
1612.04936#0
Learning through Dialogue Interactions by Asking Questions
arXiv:1612.04936v4 [cs.CL] 13 Feb 2017

Published as a conference paper at ICLR 2017

# LEARNING THROUGH DIALOGUE INTERACTIONS BY ASKING QUESTIONS

Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston
Facebook AI Research, New York, USA
{jiwel,ahm,spchopra,ranzato,jase}@fb.com

# ABSTRACT
1612.04936#1
1612.04936
[ "1511.06931" ]
1612.04936#1
Learning through Dialogue Interactions by Asking Questions
A good dialogue agent should have the ability to interact with users by both responding to questions and by asking questions, and importantly to learn from both types of interaction. In this work, we explore this direction by designing a simulator and a set of synthetic tasks in the movie domain that allow such interactions between a learner and a teacher. We investigate how a learner can benefit from asking questions in both offline and online reinforcement learning settings, and demonstrate that the learner improves when asking questions. Finally, real experiments with Mechanical Turk validate the approach.
1612.04936#0
1612.04936#2
1612.04936
[ "1511.06931" ]
1612.04936#2
Learning through Dialogue Interactions by Asking Questions
Our work represents a first step in developing such end-to-end learned interactive dialogue agents.

# INTRODUCTION

When a student is asked a question by a teacher, but is not confident about the answer, they may ask for clarification or hints. A good conversational agent (a learner/bot/student) should have this ability to interact with a dialogue partner (the teacher/user). However, recent efforts have mostly focused on learning through fixed answers provided in the training set, rather than through interactions. In that case, when a learner encounters a confusing situation such as an unknown surface form (phrase or structure), a semantically complicated sentence or an unknown word, the agent will either make a (usually poor) guess or will redirect the user to other resources (e.g., a search engine, as in Siri). Humans, in contrast, can adapt to many situations by asking questions. We identify three categories of mistakes a learner can make during dialogue1: (1) the learner has problems understanding the surface form of the text of the dialogue partner, e.g., the phrasing of a question; (2) the learner has a problem with reasoning, e.g., they fail to retrieve and connect the relevant knowledge to the question at hand; (3) the learner lacks the knowledge necessary to answer the question in the first
1612.04936#1
1612.04936#3
1612.04936
[ "1511.06931" ]
1612.04936#3
Learning through Dialogue Interactions by Asking Questions
place; that is, the knowledge sources the student has access to do not contain the needed information. All the situations above can potentially be addressed through interaction with the dialogue partner. Such interactions can be used to learn to perform better in future dialogues. If a human student has problems understanding a teacher's question, they might ask the teacher to clarify the question. If the student doesn't know where to start, they might ask the teacher to point out which known facts are most relevant. If the student doesn't know the information needed at all, they might ask the teacher to tell them the knowledge they're missing, writing it down for future use.
1612.04936#2
1612.04936#4
1612.04936
[ "1511.06931" ]
1612.04936#4
Learning through Dialogue Interactions by Asking Questions
In this work, we try to bridge the gap between how a human and an end-to-end machine learning dialogue agent deal with these situations: our student has to learn how to learn. We hence design a simulator and a set of synthetic tasks in the movie question answering domain that allow a bot to interact with a teacher to address the issues described above. Using this framework, we explore how a bot can benefit from interaction by asking questions in both offline supervised settings and online reinforcement learning settings, as well as how to choose when to ask questions in the latter setting. In both cases, we find that the learning system improves through interacting with users.

1 This list is not exhaustive; for example, we do not address a failure in the dialogue generation stage.
1612.04936#3
1612.04936#5
1612.04936
[ "1511.06931" ]
1612.04936#5
Learning through Dialogue Interactions by Asking Questions
Finally, we validate our approach on real data where the teachers are humans using Amazon Mechanical Turk, and observe similar results.

# 2 RELATED WORK

Learning language through interaction and feedback can be traced back to the 1950s, when Wittgenstein argued that the meaning of words is best understood from their use within given language games (Wittgenstein, 2010). The direction of interactive language learning through language games has been explored in the early seminal work of Winograd (Winograd, 1972), and in the recent SHRDLURN system (Wang et al., 2016). In a broader context, the usefulness of feedback and interactions has been validated in the setting of language learning, such as second language learning (Bassiri, 2011) and learning by students (Higgins et al., 2002; Latham, 1997; Werts et al., 1995). In the context of dialogue, with the recent popularity of deep learning models, many neural dialogue systems have been proposed. These include chit-chat end-to-end dialogue systems (Vinyals & Le, 2015; Li et al., 2015; Sordoni et al., 2015), which directly generate a response given the previous history of user utterances. They also include a collection of goal-oriented dialogue systems (Wen et al., 2016; Su et al., 2016; Bordes & Weston, 2016), which complete a certain task such as booking a ticket or making a reservation at a restaurant. Another line of research focuses on supervised learning for question answering from dialogues (Dodge et al., 2015; Weston, 2016), using either a given database of knowledge (Bordes et al., 2015; Miller et al., 2016) or short stories (Weston et al., 2015). As far as we know, current dialogue systems mostly focus on learning through fixed supervised signals rather than interacting with users. Our work is closely related to the recent work of Weston (2016), which explores the problem of learning through conducting conversations, where supervision is given naturally in the response during the conversation. Their work introduced multiple learning schemes from dialogue utterances.
1612.04936#4
1612.04936#6
1612.04936
[ "1511.06931" ]
1612.04936#6
Learning through Dialogue Interactions by Asking Questions
In particular, the authors discussed Imitation Learning, where the agent tries to learn by imitating the dialogue interactions between a teacher and an expert student; Reward-Based Imitation Learning, which only learns by imitating the dialogue interactions that have correct answers; and Forward Prediction, which learns by predicting the teacher's feedback to the student's response. Despite the fact that Forward Prediction does not use human-labeled rewards, the authors show that it yields promising results. However, their work did not fully explore the ability of an agent to learn via questioning and interaction. Our work can be viewed as a natural extension of theirs.
1612.04936#5
1612.04936#7
1612.04936
[ "1511.06931" ]
1612.04936#7
Learning through Dialogue Interactions by Asking Questions
# 3 THE TASKS

In this section we describe the dialogue tasks we designed2. They are tailored for the three different situations described in Section 1 that motivate the bot to ask questions: (1) Question Clarification, in which the bot has problems understanding its dialogue partner's text; (2) Knowledge Operation, in which the bot needs to ask for help to perform reasoning steps over an existing knowledge base; and (3) Knowledge Acquisition, in which the bot's knowledge is incomplete and needs to be filled.
1612.04936#6
1612.04936#8
1612.04936
[ "1511.06931" ]
1612.04936#8
Learning through Dialogue Interactions by Asking Questions
For our experiments we adapt the WikiMovies dataset (Weston et al., 2015), which consists of roughly 100k questions over 75k entities based on questions with answers in the open movie dataset (OMDb). The training/dev/test sets respectively contain 181638 / 9702 / 9698 examples. The accuracy metric corresponds to the percentage of times the student gives correct answers to the teacher's questions. Each dialogue takes place between a teacher and a bot. In this section we describe how we generate tasks using a simulator; Section 4.2 discusses how we test similar setups with real data using Mechanical Turk.
1612.04936#7
1612.04936#9
1612.04936
[ "1511.06931" ]
1612.04936#9
Learning through Dialogue Interactions by Asking Questions
The bot is first presented with facts from the OMDb KB. This allows us to control the exact knowledge the bot has access to. Then, we include several teacher-bot question-answer pairs unrelated to the question the bot needs to answer, which we call conversation histories3. In order to explore the

2 Code and data are available at https://github.com/facebook/MemNN/tree/master/AskingQuestions.
3 These history QA pairs can be viewed as distractions and are used to test the bot's ability to separate the wheat from the chaff. For each dialogue, we incorporate 5 extra QA pairs (10 sentences).
1612.04936#8
1612.04936#10
1612.04936
[ "1511.06931" ]
1612.04936#10
Learning through Dialogue Interactions by Asking Questions
benefits of asking clarification questions during a conversation, for each of the three scenarios our simulator generated data for two different settings, namely Question-Answering (denoted by QA) and Asking-Question (denoted by AQ). For both QA and AQ, the bot needs to give an answer to the teacher's original question at the end. The details of the simulator can be found in the appendix.

# 3.1 QUESTION CLARIFICATION

In this setting, the bot does not understand the teacher's question. We focus on a special situation where the bot does not understand the teacher because of typo/spelling mistakes, as shown in Figure 1. We intentionally misspell some words in the questions, such as replacing the word
1612.04936#9
1612.04936#11
1612.04936
[ "1511.06931" ]
1612.04936#11
Learning through Dialogue Interactions by Asking Questions
"movie" with "movvie" or "star" with "sttar".4 To make sure that the bot will have problems understanding the question, we guarantee that the bot has never encountered the misspellings before: the misspelling-introducing mechanisms in the training, dev and test sets are different, so the same word will be misspelled in different ways in different sets. We present two AQ tasks: (i) Question Paraphrase, where the student asks the teacher to use a paraphrase that does not contain spelling mistakes to clarify the question by asking "what do you mean?"; and (ii) Question Verification, where the student asks the teacher whether the original typo-bearing question corresponds to another question without the spelling mistakes (e.g., "Do you mean which
1612.04936#10
1612.04936#12
1612.04936
[ "1511.06931" ]
1612.04936#12
Learning through Dialogue Interactions by Asking Questions
film did Tom Hanks appear in?"). The teacher will give feedback by giving a paraphrase of the original question without spelling mistakes (e.g., "I mean which film did Tom Hanks appear in") in Question Paraphrase, or positive/negative feedback in Question Verification. Next the student will give an answer and the teacher will give positive/negative feedback depending on whether the student's answer is correct. Positive and negative feedback are variants of "No, that's incorrect" or "Yes, that's right"5.
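A sketch of how split-specific misspellings could be introduced so that the same word is corrupted differently in train, dev, and test. The particular corruption rules below are illustrative, not the ones used to build the dataset:

```python
def misspell(word: str, split: str) -> str:
    """Corrupt a word with a different, deterministic rule per data split."""
    if len(word) < 3:
        return word
    if split == "train":
        return word[0] + word[1] * 2 + word[2:]   # duplicate the 2nd letter: movie -> moovie
    if split == "dev":
        return word[:-1] + word[-1] * 2           # duplicate the last letter: movie -> moviee
    return word[:2] + word[2] * 2 + word[3:]      # test: duplicate the 3rd letter: movie -> movvie

for split in ("train", "dev", "test"):
    print(split, misspell("movie", split), misspell("star", split))
```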
1612.04936#11
1612.04936#13
1612.04936
[ "1511.06931" ]
1612.04936#13
Learning through Dialogue Interactions by Asking Questions
In these tasks, the bot has access to all relevant entries in the KB.

3.2 KNOWLEDGE OPERATION

The bot has access to all the relevant knowledge (facts) but lacks the ability to perform the necessary reasoning operations over them; see Figure 2. We focus on a special case where the bot tries to understand which facts are relevant. We explore two settings: Ask For Relevant Knowledge (Task 3), where the bot directly asks the teacher to point out the relevant KB fact, and Knowledge Verification (Task 4), where the bot asks whether the teacher's question is relevant to one particular KB fact. The teacher will point out the relevant KB fact in the Ask For Relevant Knowledge setting or give a positive or negative response in the Knowledge Verification setting. Then the bot will give an answer to the teacher's original question and the teacher will give feedback on the answer.
1612.04936#12
1612.04936#14
1612.04936
[ "1511.06931" ]
1612.04936#14
Learning through Dialogue Interactions by Asking Questions
3.3 KNOWLEDGE ACQUISITION

For the tasks in this subsection, the bot has an incomplete KB and entities important to the dialogue are missing from it; see Figure 3. For example, given the question "Which movie did Tom Hanks star in?", the missing part could be the entity that the teacher is asking about (the question entity for short, Tom Hanks in this example), the relation entity (starred actors), the answer to the question (Forrest Gump), or a combination of the three. In all cases, the bot has little chance of giving the correct answer due to the missing knowledge. It needs to ask the teacher for the answer in order to acquire the missing knowledge. The teacher will give the answer and then move on to other questions (captured in the conversational history). Later they will come back and re-ask the question. At this point, the bot needs to give an answer, since the entity is no longer new. Although the correct answer has effectively been included in the earlier part of the dialogue as the answer to the bot's question, as we will show later, many of the tasks are not as trivial as they look when the teacher re-asks the question. This is because the bot's model needs to memorize the missing entity and then construct the links between the missing entity and known ones. This is akin to the real-world case where a student might make the same mistake again and again even though the teacher corrects them each time their answer is wrong.
1612.04936#13
1612.04936#15
1612.04936
[ "1511.06931" ]
1612.04936#15
Learning through Dialogue Interactions by Asking Questions
We now detail each task in turn.

4 Many reasons could lead to the bot not understanding the teacher's question, e.g., the teacher's question has an unknown phrase structure rather than unknown words. We choose to use spelling mistakes because of the ease of dataset construction.
5 In the datasets we build, there are 6 templates for positive feedback and 6 templates for negative feedback.

Task 1: Question Paraphrase (QA)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise
kb: Forrest Gump directed by Robert Zemeckis
T/S :
1612.04936#14
1612.04936#16
1612.04936
[ "1511.06931" ]
1612.04936#16
Learning through Dialogue Interactions by Asking Questions
Conversation History.
T : Which movvie did Tom Hanks sttar in ?
S : Larry Crowne
T : Wrong. (-)

Task 1: Question Paraphrase (AQ)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise
kb: Forrest Gump directed by Robert Zemeckis
T/S : Conversation History.
T : Which movvie did Tom Hanks sttar in ?
S : What do you mean ?
T : I mean which
1612.04936#15
1612.04936#17
1612.04936
[ "1511.06931" ]
1612.04936#17
Learning through Dialogue Interactions by Asking Questions
film did Tom Hanks appear in.
T : Which movvie did Tom Hanks sttar in ?
S : Forrest Gump
T : That's correct. (+)

Task 2: Question Verification (QA)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise
kb: Forrest Gump directed by Robert Zemeckis
T/S : Conversation History.
T : Which movvie did Tom Hanks sttar in ?
S : Forrest Gump
T : That's correct. (+)

Task 2: Question Verification (AQ)
kb:
1612.04936#16
1612.04936#18
1612.04936
[ "1511.06931" ]
1612.04936#18
Learning through Dialogue Interactions by Asking Questions
Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise
kb: Forrest Gump directed by Robert Zemeckis
T/S : Conversation History.
T : Which movvie did Tom Hanks sttar in ?
S : Do you mean which film did Tom Hanks appear in ?
T : That's correct. (+)
T : Which movvie did Tom Hanks sttar in ?
S : Larry Crowne
T : Sorry, that's not right. (-)
1612.04936#17
1612.04936#19
1612.04936
[ "1511.06931" ]
1612.04936#19
Learning through Dialogue Interactions by Asking Questions
Figure 1: Sample dialogues for the two Question Clarification tasks (rows) using both the traditional QA setting (left column) and the AQ setting (right column). In each case the same example is given for simplicity. Black text prefixed by "kb:" denotes KB knowledge that the student has access to. Blue text is spoken by the teacher, denoted by T. (+) denotes a reward of 1 (and 0 otherwise) that the teacher assigns to the bot. Red text denotes responses or questions posed by the bot, denoted by S. Brown denotes typos deliberately introduced by the authors. For the Question Verification setting, the student can either ask a correct (pertinent) question (as in this example) or an incorrect (irrelevant) one. The teacher will give positive or negative feedback based on the correctness of the student's question. In our offline supervised learning experiments, the probability of asking pertinent questions and correctly answering the original question from the teacher is set to 0.5. Finally, T/S denotes 5 pairs of questions and answers that are irrelevant to the rest of the conversation.

Task 3: Ask For Relevant Knowledge (AQ)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise
kb: Forrest Gump directed by Robert Zemeckis
T/S :
1612.04936#18
1612.04936#20
1612.04936
[ "1511.06931" ]
1612.04936#20
Learning through Dialogue Interactions by Asking Questions
Conversation History.
T : Which movie did Tom Hanks star in ?
S : Can you give me a hint ?
T : It is related to the kb fact that Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise
T : Which movie did Tom Hanks star in ?
S : Forrest Gump
T : That's correct. (+)

Task 4: Knowledge Verification (AQ)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise
kb: Forrest Gump directed by Robert Zemeckis
T/S : Conversation History.
T : Which movie did Tom Hanks star in ?
S : Does it have something to do with Larry Crowne directed by Tom Hanks?
T : No. That's wrong. (-)
T : Which movie did Tom Hanks star in ?
1612.04936#19
1612.04936#21
1612.04936
[ "1511.06931" ]
1612.04936#21
Learning through Dialogue Interactions by Asking Questions
S : Forrest Gump
T : That's correct. (+)

Figure 2: Sample dialogues for the Knowledge Operation tasks.

Missing Question Entity: The entity that the teacher is asking about is missing from the knowledge base. All KB facts containing the question entity will be hidden from the bot. In the example for Task 5 in Figure 3, since the teacher's question contains the entity Tom Hanks, the KB facts that contain Tom Hanks are hidden from the bot.
1612.04936#20
1612.04936#22
1612.04936
[ "1511.06931" ]
1612.04936#22
Learning through Dialogue Interactions by Asking Questions
Figure 3: Different tasks for Knowledge Acquisition. Crossed-out lines correspond to KB entries that are hidden from the bot.

Task 5: Missing Question Entity (QA)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks
kb: Forrest Gump starred actors Sally Field
kb: Forrest Gump directed by Robert Zemeckis
T/S : Conversation History.
T : Which movie did Tom Hanks star in ?
S : Forrest Gump
T : That's correct. (+)

Task 5: Missing Question Entity (AQ)
kb:
1612.04936#21
1612.04936#23
1612.04936
[ "1511.06931" ]
1612.04936#23
Learning through Dialogue Interactions by Asking Questions
Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks
kb: Forrest Gump starred actors Sally Field
kb: Forrest Gump directed by Robert Zemeckis
T : Which movie did Tom Hanks star in ?
S : I don't know. What's the answer?
T : The answer is Forrest Gump.
T/S : Conversation History.
T : Which movie did Tom Hanks star in ?
S : Forrest Gump
T : That's correct. (+)

Task 6: Missing Answer Entity (AQ)
kb:
1612.04936#22
1612.04936#24
1612.04936
[ "1511.06931" ]
1612.04936#24
Learning through Dialogue Interactions by Asking Questions
Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks
kb: Forrest Gump starred actors Sally Field
kb: Forrest Gump directed by Robert Zemeckis
T : Which movie did Tom Hanks star in ?
S : I don't know. What's the answer?
T : The answer is Forrest Gump.
T/S : Conversation History.
T : Which movie did Tom Hanks star in ?
S : Forrest Gump
T : That's correct. (+)

Task 7: Missing Relation Entity (AQ)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks
kb: Forrest Gump starred actors Sally Field
kb: Forrest Gump directed by Robert Zemeckis
T : Which movie did Tom Hanks star in ?
1612.04936#23
1612.04936#25
1612.04936
[ "1511.06931" ]
1612.04936#25
Learning through Dialogue Interactions by Asking Questions
S : I don't know. What's the answer?
T : The answer is Forrest Gump.
T/S : Conversation History.
T : Which movie did Tom Hanks star in ?
S : Forrest Gump
T : That's correct. (+)

Task 8: Missing Triple (AQ)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks
kb: Forrest Gump starred actors Sally Field
kb: Forrest Gump directed by Robert Zemeckis
T : Which movie did Tom Hanks star in ?
1612.04936#24
1612.04936#26
1612.04936
[ "1511.06931" ]
1612.04936#26
Learning through Dialogue Interactions by Asking Questions
S : I don't know. What's the answer?
T : The answer is Forrest Gump.
T/S : Conversation History.
T : Which movie did Tom Hanks star in ?
S : Forrest Gump
T : That's correct. (+)

Task 9: Missing Everything (AQ)
kb: Larry Crowne directed by Tom Hanks
kb: Forrest Gump starred actors Tom Hanks
kb: Forrest Gump starred actors Sally Field
kb: Forrest Gump directed by Robert Zemeckis
T : Which movie did Tom Hanks star in ?
1612.04936#25
1612.04936#27
1612.04936
[ "1511.06931" ]
1612.04936#27
Learning through Dialogue Interactions by Asking Questions
S : I don't know. What's the answer?
T : The answer is Forrest Gump.
T/S : Conversation History.
T : Which movie did Tom Hanks star in ?
S : Forrest Gump
T : That's correct. (+)

Missing Answer Entity: The answer entity to the question is unknown to the bot. All KB facts that contain the answer entity will be hidden. Hence, in Task 6 of Figure 3, all KB facts containing the answer entity Forrest Gump will be hidden from the bot.

Missing Relation Entity: The relation type is unknown to the bot. In Task 7 of Figure 3, all KB facts that express the relation starred actors are hidden from the bot.
1612.04936#26
1612.04936#28
1612.04936
[ "1511.06931" ]
1612.04936#28
Learning through Dialogue Interactions by Asking Questions
Missing Triples: The triple that expresses the relation between the question entity and the answer entity is hidden from the bot. In Task 8 of Figure 3, the triple "Forrest Gump (question entity) starred actors Tom Hanks (answer entity)" will be hidden.

Missing Everything: The question entity, the relation entity, and the answer entity are all missing from the KB. All KB facts in Task 9 of Figure 3 will be removed, since they contain either the relation entity (i.e., starred actors), the question entity (i.e., Forrest Gump), or the answer entity Tom Hanks.
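A sketch of how KB facts could be hidden to build these knowledge-acquisition variants; the triple representation and the particular entity/relation names are illustrative, not the dataset's internal format:

```python
KB = [
    ("Larry Crowne", "directed_by", "Tom Hanks"),
    ("Forrest Gump", "starred_actors", "Tom Hanks"),
    ("Forrest Gump", "starred_actors", "Sally Field"),
    ("Forrest Gump", "directed_by", "Robert Zemeckis"),
]

def hide_facts(kb, missing_entities=(), missing_relations=()):
    """Drop every triple that mentions a missing entity or expresses a missing relation."""
    kept = []
    for head, relation, tail in kb:
        if relation in missing_relations or head in missing_entities or tail in missing_entities:
            continue
        kept.append((head, relation, tail))
    return kept

# Missing Question Entity (Task 5): hide everything mentioning Tom Hanks.
print(hide_facts(KB, missing_entities={"Tom Hanks"}))
# Missing Everything (Task 9): hide the question entity, the answer entity, and the relation.
print(hide_facts(KB, missing_entities={"Tom Hanks", "Forrest Gump"},
                 missing_relations={"starred_actors"}))
```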
1612.04936#27
1612.04936#29
1612.04936
[ "1511.06931" ]
1612.04936#29
Learning through Dialogue Interactions by Asking Questions
# 4 TRAIN/TEST REGIME

We now discuss in detail the regimes we used to train and test our models, which are divided between evaluation within our simulator and evaluation on real data collected via Mechanical Turk.

4.1 SIMULATOR

Using our simulator, our objective was twofold. We first wanted to validate the usefulness of asking questions in all the settings described in Section 3. Second, we wanted to assess the ability of our student bot to learn when to ask questions. To accomplish these two objectives, we explored training our models with our simulator using two methodologies, namely Offline Supervised Learning and Online Reinforcement Learning.

4.1.1 OFFLINE SUPERVISED LEARNING

The motivation behind training our student models in an offline supervised setting was primarily to test the usefulness of the ability to ask questions. The dialogues are generated as described in the previous section, and the bot's role is generated with a
1612.04936#28
1612.04936#30
1612.04936
[ "1511.06931" ]
1612.04936#30
Learning through Dialogue Interactions by Asking Questions
fixed policy. To add a degree of realism, we chose a policy where answers to the teacher's questions are correct 50% of the time and incorrect otherwise. Similarly, in tasks where questions can be irrelevant, they are asked correctly only 50% of the time.6 The offline setting explores different combinations of training and testing scenarios, which mimic different situations in the real world. The aim is to understand when and how observing interactions between two agents can help the bot improve its performance on different tasks. As a result, we construct training and test sets in three ways across all tasks, resulting in 9 different scenarios per task, each of which corresponds to a real-world scenario. The three training sets we generated are referred to as TrainQA, TrainAQ, and TrainMix. TrainQA follows the QA setting discussed in the previous section: the bot never asks questions and only tries to answer immediately. TrainAQ follows the AQ setting: the student always first asks a question in response to the teacher's original question before answering. TrainMix is a combination of the two, where 50% of the time the student asks a question and 50% of the time it does not. The three test sets we generated are referred to as TestQA, TestAQ, and TestModelAQ. TestQA and TestAQ are generated similarly to TrainQA and TrainAQ, but using a perfect fixed policy (rather than 50% correct) for evaluation purposes. In the TestModelAQ setting the model has to get the form of the question correct as well. In the Question Verification and Knowledge Verification tasks there are many possible ways of forming the question, and only some of them are correct; the model has to choose the right question to ask. E.g., it should ask
1612.04936#29
1612.04936#31
1612.04936
[ "1511.06931" ]
1612.04936#31
Learning through Dialogue Interactions by Asking Questions
"Does it have something to do with the fact that Larry Crowne directed by Tom Hanks?" rather than "Does it have something to do with the fact that Forrest Gump directed by Robert Zemeckis?" when the latter is irrelevant (the candidate list of questions is generated from the known knowledge base entries with respect to that question). The policy is trained using either the TrainAQ or TrainMix set, depending on the training scenario. The teacher will reply to the question, giving positive feedback if the student's question is correct, and no response and negative feedback otherwise. The student will then give the final answer. The difference between TestModelAQ and TestAQ only exists in the Question Verification and Knowledge Verification tasks; in the other tasks there is only one way to ask the question and TestModelAQ and TestAQ are identical. To summarize, for every task listed in Section 3 we train one model for each of the three training sets (TrainQA, TrainAQ, TrainMix) and test each of these models on the three test sets (TestQA, TestAQ, and TestModelAQ), resulting in 9 combinations. For the purpose of notation, a train/test combination is denoted by "TrainSetting+TestSetting". For example, TrainAQ+TestQA denotes a model trained on the TrainAQ dataset and tested on the TestQA dataset. Each combination has a real-world interpretation. For instance, TrainAQ+TestQA refers to a scenario where a student can ask the teacher questions during learning but cannot do so while taking an exam. Similarly, TrainQA+TestQA describes a stoic teacher that never answers a student's question at either learning or examination time. The setting TrainQA+TestAQ corresponds to the case where a lazy
1612.04936#30
1612.04936#32
1612.04936
[ "1511.06931" ]
1612.04936#32
Learning through Dialogue Interactions by Asking Questions
student never asks questions at learning time but gets anxious during the examination and always asks a question.

4.1.2 ONLINE REINFORCEMENT LEARNING (RL)

We also explored scenarios where the student learns the ability to decide when to ask a question; in other words, the student learns how to learn. Although it is in the student's interest to ask questions at every step of the conversation, since the response to its question will contain extra information, we don't want our model to learn this behavior. Each time a human student asks a question, there
1612.04936#31
1612.04936#33
1612.04936
[ "1511.06931" ]
1612.04936#33
Learning through Dialogue Interactions by Asking Questions
is a cost associated with that action. This cost is a reflection of the patience of the teacher, or more generally of the users interacting with the bot in the wild: users won't find the bot engaging if it always asks clarification questions. The student should thus be judicious about asking questions and learn when and what to ask. For instance, if the student is confident about the answer, there is no need for it to ask. Or, if the teacher's question is so hard that clarification is unlikely to help enough to get the answer right, then it should also refrain from asking.
1612.04936#32
1612.04936#34
1612.04936
[ "1511.06931" ]
1612.04936#34
Learning through Dialogue Interactions by Asking Questions
We now discuss how we model this problem in the Reinforcement Learning framework. The bot is presented with KB facts (some facts might be missing depending on the task) and a question. At this point it needs to decide whether or not to ask a question. The decision whether to ask is made by a binary policy PRL(Question). If the student chooses to ask a question, it is penalized by costAQ. We explored different values of costAQ in the range [0, 2], which we consider as modeling the patience of the teacher.
1612.04936#33
1612.04936#35
1612.04936
[ "1511.06931" ]
1612.04936#35
Learning through Dialogue Interactions by Asking Questions
The goal of this setting is to find the best policy for asking/not asking questions, i.e., the one that leads to the highest cumulative reward. The teacher replies appropriately if the student asks a question. The student eventually gives an answer to the teacher's initial question at the end, using the policy PRL(Answer), regardless of whether it asked a question. The student gets a reward of +1 if its final answer is correct and -1 otherwise. Note that the student can ask at most one question and that the type of question is always specified by the task under consideration. The final reward the student gets is the cumulative reward over the current dialogue episode. In particular, the reward structure we propose is the following:
1612.04936#34
1612.04936#36
1612.04936
[ "1511.06931" ]
1612.04936#36
Learning through Dialogue Interactions by Asking Questions
                         Asking Question    Not Asking Question
Final Answer Correct     1 - costAQ         1
Final Answer Incorrect   -1 - costAQ        -1

Table 1: Reward structure for the Reinforcement Learning setting.

For each of the tasks described in Section 3, we consider three different RL scenarios.

Good-Student: The student is presented with all relevant KB facts. There are no misspellings or unknown words in the teacher's question. This represents a knowledgeable student in the real world that knows as much as it needs to know (e.g., a large knowledge base, large vocabulary).
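The reward structure in Table 1 can be written compactly as follows (a direct transcription, with cost_aq playing the role of costAQ):

```python
def episode_reward(asked_question: bool, answer_correct: bool, cost_aq: float) -> float:
    """Cumulative reward for one dialogue episode under the structure of Table 1."""
    reward = 1.0 if answer_correct else -1.0
    if asked_question:
        reward -= cost_aq   # asking is penalized by the teacher's 'patience' cost
    return reward

assert episode_reward(True, True, 0.5) == 0.5    # 1 - costAQ
assert episode_reward(True, False, 0.5) == -1.5  # -1 - costAQ
assert episode_reward(False, True, 0.5) == 1.0
```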
1612.04936#35
1612.04936#37
1612.04936
[ "1511.06931" ]
1612.04936#37
Learning through Dialogue Interactions by Asking Questions
This setting is identical across all missing entity tasks (5-9).

Poor-Student: The KB facts or the questions presented to the student are flawed, depending on the task. For example, for the Question Clarification tasks, the student does not understand the question due to spelling mistakes. For the Missing Question Entity task, the entity that the teacher asks about is unknown to the student and all facts containing the entity are hidden from the student. This setting is similar to a student that is underprepared for the tasks.

Medium-Student: The combination of the previous two settings, where for 50% of the questions the student has access to the full KB and there are no new words, phrases or entities in the question, and 50% of the time the question and KB are taken from the Poor-Student setting.

4.2 MECHANICAL TURK DATA

Finally, to validate our approach beyond our simulator by using real language, we collected data via Amazon Mechanical Turk. Due to the cost of data collection, we focused on real-language versions of Tasks 4 (Knowledge Verification) and 8 (Missing Triple); see Secs. 3.2 and 3.3 for the simulator versions. That is, we collect dialogues and use them in an offline supervised learning setup similar to Section 4.1.1. This setup allows easily reproducible experiments. For Mechanical Turk Task 4, the bot is asked a question by a human teacher, but before answering it can ask the human if the question is related to one of the facts it knows about from its memory.
1612.04936#36
1612.04936#38
1612.04936
[ "1511.06931" ]
1612.04936#38
Learning through Dialogue Interactions by Asking Questions
[Figure 4 depicts an example dialogue: the teacher asks "Which mowvie did Tom Hanks sttar in?"; in the AQ branch the student asks "what do you mean?", the teacher replies "I mean which film did Tom Hanks appear in.", the student answers "Forest Gump." and receives "That's correct (+)" with reward 1 - costAQ; in the QA branch the student answers "Larry Crowne", receives "that's incorrect (-)" and reward -1.]

Figure 4: An illustration of the poor-student setting for RL Task 1 (Question Paraphrase).

It is then required to answer the original question, after some additional dialogue turns relating to other question/answer pairs (called "conversational history", as before).
1612.04936#37
1612.04936#39
1612.04936
[ "1511.06931" ]
1612.04936#39
Learning through Dialogue Interactions by Asking Questions
For Task 8, the bot is asked a question by a human but lacks the triple in its memory that would be needed to answer it. It is allowed to ask for the missing information, and the human responds to the question in free-form language. The bot is then required to answer the original question, again after some "conversational history" has transpired. We collect around 10,000 episodes (dialogues) for training, 1000 for validation, and 2500 for testing for each of the two tasks. In each case, we give instructions to the Turkers that still follow the original form of the task, but make the tasks contain realistic language written by humans. The instructions given to the Turkers are provided in the appendix. For both tasks, while the human Turkers replace the simulator that the bot was previously conversing with, the bot's dialogue actions (capabilities) are essentially unchanged from before. That is, when answering questions, the bot is now required to answer a human's questions rather than templated questions from the simulator. When the bot is asking questions, it still asks in the same form as before, e.g. questions like
1612.04936#38
1612.04936#40
1612.04936
[ "1511.06931" ]
1612.04936#40
Learning through Dialogue Interactions by Asking Questions
"Does it have something to do with X" for Task 4 or "I don't know. What's the answer?" for Task 8. However, its questions are now answered by a human. In both cases (asking and answering) the human data is richer, with potentially more complex language and lexical variability. Examples of the collected dialogues are given in Figure 5.

MTurk Task 4: Knowledge Verification (AQ)
kb: In Their Skin written by Joshua Close, Jeremy Power Regimbal
kb: In Their Skin starred actors Selma Blair, James D'arcy, Rachel Miner, Joshua Close
T:
1612.04936#39
1612.04936#41
1612.04936
[ "1511.06931" ]
1612.04936#41
Learning through Dialogue Interactions by Asking Questions
What film was written by Joshua Close? S: Does it have something to do with "In Their Skin starred actors Joshua Close"? T: No we were looking for who wrote the film not who starred in it. . . . MTurk Task 8: Missing Triple (AQ): T: What movies does Timothy Scott Bogart have writing credit for? S: I don't know. What's the answer? T: Extreme Ops or The I Inside is the right answer. . . . Figure 5: Sample dialogues for Mechanical Turk versions of Tasks 4 and 8. Compared to the original tasks (see Figs. 2 and 3), the teacher's questions, and the teacher's responses to the student's questions, are written by humans and are more complex and contain more variety.
1612.04936#40
1612.04936#42
1612.04936
[ "1511.06931" ]
1612.04936#42
Learning through Dialogue Interactions by Asking Questions
# 5 MODELS For both offline supervised and online RL settings, we use the End-to-End Memory Network model (MemN2N) (Sukhbaatar et al., 2015) as a backbone. The model takes as input the last utterance of the dialogue history (the question from the teacher) as well as a set of memory contexts including short-term memories (the dialogue history between the bot and the teacher) and long-term memories
1612.04936#41
1612.04936#43
1612.04936
[ "1511.06931" ]
1612.04936#43
Learning through Dialogue Interactions by Asking Questions
(the knowledge base facts that the bot has access to), and outputs a label. We refer readers to the Appendix for more details about MemN2N. Offline Supervised Settings: The first learning strategy we adopt is the reward-based imitation strategy (denoted vanilla-MemN2N) described in (Weston, 2016), where at training time the model maximizes the log-likelihood of the correct answers the student gave (examples with incorrect final answers are discarded). Candidate answers are words that appear in the memories, which means the bot can only predict entities that it has seen or known before. We also use a variation of MemN2N called "context MemN2N" (Cont-MemN2N for short), where we replace each word's embedding with the average of its embedding (random for unseen words) and the embeddings of the other words that appear around it. We use both the preceding and following words as context, and the number of context words is a hyperparameter selected on the dev set. An issue with both vanilla-MemN2N and Cont-MemN2N is that the model only makes use of the bot's own answers as signals and ignores the teacher's feedback.
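To make the Cont-MemN2N preprocessing concrete, here is a minimal sketch of the context-averaging step, assuming a simple dictionary of pre-trained vectors; the function name, window size, and random-initialization scale are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def context_averaged_embeddings(tokens, embed, window=2, dim=50, seed=0):
    """Replace each word's vector with the average of its own embedding and
    the embeddings of the surrounding words (both preceding and following),
    using a random vector for unseen words, as Cont-MemN2N does."""
    rng = np.random.default_rng(seed)
    # Look up an embedding for every token; unseen words get a random vector.
    vecs = [np.asarray(embed[t]) if t in embed else rng.normal(scale=0.1, size=dim)
            for t in tokens]
    out = []
    for i in range(len(tokens)):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        out.append(np.mean(vecs[lo:hi], axis=0))  # average over the local context span
    return np.stack(out)
```

Here the `window` parameter plays the role of the "number of context words" hyperparameter tuned on the dev set in the setup described above.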
1612.04936#42
1612.04936#44
1612.04936
[ "1511.06931" ]
1612.04936#44
Learning through Dialogue Interactions by Asking Questions
We thus propose to use a model that jointly predicts the bot's answers and the teacher's feedback (denoted as TrainQA (+FP)). The bot's answers are predicted using a vanilla-MemN2N, and the teacher's feedback is predicted using the Forward Prediction (FP) model as described in (Weston, 2016). We refer the readers to the Appendix for the FP model details. At training time, the models learn to jointly predict the teacher's feedback and the answers with positive reward. At test time, the model will only predict the bot's answer. For the TestModelAQ setting described in Section 4, the model needs to decide which question to ask. Again, we use a vanilla-MemN2N that takes as input the question and contexts, and outputs the question the bot will ask. Online RL Settings: A binary vanilla-MemN2N (denoted as PRL(Question)) is used to decide whether the bot should or should not ask a question, with the teacher replying if the bot does ask something. A second MemN2N is then used to decide the bot's answer, denoted as PRL(Answer). PRL(Answer) for QA and AQ are two separate models, which means the bot will use different models for final-answer prediction depending on whether it chooses to ask a question or not.7 We use the REINFORCE algorithm (Williams, 1992) to update PRL(Question) and PRL(Answer). For each dialogue, the bot takes two sequential actions (a1, a2): to ask or not to ask a question (denoted as a1), and guessing the final answer (denoted as a2). Let r(a1, a2) denote the cumulative reward for the dialogue episode, computed using Table 1. The gradient to update the policy is given by: p(a1, a2) = PRL(Question)(a1) · PRL(Answer)(a2), ∇J(θ) ∝ ∇ log p(a1, a2) [r(a1, a2) − b] (1)
1612.04936#43
1612.04936#45
1612.04936
[ "1511.06931" ]
1612.04936#45
Learning through Dialogue Interactions by Asking Questions
where b is the baseline value, which is estimated using another MemN2N model that takes as input the query x and memory C, and outputs a scalar b denoting the estimate of the future reward. The baseline model is trained by minimizing the mean squared loss between the estimated reward b and the actual cumulative reward r, ||r − b||^2. We refer the readers to (Ranzato et al., 2015; Zaremba & Sutskever, 2015) for more details. The baseline estimator model is independent from the policy models, and its error is not backpropagated to them.
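As a concrete illustration of the update in Eq. (1) and the baseline regression described above, here is a small PyTorch-style sketch; the tensor arguments and function name are hypothetical, and in the real system the log-probabilities come from the two MemN2N policies and the baseline from its own estimator network.

```python
import torch

def reinforce_losses(logp_ask, logp_answer, reward, baseline):
    """Surrogate losses for Eq. (1): the policy term uses the advantage
    r - b with the baseline detached, so the baseline regression error is
    not backpropagated into the two policies; the baseline itself is fit
    by mean squared error to the observed cumulative reward."""
    advantage = (reward - baseline).detach()
    log_p = logp_ask + logp_answer                    # log p(a1, a2), factored policies
    policy_loss = -(log_p * advantage).mean()         # minimizing this ascends J(theta)
    baseline_loss = torch.nn.functional.mse_loss(baseline, reward)  # ||r - b||^2
    return policy_loss, baseline_loss
```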
1612.04936#44
1612.04936#46
1612.04936
[ "1511.06931" ]
1612.04936#46
Learning through Dialogue Interactions by Asking Questions
In practice, we find the following training strategy yields better results: first train only PRL(Answer), updating gradients only for the policy that predicts the final answer. After the bot's final-answer policy is sufficiently learned, train both policies in parallel8. This has a real-world analogy where the bot first learns the basics of the task, and then learns to improve its performance via a question-asking policy tailored to the user's patience (represented by costAQ) and its own ability to answer questions. 7An alternative is to train one single model for final-answer prediction in both AQ and QA cases, similar to the TrainMix setting in the supervised learning setting. But we find that training AQ and QA separately for final-answer prediction yields slightly better results than the single-model setting.
1612.04936#45
1612.04936#47
1612.04936
[ "1511.06931" ]
1612.04936#47
Learning through Dialogue Interactions by Asking Questions
8 We implement this by running 16 epochs in total, updating only the model's policy for final answers in the first 8 epochs while updating both policies during the second 8 epochs. We pick the model that achieves the best reward on the dev set during the final 8 epochs. Due to relatively large variance for RL models, we repeat each task 5 times and keep the best model on each task.
[Table 2 layout: rows are TrainQA (Context), TrainAQ (Context), TrainMix (Context); each cell lists TestQA / TestAQ.]
Question Clarification and Knowledge Operation:
Task 1 (Q. Paraphrase): TrainQA 0.754 / 0.726; TrainAQ 0.640 / 0.889; TrainMix 0.751 / 0.846
Task 2 (Q. Verification): TrainQA 0.742 / 0.684; TrainAQ 0.643 / 0.807; TrainMix 0.740 / 0.789
Task 3 (Ask For Relevant K.): TrainQA 0.883 / 0.947; TrainAQ 0.716 / 0.985; TrainMix 0.870 / 0.985
Task 4 (K. Verification): TrainQA 0.888 / 0.959; TrainAQ 0.852 / 0.987; TrainMix 0.875 / 0.985
Knowledge Acquisition:
Task 5 (Q. Entity): TrainQA <0.01 / 0.224; TrainAQ <0.01 / 0.639; TrainMix <0.01 / 0.632
Task 6 (Answer Entity): TrainQA <0.01 / 0.120; TrainAQ <0.01 / 0.885; TrainMix <0.01 / 0.852
Task 7 (Relation Entity): TrainQA 0.241 / 0.301; TrainAQ 0.143 / 0.893; TrainMix 0.216 / 0.898
Task 8 (Triple): TrainQA 0.339 / 0.251; TrainAQ 0.154 / 0.884; TrainMix 0.298 / 0.886
Task 9 (Everything): TrainQA <0.01 / 0.058; TrainAQ <0.01 / 0.908; TrainMix <0.01 / 0.903
Table 2:
1612.04936#46
1612.04936#48
1612.04936
[ "1511.06931" ]
1612.04936#48
Learning through Dialogue Interactions by Asking Questions
Results for Cont-MemN2N on different tasks. 6 EXPERIMENTS 6.1 SIMULATOR Offline Results: Offline results are presented in Tables 2, 7 and 8 (the latter two are in the appendix). Table 7 presents results for the vanilla-MemN2N and Forward Prediction models. Table 2 presents results for Cont-MemN2N, which is better at handling unknown words. We repeat each experiment 10 times and report the best result. Finally, Table 8 presents results for the test scenario where the bot itself chooses when to ask questions. Observations can be summarized as follows: Asking questions helps at test time, which is intuitive since it provides additional evidence:
1612.04936#47
1612.04936#49
1612.04936
[ "1511.06931" ]
1612.04936#49
Learning through Dialogue Interactions by Asking Questions
• TrainAQ+TestAQ (questions can be asked at both training and test time) performs the best across all the settings.
• TrainQA+TestAQ (questions can be asked at training time but not at test time) performs worse than TrainQA+TestQA (questions can be asked at neither training nor test time) in the Question Clarification and Knowledge Operation tasks, due to the discrepancy between training and testing.
• TrainQA+TestAQ performs better than TrainQA+TestQA on all Knowledge Acquisition tasks, the only exception being the Cont-MemN2N model on the Missing Triple setting. The explanation is that for most tasks in Knowledge Acquisition, the learner has no chance of giving the correct answer without asking questions.
1612.04936#48
1612.04936#50
1612.04936
[ "1511.06931" ]
1612.04936#50
Learning through Dialogue Interactions by Asking Questions
The benefit from asking is thus large enough to compensate for the negative effect introduced by the data discrepancy between training and test time.
• TrainMix offers flexibility in bridging the gap between datasets generated using QA and AQ, very slightly underperforming TrainAQ+TestAQ, but gives competitive results on both TestQA and TestAQ in the Question Clarification and Knowledge Operation tasks.
• TrainAQ+TestQA (allowing questions at training time but forbidding them at test time) performs the worst, even worse than TrainQA+TestQA. This has a real-world analogy where a student becomes dependent on the teacher answering their questions, later struggling to answer the test questions without help.
1612.04936#49
1612.04936#51
1612.04936
[ "1511.06931" ]
1612.04936#51
Learning through Dialogue Interactions by Asking Questions
• In the Missing Question Entity task (the student does not know about the question entity), the Missing Answer Entity task (the student does not know about the answer entity), and the Missing Everything task, the bot achieves accuracy less than 0.01 if it does not ask questions at test time (i.e., TestQA).
• TestModelAQ, where the bot relies on its own model to ask questions at test time (and thus can ask irrelevant questions), performs similarly to asking the correct question at test time (TestAQ) and better than not asking questions (TestQA).
• Cont-MemN2N significantly outperforms vanilla-MemN2N. One explanation is that considering context provides significant evidence distinguishing correct answers from candidates in the dialogue history, especially in cases where the model encounters unfamiliar words.
RL Results: For the RL settings, we present results for Task 2 (Question Verification) and Task 6 (Missing Answer Entities) in Figure 6. Task 2 represents scenarios where different types of student
1612.04936#50
1612.04936#52
1612.04936
[ "1511.06931" ]
1612.04936#52
Learning through Dialogue Interactions by Asking Questions
have different abilities to correctly answer questions (e.g., a poor student can still sometimes give correct answers even when they do not fully understand the question). Task 6 represents tasks where a poor learner who lacks the knowledge necessary to answer the question can hardly give a correct answer. All types of students, including the good student, will theoretically benefit from asking questions (asking for the correct answer) in Task 6. We show the percentage of question-asking versus the cost of AQ on the test set, and the accuracy of question-answering on the test set versus the cost of AQ. [Figure 6 consists of four panels: question-asking rate vs. question cost and final accuracy vs. question cost, for Task 2 (Question Verification) and Task 6 (Missing Answer Entity), with curves for good, medium, and poor students.] Figure 6: Results of online learning for Task 2 and Task 6.
1612.04936#51
1612.04936#53
1612.04936
[ "1511.06931" ]
1612.04936#53
Learning through Dialogue Interactions by Asking Questions
Our main findings were:
• A good student does not need to ask questions in Task 2 (Question Verification), because they already understand the question. The student will raise questions asking for the correct answer when the cost is low in Task 6 (Missing Answer Entities).
• A poor student always asks questions when the cost is low. As the cost increases, the frequency of question-asking declines. As the AQ cost increases gradually, good students will stop asking questions earlier than the medium and poor students. The explanation is intuitive: poor students benefit more from asking questions than good students, so they continue asking even with higher penalties.
• As the probability of question-asking declines, the accuracy for poor and medium students
1612.04936#52
1612.04936#54
1612.04936
[ "1511.06931" ]
1612.04936#54
Learning through Dialogue Interactions by Asking Questions
drops. Good students are more resilient to not asking questions. 6.2 MECHANICAL TURK Results for the Mechanical Turk Tasks are given in Table 3. We again compare vanilla-MemN2N and Cont-MemN2N, using the same TrainAQ/TrainQA and TestAQ/TestQA combinations as before, for Tasks 4 and 8 as described in Section 4.2. We tune hyperparameters on the validation set, repeat each experiment 10 times, and report the best result. While performance is lower than on the related Task 4 and Task 8 simulator tasks, we still arrive at the same trends and conclusions when real data from humans is used. The performance was expected to be lower because (i) real data has more lexical variety, complexity and noise; and (ii) the training set was smaller due to data collection costs (10k vs. 180k). We perform an analysis of the difference between simulated and real training data (or combining the two) in the appendix, which shows that using real data is indeed important and measurably superior to using simulated data.
1612.04936#53
1612.04936#55
1612.04936
[ "1511.06931" ]
1612.04936#55
Learning through Dialogue Interactions by Asking Questions
[Table 3 layout: rows are TrainQA, TrainAQ; each cell lists TestQA / TestAQ.]
vanilla-MemN2N, Task 4 (K. Verification): TrainQA 0.331 / 0.313; TrainAQ 0.318 / 0.375
vanilla-MemN2N, Task 8 (Triple): TrainQA 0.133 / 0.162; TrainAQ 0.072 / 0.422
Cont-MemN2N, Task 4 (K. Verification): TrainQA 0.712 / 0.703; TrainAQ 0.679 / 0.774
Cont-MemN2N, Task 8 (Triple): TrainQA 0.308 / 0.234; TrainAQ 0.137 / 0.797
Table 3: Mechanical Turk Task Results.
1612.04936#54
1612.04936#56
1612.04936
[ "1511.06931" ]
1612.04936#56
Learning through Dialogue Interactions by Asking Questions
Asking Questions (AQ) outperforms only answering questions without asking (QA). More importantly, the same main conclusion is observed as before: TrainAQ+TestAQ (questions can be asked at both training and test time) performs the best across all the settings. That is, we show that a bot asking questions to humans learns to outperform one that only answers them. # 7 CONCLUSIONS In this paper, we explored how an intelligent agent can benefit from interacting with users by asking questions. We developed tasks where interaction via asking questions is desired. We explore both online and offline settings that mimic different real-world situations and show that in most cases, teaching a bot to interact with humans facilitates language understanding, and consequently leads to better question-answering ability. # REFERENCES
1612.04936#55
1612.04936#57
1612.04936
[ "1511.06931" ]
1612.04936#57
Learning through Dialogue Interactions by Asking Questions
Mohammad Amin Bassiri. Interactional feedback and the impact of attitude and motivation on noticing L2 form. English Language and Literature Studies, 1(2):61, 2011. Antoine Bordes and Jason Weston. Learning end-to-end goal-oriented dialog. arXiv preprint arXiv:1605.07683, 2016. Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075, 2015. Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander Miller, Arthur Szlam, and Jason Weston. Evaluating prerequisite qualities for learning end-to-end dialog systems. arXiv preprint arXiv:1511.06931, 2015. Richard Higgins, Peter Hartley, and Alan Skelton.
1612.04936#56
1612.04936#58
1612.04936
[ "1511.06931" ]
1612.04936#58
Learning through Dialogue Interactions by Asking Questions
The conscientious consumer: Reconsidering the role of assessment feedback in student learning. Studies in Higher Education, 27(1):53-64, 2002. Andrew S Latham. Learning through feedback. Educational Leadership, 54(8):86-87, 1997. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. A diversity-promoting objective function for neural conversation models. arXiv preprint arXiv:1510.03055, 2015. Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. Key-value memory networks for directly reading documents. arXiv preprint arXiv:1606.03126, 2016.
1612.04936#57
1612.04936#59
1612.04936
[ "1511.06931" ]
1612.04936#59
Learning through Dialogue Interactions by Asking Questions
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732, 2015. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. A neural network approach to context-sensitive generation of conversational responses. arXiv preprint arXiv:1506.06714, 2015. Pei-Hao Su, Milica Gasic, Nikola Mrksic, Lina Rojas-Barahona, Stefan Ultes, David Vandyke, Tsung-Hsien Wen, and Steve Young. Continuously learning neural dialogue management. arXiv preprint arXiv:1606.02689, 2016. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, pp. 2440-2448, 2015.
1612.04936#58
1612.04936#60
1612.04936
[ "1511.06931" ]
1612.04936#60
Learning through Dialogue Interactions by Asking Questions
Jason Weston. Dialog-based language learning. arXiv preprint arXiv:1604.06045, 2016. Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merri¨enboer, Armand Joulin, and Tomas Mikolov. Towards ai-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015. Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229â
1612.04936#59
1612.04936#61
1612.04936
[ "1511.06931" ]
1612.04936#61
Learning through Dialogue Interactions by Asking Questions
256, 1992. Terry Winograd. Understanding natural language. Cognitive psychology, 3(1):1â 191, 1972. # Ludwig Wittgenstein. Philosophical investigations. John Wiley & Sons, 2010. Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. arXiv preprint arXiv:1505.00521, 362, 2015. # Appendix End-to-End Memory Networks The input to an end-to-end memory network model (MemN2N) is the last utterance of the dialogue history x as well as a set of memories (context) (C=c1, c2, ..., cN ). Memory C encodes both short-term memory, e..g, dialogue histories between the bot and the teacher and long-term memories, e.g., the knowledgebase facts that the bot has access to. Given the input x and C, the goal is to produce an output/label a.
1612.04936#60
1612.04936#62
1612.04936
[ "1511.06931" ]
1612.04936#62
Learning through Dialogue Interactions by Asking Questions
In the ï¬ rst step, the query x is transformed to a vector representation u0 by summing up its con- stituent word embeddings: u0 = Ax. The input x is a bag-of-words vector and A is the d à V word embedding matrix where d denotes the vector dimensionality and V denotes the vocabulary size. Each memory ci is similarly transformed to vector mi. The model will read information from the memory by linking input representation q with memory vectors mi using softmax weights: a= So pim pi = softmax(ug mi) (2) i The goal is to select memories relevant to the last utterance x, i.e., the memories with large values of p1 i . The queried memory vector o1 is the weighted sum of memory vectors. The queried memory vector o1 will be added on top of original input, u1 = o1 + u0. u1 is then used to query the memory vector. Such a process is repeated by querying the memory N times (so called â
1612.04936#61
1612.04936#63
1612.04936
[ "1511.06931" ]
1612.04936#63
Learning through Dialogue Interactions by Asking Questions
hopsâ ). N is set to three in all experiments in this paper. In the end, uN is input to a softmax function for the ï¬ nal prediction: N y1, uT where L denotes the number of candidate answers and y denotes the representation of the answer. If the answer is a word, y is the corresponding word embedding. If the answer is a sentence, y is the embedding for the sentence achieved in the same way as we obtain embeddings for query x and memory c. Reward Based Imitation (RBI) and Forward Prediction (FP) RBI and FP are two dialogue learn- ing strategies proposed in (Weston, 2016) by harnessing different types of dialogue signals. RBI handles the case where the reward or the correctness of a botâ s answer is explicitly given (for ex- ample, +1 if the botâ s answer is correct and 0 otherwise). The model is directly trained to predict the correct answers (with label 1) at training time, which can be done using End-to-End Memory Networks (MemN2N) (Sukhbaatar et al., 2015) that map a dialogue input to a prediction.
1612.04936#62
1612.04936#64
1612.04936
[ "1511.06931" ]
1612.04936#64
Learning through Dialogue Interactions by Asking Questions
13 Published as a conference paper at ICLR 2017 FP handles the situation where a real-valued reward for a botâ s answer is not available, meaning that there is no +1 or 0 labels paired with a studentâ s utterance. However, the teacher will give a response to the botâ s answer, taking the form of a dialogue utterance. More formally, suppose that x denotes the teacherâ s question and C=c1, c2, ..., cN denotes the dialogue history. In our AQ settings, the bot will ask a question a regarding the teacherâ s question, denoted as a â A, where A denotes the studentâ s question pool. The teacher will provide an utterance in response to the student question a. In FP, the model ï¬ rst maps the teacherâ s initial question x and dialogue history C to vector representation u using a memory network with multiple hops. Then the model will perform another hopof attention over all possible studentâ s questions in A, with an additional part that incorporates the information of which candidate (i.e., a) was actually selected in the dialogue: pË a = softmax(uT yË a) o = pË a(yË a + β · 1[Ë a = a]) Ë aâ A (4) where yË a denotes the vector representation for the studentâ s question candidate Ë a. β is a d- dimensional vector to signify the actual action a that the student chooses. For tasks where the student only has one way to ask questions (e.g., â what do you meanâ ), there is no need to perform hops of attention over candidates since the cardinality of A is just 1. We thus directly assign a probability of 1 to the studentâ s question, making o the sum of vector representation of ya and β. o is then combined with u to predict the teacherâ s feedback t using a softmax: u1 = o + u t = softmax(uT where xri denotes the embedding for the ith response. Dialogue Simulator In this section we further detail the simulator and the datasets we generated in order to realize the various scenarios discussed in Section 3. We focused on the problem of movieQA where we adapted the WikiMovies dataset proposed in Weston et al. (2015).
1612.04936#63
1612.04936#65
1612.04936
[ "1511.06931" ]
1612.04936#65
Learning through Dialogue Interactions by Asking Questions
The dataset consists of roughly 100k questions with over 75k entities from the open movie dataset (OMDb). Each dialogue generated by the simulator takes place between a student and a teacher. The simulator samples a random question from the WikiMovies dataset and fetches the set of all KB facts relevant to the chosen question. This question is assumed to be the one the teacher asks its student, and is referred to as the â originalâ question. The student is ï¬ rst presented with the relevant KB facts followed by the original question. Providing the KB facts to the student allows us to control the exact knowledge the student is given access to while answering the questions. At this point, depending on the task at hand and the studentâ s ability to answer, the student might choose to directly answer it or ask a â followupâ question. The nature of the followup question will depend on the scenario under consideration. If the student answers the question, it gets a response from the teacher about its correctness and the conversation ends. However if the student poses a followup question, the teacher gives an appropriate response, which should give additional information to the student to answer the original question. In order to make things more complicated, the simulator pads the conversation with several unrelated student-teacher question-answer pairs. These question-answer pairs can be viewed as distractions and are used to test the studentâ s ability to remember the additional knowledge provided by the teacher after it was queried. For each dialogue, the simulator incorporates 5 such pairs (10 sentences). We refer to these pairs as conversational histories. For the QA setting (see Section 3), the dialogues generated by the simulator are such that the student never asks a clariï¬
1612.04936#64
1612.04936#66
1612.04936
[ "1511.06931" ]
1612.04936#66
Learning through Dialogue Interactions by Asking Questions
cation question. Instead, it simply responds to the original question, even if it is wrong. For the dialogs in the AQ setting, the student always asks a clariï¬ cation question. The nature of the question asked is dependent on the scenario (whether it is Question Clariï¬ cation, Knowledge Operation, or Knowledge Acquisition) under consideration. In order to simulate the case where the student sometimes choses to directly answer the original question and at other times choses to ask question, we created training datasets, which were a combination of QA and AQ (called â Mixedâ ). For all these cases, the student needs to give an answer to the teacherâ s original question at the end.
1612.04936#65
1612.04936#67
1612.04936
[ "1511.06931" ]
1612.04936#67
Learning through Dialogue Interactions by Asking Questions
# Instructions given to Turkers These are the instructions given for the textual feedback Mechanical Turk task (we also constructed a separate task to collect the questions to ask the bot with similar instructions, not described here): Task 4 (answers to botâ s questions): 14 Published as a conference paper at ICLR 2017 Title: Write brief responses to given dialogue exchanges (about 15 min) Description: Write a brief response answering a provided question (25 questions per HIT). # Directions: Each task consists of the following triplets: 1) a question by the teacher 2) the correct answer(s) to the question (separated by â
1612.04936#66
1612.04936#68
1612.04936
[ "1511.06931" ]
1612.04936#68
Learning through Dialogue Interactions by Asking Questions
ORâ ), unknown to the student 3) a clarifying question asking for feedback from the teacher Consider the scenario where you are the teacher and have already asked the question, and received the reply from the student. Please compose a brief response replying to the studentâ s question. The correct answers are provided so that you know whether the studentâ s question was relevant or not. For example, given 1) question: â what is a color in the united states ï¬ ag?â ; 2) correct answer: â white OR blue OR redâ ; 3) student reply: â does this have to do with â US Flag has colors red,white,blueâ ?â , your response could be something like â thatâ s right!â ; for 3) reply: â does this have to do with â United States has population 320 millionâ , you might say â No, that fact is not relevantâ or â Not reallyâ .
1612.04936#67
1612.04936#69
1612.04936
[ "1511.06931" ]
1612.04936#69
Learning through Dialogue Interactions by Asking Questions
Please vary responses and try to minimize spelling mistakes. If the same responses are copied/pasted or similar responses are overused, weâ ll reject the HIT. Avoid naming the student or addressing â the classâ directly. We will consider bonuses for higher quality responses during review. Task 8: answers to botâ s questions: Title: Write brief responses to given dialogue exchanges (about 10 min) Description: Write a sentence describing the answer to a question (25 questions per HIT). # Directions: Each task consists of the following triplets: 1) a question by the teacher 2) the correct answer(s) to the question (separated by â
1612.04936#68
1612.04936#70
1612.04936
[ "1511.06931" ]
1612.04936#70
Learning through Dialogue Interactions by Asking Questions
ORâ ), unknown to the student 3) a question from the student asking the teacher for the answer Consider the scenario where you are the teacher and have already asked the question, and received the reply from the student. Please compose a brief response replying to the studentâ s question. The correct answers are provided so that you know which answers to provide. For example, given 1) question: â what is a color in the united states ï¬ ag?â ; 2) correct answer: â white OR blue OR redâ ; 3) student reply: â i dont know. whatâ s the answer ?â , your response could be something like â the color white is in the US ï¬
1612.04936#69
1612.04936#71
1612.04936
[ "1511.06931" ]
1612.04936#71
Learning through Dialogue Interactions by Asking Questions
agâ or â blue and red both appear in itâ . Please vary responses and try to minimize spelling mistakes, and do not include the capitalized â ORâ in your response. If the same responses are copied/pasted or similar responses are overused, weâ ll reject the HIT. You donâ t need to mention every correct answer in your response. Avoid naming the student or addressing â the classâ directly. We will consider bonuses for higher quality responses during review. # Additional Mechanical Turk Experiments Here we provide additional experiments to supplement the ones described in Section 6.2. In the main paper, results were shown when training and testing on the collected Mechanical Turk data (around 10,000 episodes of training dialogues for training). As we collected the data in the same settings as Task 4 and 8 of our simulator, we could also consider supplementing training with simulated data as well, of which we have a larger amount (over 100,000 episodes). Note this is only for training, we will still test on the real (Mechanical Turk collected) data. Although the simulated data has less lexical variety as it is built from templates, the larger size might obtain improve results. Results are given Table 5 when training on the combination of real and simulator data, and testing on real data. This should be compared to training on only the real data (Table 4) and only on the simulator data (Table 6). The best results are obtained from the combination of simulator and real data. The best real data only results (selecting over algorithm and training strategy) on both tasks outperform the best results using simulator data, i.e. using Cont-MemN2N with the Train AQ / TestAQ setting) 0.774 and 0.797 is obtained vs. 0.714 and 0.788 for Tasks 4 and 8 respectively.
1612.04936#70
1612.04936#72
1612.04936
[ "1511.06931" ]
1612.04936#72
Learning through Dialogue Interactions by Asking Questions
This 15 Published as a conference paper at ICLR 2017 is despite there being far fewer examples of real data compared to simulator data. Overall we obtain two main conclusions from this additional experiment: (i) real data is indeed measurably superior to simulated data for training our models; (ii) in all cases (across different algorithms, tasks and data types â be they real data, simulated data or combinations) the bot asking questions (AQ) outperforms it only answering questions and not asking them (QA). The latter reinforces the main result of the paper. vanilla-MemN2N Cont-MemN2N Train \Test TrainQA TrainAQ Task 4: K. Veriï¬ cation TestQA 0.331 0.318 TestAQ 0.313 0.375 Task 8: Triple TestQA 0.133 0.072 TestAQ 0.162 0.422 Task 4: K. Veriï¬ cation TestQA 0.712 0.679 TestAQ 0.703 0.774 Task 8: Triple TestQA 0.308 0.137 TestAQ 0.234 0.797 Table 4: Mechanical Turk Task Results, using real data for training and testing. vanilla-MemN2N Cont-MemN2N Train \Test TrainQA TrainAQ Task 4:
1612.04936#71
1612.04936#73
1612.04936
[ "1511.06931" ]
1612.04936#73
Learning through Dialogue Interactions by Asking Questions
K. Veriï¬ cation TestQA 0.356 0.340 TestAQ 0.311 0.445 Task 8: Triple TestQA 0.128 0.150 TestAQ 0.174 0.487 Task 4: K. Veriï¬ cation TestQA 0.733 0.704 TestAQ 0.717 0.792 Task 8: Triple TestQA 0.368 0.251 TestAQ 0.352 0.825 Table 5: Results on Mechanical Turk Tasks using a combination of real and simulated data for training, testing on real data. vanilla-MemN2N Cont-MemN2N Train \Test TrainQA TrainAQ Task 4:
1612.04936#72
1612.04936#74
1612.04936
[ "1511.06931" ]
1612.04936#74
Learning through Dialogue Interactions by Asking Questions
K. Veriï¬ cation TestQA 0.340 0.326 TestAQ 0.311 0.390 Task 8: Triple TestQA 0.120 0.067 TestAQ 0.165 0.405 Task 4: K. Veriï¬ cation TestQA 0.665 0.642 TestAQ 0.648 0.714 Task 8: Triple TestQA 0.349 0.197 TestAQ 0.342 0.788 Table 6: Results on Mechanical Turk Tasks using only simulated data for training, but testing on real data. # Additional Ofï¬ ine Supervised Learning Experiments Question Clariï¬ cation Knowledge Operation Train \Test TrainQA TrainAQ TrainAQ(+FP) TrainMix Task 1: Q. Paraphrase TestAQ TestQA 0.284 0.338 0.450 0.213 0.464 0.288 0.373 0.326 Task 2: Q. Veriï¬ cation TestQA 0.340 0.225 0.146 0.329 TestAQ 0.271 0.373 0.320 0.326 Task 3: Ask For Relevant K. TestQA 0.462 0.187 0.342 0.442 TestAQ 0.344 0.632 0.631 0.558 Task 4: K. Veriï¬ cation TestQA 0.482 0.283 0.311 0.476 TestAQ 0.322 0.540 0.524 0.491 Train \Test TestQA Task 5: Q. Entity 0.223 0.660 0.742 0.630 TestAQ TrainQA (vanila) < 0.01 TrainAQ (vanila) < 0.01 < 0.01 TrainAQ(+FP) <0.01 Mix (vanila) Knowledge Acquisition TestQA TestAQ Task 6: Answer Entity <0.01 <0.01 < 0.01 <0.01 TestQA Task 7:
1612.04936#73
1612.04936#75
1612.04936
[ "1511.06931" ]
1612.04936#75
Learning through Dialogue Interactions by Asking Questions
Relation Entity 0.109 0.082 0.085 0.070 TestAQ <0.01 <0.01 < 0.01 <0.01 0.129 0.156 0.188 0.152 TestQA TestAQ Task 8: Triple 0.201 0.124 0.064 0.180 0.259 0.664 0.702 0.572 TestQA TestAQ Task 9: Everything <0.01 <0.01 <0.01 <0.01 <0.01 <0.01 <0.01 <0.01 Table 7: Results for ofï¬ ine settings using memory networks. TrainAQ TrainAQ(+FP) TrainMix Question Clariï¬ cation Task 2: Q. Veriï¬ cation TestModelAQ 0.382 0.344 0.352 Knowledge Acquisition Task 4: K. Veriï¬ cation TestModelAQ 0.480 0.501 0.469 Table 8: Results for TestModelAQ settings.
1612.04936#74
1612.04936#76
1612.04936
[ "1511.06931" ]
1612.04936#76
Learning through Dialogue Interactions by Asking Questions
16
1612.04936#75
1612.04936
[ "1511.06931" ]
1612.03651#0
FastText.zip: Compressing text classification models
arXiv:1612.03651v1 [cs.CL] 12 Dec 2016 # Under review as a conference paper at ICLR 2017 # FASTTEXT.ZIP: COMPRESSING TEXT CLASSIFICATION MODELS Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hervé Jégou & Tomas Mikolov Facebook AI Research {ajoulin,egrave,bojanowski,matthijs,rvj,tmikolov}@fb.com # ABSTRACT We consider the problem of producing compact architectures for text classification, such that the full model fits in a limited amount of memory. After considering different solutions inspired by the hashing literature, we propose a method built upon product quantization to store word embeddings. While the original technique leads to a loss in accuracy, we adapt this method to circumvent quantization artefacts. Our experiments carried out on several benchmarks show that our approach typically requires two orders of magnitude less memory than fastText while being only slightly inferior with respect to accuracy. As a result, it outperforms the state of the art by a good margin in terms of the compromise between memory usage and accuracy.
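As background for the abstract's mention of product quantization, here is a toy sketch of the generic PQ idea applied to an embedding matrix (split the dimensions into blocks, run k-means per block, store one byte per block per word). The paper adapts this basic scheme to avoid quantization artefacts; the code below is only the vanilla version, with illustrative parameter choices, and assumes the vocabulary size is at least k.

```python
import numpy as np

def pq_compress(emb, n_blocks=4, k=256, iters=10, seed=0):
    """Toy product quantizer: split the d columns of `emb` (V x d) into
    n_blocks sub-vectors, run a small k-means in each block, and keep only
    the centroid tables plus one uint8 code per block per word (k=256 fits
    in a byte). Brute-force distances, so only suitable for small V."""
    rng = np.random.default_rng(seed)
    V, d = emb.shape
    assert d % n_blocks == 0 and V >= k
    sub = d // n_blocks
    codebooks, codes = [], np.empty((V, n_blocks), dtype=np.uint8)
    for b in range(n_blocks):
        X = emb[:, b * sub:(b + 1) * sub]
        C = X[rng.choice(V, size=k, replace=False)]   # initialize centroids from data
        for _ in range(iters):                        # plain k-means iterations
            assign = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
            for j in range(k):
                pts = X[assign == j]
                if len(pts):
                    C[j] = pts.mean(axis=0)
        assign = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        codebooks.append(C)
        codes[:, b] = assign
    return codebooks, codes

def pq_decompress(codebooks, codes):
    """Reconstruct an approximate embedding matrix from codes and codebooks."""
    return np.concatenate(
        [codebooks[b][codes[:, b]] for b in range(len(codebooks))], axis=1)
```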
1612.03651#1
1612.03651
[ "1510.03009" ]
1612.03651#1
FastText.zip: Compressing text classification models
# 1 INTRODUCTION Text classification is an important problem in Natural Language Processing (NLP). Real-world use-cases include spam filtering or e-mail categorization. It is a core component in more complex systems such as search and ranking. Recently, deep learning techniques based on neural networks have achieved state of the art results in various NLP applications. One of the main successes of deep learning is due to the effectiveness of recurrent networks for language modeling and their application to speech recognition and machine translation (Mikolov, 2012).
1612.03651#0
1612.03651#2
1612.03651
[ "1510.03009" ]
1612.03651#2
FastText.zip: Compressing text classification models
However, in other cases including several text classification problems, it has been shown that deep networks do not convincingly beat the prior state of the art techniques (Wang & Manning, 2012; Joulin et al., 2016). In spite of being (typically) orders of magnitude slower to train than traditional techniques based on n-grams, neural networks are often regarded as a promising alternative due to compact model sizes, in particular for character-based models. This is important for applications that need to run on systems with limited memory such as smartphones.
1612.03651#1
1612.03651#3
1612.03651
[ "1510.03009" ]