Dataset fields (with observed value-length ranges):
id: string, 12–15 characters
title: string, 8–162 characters
content: string, 1–17.6k characters
prechunk_id: string, 0–15 characters
postchunk_id: string, 0–15 characters
arxiv_id: string, 10 characters
references: sequence, length 1
1508.06615#16
Character-Aware Neural Language Models
              DATA-S               DATA-L
          |V|    |C|    T      |V|    |C|    T
EN        10k    51     1m     60k    197    20m
CS        46k    101    1m     206k   195    17m
DE        37k    74     1m     339k   260    51m
ES        27k    72     1m     152k   222    56m
FR        25k    76     1m     137k   225    57m
RU        62k    62     1m     497k   111    25m
AR        86k    132    4m     –      –      –

Table 1: Corpus statistics. |V| = word vocabulary size; |C| = character vocabulary size; T = number of tokens in training set. The small English data is from the Penn Treebank and the Arabic data is from the News-Commentary corpus. The rest are from the 2013 ACL Workshop on Machine Translation. |C| is large because of (rarely occurring) special characters.

standard training (0-20), validation (21-22), and test (23-24) splits along with pre-processing by Mikolov et al. (2010). With approximately 1m tokens and |V| = 10k, this version has been extensively used by the language modeling community and is publicly available.6

With the optimal hyperparameters tuned on PTB, we apply the model to various morphologically rich languages: Czech, German, French, Spanish, Russian, and Arabic. Non-Arabic data comes from the 2013 ACL Workshop on Machine Translation,7 and we use the same train/validation/test splits as in Botha and Blunsom (2014). While the raw data are publicly available, we obtained the preprocessed versions from the authors,8 whose morphological NLM serves as a baseline for our work. We train on both the small datasets (DATA-S) with 1m tokens per language, and the large datasets (DATA-L) including the large English data which has a much bigger |V| than the PTB. Arabic data comes from the News-Commentary corpus,9 and we perform our own preprocessing and train/validation/test splits.
1508.06615#15
1508.06615#17
1508.06615
[ "1507.06228" ]
1508.06615#17
Character-Aware Neural Language Models
In these datasets only singleton words were replaced with <unk> and hence we effectively use the full vocabulary. It is worth noting that the character model can utilize surface forms of OOV tokens (which were replaced with <unk>), but we do not do this and stick to the preprocessed versions (despite disadvantaging the character models) for exact comparison against prior work.

# Optimization
The models are trained by truncated backpropagation through time (Werbos 1990; Graves 2013). We backpropagate for 35 time steps using stochastic gradient descent where the learning rate is initially set to 1.0 and halved if the perplexity does not decrease by more than 1.0 on the validation set after an epoch. On DATA-S we use a batch size of 20 and on DATA-L we use a batch size of 100 (for
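To make the schedule concrete, here is a small, hypothetical Python sketch of the learning-rate rule described above (halve the rate when validation perplexity fails to improve by more than 1.0 after an epoch). It is not the authors' implementation; the helper name and the perplexity values are invented for illustration.

```python
def update_learning_rate(lr, prev_val_ppl, val_ppl, min_improvement=1.0):
    """Halve the learning rate when validation perplexity fails to improve
    by more than `min_improvement` over the previous epoch."""
    if prev_val_ppl - val_ppl <= min_improvement:
        return lr / 2.0
    return lr

# Toy trace with made-up perplexities, just to show the schedule's behaviour.
val_ppls = [120.0, 110.0, 109.5, 109.2, 108.0]
lr, prev = 1.0, float("inf")
for epoch, ppl in enumerate(val_ppls):
    lr = update_learning_rate(lr, prev, ppl)
    prev = ppl
    print(f"epoch {epoch}: val ppl {ppl:.1f}, next lr {lr}")
```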
1508.06615#16
1508.06615#18
1508.06615
[ "1507.06228" ]
1508.06615#18
Character-Aware Neural Language Models
6 http://www.fit.vutbr.cz/~imikolov/rnnlm/
7 http://www.statmt.org/wmt13/translation-task.html
8 http://bothameister.github.io/
9 http://opus.lingfil.uu.se/News-Commentary.php

                 Small                  Large
CNN      d       15                     15
         w       [1, 2, 3, 4, 5, 6]     [1, 2, 3, 4, 5, 6, 7]
         h       [25 · w]               [min{200, 50 · w}]
         f       tanh                   tanh
Highway  l       1                      2
         g       ReLU                   ReLU
LSTM     l       2                      2
         m       300                    650
1508.06615#17
1508.06615#19
1508.06615
[ "1507.06228" ]
1508.06615#19
Character-Aware Neural Language Models
Table 2: Architecture of the small and large models. d = dimensionality of character embeddings; w = filter widths; h = number of filter matrices, as a function of filter width (so the large model has filters of width [1, 2, 3, 4, 5, 6, 7] of size [50, 100, 150, 200, 200, 200, 200] for a total of 1100 filters); f, g = nonlinearity functions; l = number of layers; m = number of hidden units.

greater efficiency).
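The filter-count rule in Table 2 can be checked with a few lines of code. The following hypothetical Python snippet (our own naming, not from the paper's code release) reproduces the [50, 100, 150, 200, 200, 200, 200] filter sizes and the 1100-filter total for the large model.

```python
def num_filters(width, large=False):
    """Number of filter matrices for a given character n-gram width (Table 2)."""
    return min(200, 50 * width) if large else 25 * width

widths_large = [1, 2, 3, 4, 5, 6, 7]
counts = [num_filters(w, large=True) for w in widths_large]
print(counts, sum(counts))   # [50, 100, 150, 200, 200, 200, 200] 1100
```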
1508.06615#18
1508.06615#20
1508.06615
[ "1507.06228" ]
1508.06615#20
Character-Aware Neural Language Models
Gradients are averaged over each batch. We train for 25 epochs on non-Arabic and 30 epochs on Arabic data (which was sufficient for convergence), picking the best performing model on the validation set. Parameters of the model are randomly initialized over a uniform distribution with support [-0.05, 0.05]. For regularization we use dropout (Hinton et al. 2012) with probability 0.5 on the LSTM input-to-hidden layers (except on the initial Highway to LSTM layer) and the hidden-to-output softmax layer. We further constrain the norm of the gradients to be below 5, so that if the L2 norm of the gradient exceeds 5 then we renormalize it to have ||·|| = 5 before updating. The gradient norm constraint was crucial in training the model. These choices were largely guided by previous work of Zaremba et al. (2014) on word-level language modeling with LSTMs.

Finally, in order to speed up training on DATA-L we employ a hierarchical softmax (Morin and Bengio 2005), a common strategy for training language models with very large |V|, instead of the usual softmax. We pick the number of clusters c = ⌈√|V|⌉ and randomly split V into mutually exclusive and collectively exhaustive subsets V_1, ..., V_c of (approximately) equal size.10 Then Pr(w_{t+1} = j | w_{1:t}) becomes

$$
\Pr(w_{t+1} = j \mid w_{1:t}) =
\frac{\exp(\mathbf{h}_t \cdot \mathbf{s}^r + t^r)}{\sum_{r'=1}^{c} \exp(\mathbf{h}_t \cdot \mathbf{s}^{r'} + t^{r'})}
\times
\frac{\exp(\mathbf{h}_t \cdot \mathbf{p}_j^r + q_j^r)}{\sum_{j' \in V_r} \exp(\mathbf{h}_t \cdot \mathbf{p}_{j'}^r + q_{j'}^r)}
\tag{10}
$$

where r is the cluster index such that j ∈ V_r.
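As an illustration of the factorization in Eq. (10), the hypothetical NumPy sketch below computes the two-term probability with random, roughly equal-size clusters. Variable names (s, t, p, q, h) mirror the equation; the toy sizes and random parameters are assumptions for the example only, not the paper's Torch code.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 1000, 64                      # toy vocabulary size and hidden size
c = int(np.ceil(np.sqrt(V)))         # number of clusters, ceil(sqrt(|V|))
cluster_of = rng.permutation(V) % c  # random split into c roughly equal clusters

s = rng.standard_normal((c, d)); t = rng.standard_normal(c)   # cluster embeddings / biases
p = rng.standard_normal((V, d)); q = rng.standard_normal(V)   # word embeddings / biases
h = rng.standard_normal(d)                                    # LSTM output h_t

def word_prob(j):
    r = cluster_of[j]
    cluster_logits = s @ h + t
    p_cluster = np.exp(cluster_logits[r]) / np.exp(cluster_logits).sum()
    members = np.flatnonzero(cluster_of == r)
    word_logits = p[members] @ h + q[members]
    p_word = np.exp(p[j] @ h + q[j]) / np.exp(word_logits).sum()
    return p_cluster * p_word          # Eq. (10): cluster prob x within-cluster prob

print(word_prob(42))
```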
1508.06615#19
1508.06615#21
1508.06615
[ "1507.06228" ]
1508.06615#21
Character-Aware Neural Language Models
The first term is simply the probability of picking cluster r, and the second

10 While Brown clustering/frequency-based clustering is commonly used in the literature (e.g. Botha and Blunsom (2014) use Brown clustering), we used random clusters as our implementation enjoys the best speed-up when the number of words in each cluster is approximately equal. We found random clustering to work surprisingly well.

                                     PPL      Size
LSTM-Word-Small                      97.6     5m
LSTM-Char-Small                      92.3     5m
LSTM-Word-Large                      85.4     20m
LSTM-Char-Large                      78.9     19m
KN-5 (Mikolov et al. 2012)           141.2    2m
RNN† (Mikolov et al. 2012)           124.7    6m
RNN-LDA† (Mikolov et al. 2012)       113.7    7m
genCNN† (Wang et al. 2015)           116.4    8m
FOFE-FNNLM† (Zhang et al. 2015)      108.0    6m
Deep RNN (Pascanu et al. 2013)       107.5    6m
Sum-Prod Net† (Cheng et al. 2014)    100.0    5m
LSTM-1† (Zaremba et al. 2014)        82.7     20m
LSTM-2† (Zaremba et al. 2014)        78.4     52m

Table 3: Performance of our model versus other neural language models on the English Penn Treebank test set. PPL refers to perplexity (lower is better) and size refers to the approximate number of parameters in the model. KN-5 is a Kneser-Ney 5-gram language model which serves as a non-neural baseline. †For these models the authors did not explicitly state the number of parameters, and hence sizes shown here are estimates based on our understanding of their papers or private correspondence with the respective authors.
1508.06615#20
1508.06615#22
1508.06615
[ "1507.06228" ]
1508.06615#22
Character-Aware Neural Language Models
term is the probability of picking word j given that cluster r is picked. We found that hierarchical softmax was not necessary for models trained on DATA-S.

# Results

English Penn Treebank
We train two versions of our model to assess the trade-off between performance and size. Architecture of the small (LSTM-Char-Small) and large (LSTM-Char-Large) models is summarized in Table 2. As another baseline, we also train two comparable LSTM models that use word embeddings only (LSTM-Word-Small, LSTM-Word-Large). LSTM-Word-Small uses 200 hidden units and LSTM-Word-Large uses 650 hidden units. Word embedding sizes are also 200 and 650 respectively. These were chosen to keep the number of parameters similar to the corresponding character-level model.

As can be seen from Table 3, our large model is on par with the existing state-of-the-art (Zaremba et al. 2014), despite having approximately 60% fewer parameters. Our small model significantly outperforms other NLMs of similar size, even though it is penalized by the fact that the dataset already has OOV words replaced with <unk> (other models are purely word-level models). While lower perplexities have been reported with model ensembles (Mikolov and Zweig 2012), we do not include them here as they are not comparable to the current work.

Other Languages
The model's performance on the English PTB is informative to the extent that it facilitates comparison against the large body of existing work. However, English is relatively simple

DATA-S              CS     DE     ES     FR     RU     AR
Botha   KN-4        545    366    241    274    396    323
        MLBL        465    296    200    225    304    –
Small   Word        503    305    212    229    352    216
        Morph       414    278    197    216    290    230
        Char        401    260    182    189    278    196
Large   Word        493    286    200    222    357    172
        Morph       398    263    177    196    271    148
        Char        371    239    165    184    261    148

Table 4: Test set perplexities for DATA-S.
1508.06615#21
1508.06615#23
1508.06615
[ "1507.06228" ]
1508.06615#23
Character-Aware Neural Language Models
First two rows are from Botha (2014) (except on Arabic where we trained our own KN-4 model) while the last six are from this paper. KN-4 is a Kneser-Ney 4-gram language model, and MLBL is the best performing morphological log-bilinear model from Botha (2014). Small/Large refer to model size (see Table 2), and Word/Morph/Char are models with words/morphemes/characters as inputs respectively.

from a morphological standpoint, and thus our next set of results (and arguably the main contribution of this paper) is focused on languages with richer morphology (Table 4, Table 5).

We compare our results against the morphological log-bilinear (MLBL) model from Botha and Blunsom (2014), whose model also takes into account subword information through morpheme embeddings that are summed at the input and output layers. As comparison against the MLBL models is confounded by our use of LSTMs (widely known to outperform their feed-forward/log-bilinear cousins), we also train an LSTM version of the morphological NLM, where the input representation of a word given to the LSTM is a summation of the word's morpheme embeddings. Concretely, suppose that M is the set of morphemes in a language, M ∈
1508.06615#22
1508.06615#24
1508.06615
[ "1507.06228" ]
1508.06615#24
Character-Aware Neural Language Models
R^{n×|M|} is the matrix of morpheme embeddings, and m_j is the j-th column of M (i.e. a morpheme embedding). Given the input word k, we feed the following representation to the LSTM:

$$
\mathbf{x}_k + \sum_{j \in \mathcal{M}_k} \mathbf{m}_j \tag{11}
$$

where x_k is the word embedding (as in a word-level model) and M_k ⊂ M is the set of morphemes for word k. The morphemes are obtained by running an unsupervised morphological tagger as a preprocessing step.11 We emphasize that the word embedding itself (i.e. x_k) is added on top of the morpheme embeddings, as was done in Botha and Blunsom (2014). The morpheme embeddings are of size 200/650 for the small/large models respectively. We further train word-level LSTM models as another baseline.

On DATA-S it is clear from Table 4 that the character-level models outperform their word-level counterparts despite, again, being smaller.12

11 We use Morfessor Cat-MAP (Creutz and Lagus 2007), as in Botha and Blunsom (2014).

DATA-L              CS     DE     ES     FR     RU     EN
Botha   KN-4        862    463    219    243    390    291
        MLBL        643    404    203    227    300    273
Small   Word        701    347    186    202    353    236
        Morph       615    331    189    209    331    233
        Char        578    305    169    190    313    216

Table 5: Test set perplexities on DATA-L. First two rows are from Botha (2014), while the last three rows are from the small LSTM models described in the paper. KN-4 is a Kneser-Ney 4-gram language model, and MLBL is the best performing morphological log-bilinear model from Botha (2014). Word/Morph/Char are models with words/morphemes/characters as inputs respectively.

The character models also outperform their morphological counterparts (both MLBL and LSTM architectures), although improvements over the morphological LSTMs are more measured. Note that the morpheme models have strictly more parameters than the word models because word embeddings are used as part of the input.
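A minimal sketch of the morpheme-augmented input in Eq. (11) follows: the word embedding plus the sum of its morpheme embeddings. The toy vocabulary, the morpheme segmentation, and the array names are invented for illustration; in the paper the segmentation comes from Morfessor Cat-MAP.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                            # embedding dimensionality (toy)
word_vocab = {"unhappiness": 0, "cats": 1}
morph_vocab = {"un": 0, "happi": 1, "ness": 2, "cat": 3, "s": 4}
morphemes_of = {"unhappiness": ["un", "happi", "ness"], "cats": ["cat", "s"]}

X = rng.standard_normal((len(word_vocab), n))    # word embeddings x_k
M = rng.standard_normal((n, len(morph_vocab)))   # morpheme embedding matrix

def lstm_input(word):
    x_k = X[word_vocab[word]]
    m_sum = sum(M[:, morph_vocab[m]] for m in morphemes_of[word])
    return x_k + m_sum                           # Eq. (11)

print(lstm_input("unhappiness").shape)           # (8,)
```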
1508.06615#23
1508.06615#25
1508.06615
[ "1507.06228" ]
1508.06615#25
Character-Aware Neural Language Models
Due to memory constraints13 we only train the small models on DATA-L (Table 5). Interestingly we do not observe significant differences going from word to morpheme LSTMs on Spanish, French, and English. The character models again outperform the word/morpheme models. We also observe significant perplexity reductions even on English when V is large. We conclude this section by noting that we used the same architecture for all languages and did not perform any language-specific tuning of hyperparameters.

Discussion

Learned Word Representations
We explore the word representations learned by the models on the PTB. Table 6 has the nearest neighbors of word representations learned from both the word-level and character-level models. For the character models we compare the representations obtained before and after highway layers. Before the highway layers the representations seem to solely rely on surface forms:
1508.06615#24
1508.06615#26
1508.06615
[ "1507.06228" ]
1508.06615#26
Character-Aware Neural Language Models
for example the nearest neighbors of you are your, young, four, youth, which are close to you in terms of edit distance. The highway layers, however, seem to enable encoding of semantic features that are not discernible from orthography alone. After highway layers the nearest neighbor of you is we, which is orthographically distinct from you. Another example is while and though: these words are far apart edit distance-wise yet the composition model is able to place them near each other. The model

12 The difference in parameters is greater for non-PTB corpora as the size of the word model scales faster with |V|. For example, on Arabic the small/large word models have 35m/121m parameters while the corresponding character models have 29m/69m parameters respectively.

13 All models were trained on GPUs with 2GB memory.
1508.06615#25
1508.06615#27
1508.06615
[ "1507.06228" ]
1508.06615#27
Character-Aware Neural Language Models
Figure 2: Plot of character n-gram representations via PCA for English. Colors correspond to: prefixes (red), suffixes (blue), hyphenated (orange), and all others (grey). Prefixes refer to character n-grams which start with the start-of-word character. Suffixes likewise refer to character n-grams which end with the end-of-word character.

also makes some clear mistakes (e.g. his and hhs), highlighting the limits of our approach, although this could be due to the small dataset. The learned representations of OOV words (computer-aided, misinformed) are positioned near words with the same part-of-speech. The model is also able to correct for incorrect/non-standard spelling (looooook), indicating potential applications for text normalization in noisy domains.

Learned Character N-gram Representations
As discussed previously, each filter of the CharCNN is essentially learning to detect particular character n-grams. Our initial expectation was that each filter would learn to activate on different morphemes and then build up semantic representations of words from the identified morphemes. However, upon reviewing the character n-grams picked up by the filters (i.e. those that maximized the value of the filter), we found that they did not (in general) correspond to valid morphemes.

To get a better intuition for what the character composition model is learning, we plot the learned representations of all character n-grams (that occurred as part of at least two words in V) via principal components analysis (Figure 2). We feed each character n-gram into the CharCNN and use the CharCNN's
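The analysis behind Figure 2 can be sketched as a short pipeline: map each character n-gram to a fixed-dimensional vector and project to two dimensions with PCA. The example below is hypothetical and runnable on its own: it substitutes a random stand-in for the trained CharCNN and assumes '{' and '}' as start-of-word and end-of-word markers, so only the overall procedure mirrors the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
ngrams = ["{un", "{re", "ing}", "ness}", "-based", "-like", "act", "ion"]
char_vocab = {c: i for i, c in enumerate(sorted(set("".join(ngrams))))}
W = rng.standard_normal((len(char_vocab), 32))       # stand-in "CharCNN" weights

def char_cnn(ngram):
    """Stand-in for the trained CharCNN: mean of character vectors."""
    return W[[char_vocab[c] for c in ngram]].mean(axis=0)

X = np.stack([char_cnn(g) for g in ngrams])
X = X - X.mean(axis=0)
# PCA via SVD: project onto the top two principal components.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
coords = X @ Vt[:2].T
for g, (x, y) in zip(ngrams, coords):
    print(f"{g:>8s}  ({x:+.2f}, {y:+.2f})")
```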
1508.06615#26
1508.06615#28
1508.06615
[ "1507.06228" ]
1508.06615#28
Character-Aware Neural Language Models
output as the fixed-dimensional representation for the corresponding character n-gram. As is apparent from Figure 2, the model learns to differentiate between prefixes (red), suffixes (blue), and others (grey). We also find that the representations are particularly sensitive to character n-grams containing hyphens (orange), presumably because this is a strong signal of a word's part-of-speech.

Highway Layers
We quantitatively investigate the effect of highway network layers via ablation studies (Table 7). We train a model without any highway layers, and find that performance decreases significantly. As the difference in performance could be due to the decrease in model size, we also train a model that feeds y_k (i.e. word representation from the CharCNN)

LSTM-Word nearest neighbors (Table 6; query words while, his, you, richard, trading are in-vocabulary; computer-aided, misinformed, looooook are out-of-vocabulary):
  while: although, letting, though, minute
  his: your, her, my, their
  you: conservatives, we, guys, i
  richard: jonathan, robert, neil, nancy
  trading: advertised, advertising, turnover, turnover
  computer-aided, misinformed, looooook: – (no representation in the word-level model)
1508.06615#27
1508.06615#29
1508.06615
[ "1507.06228" ]
1508.06615#29
Character-Aware Neural Language Models
LSTM-Char (before highway) nearest neighbors:
  while: chile, whole, meanwhile, white
  his: this, hhs, is, has
  you: your, young, four, youth
  richard: hard, rich, richer, richter
  trading: heading, training, reading, leading
  computer-aided: computer-guided, computerized, disk-drive, computer
  misinformed: informed, performed, transformed, inform
  looooook: look, cook, looks, shook

LSTM-Char (after highway) nearest neighbors:
  while: meanwhile, whole, though, nevertheless
  his: hhs, this, their, your
  you: we, your, doug, i
  richard: eduard, gerard, edward, carl
  trading: trade, training, traded, trader
  computer-aided: computer-guided, computer-driven, computerized, computer
  misinformed: informed, performed, outperformed, transformed
  looooook: look, looks, looked, looking
1508.06615#28
1508.06615#30
1508.06615
[ "1507.06228" ]
1508.06615#30
Character-Aware Neural Language Models
Table 6: Nearest neighbor words (based on cosine similarity) of word representations from the large word-level and character-level (before and after highway layers) models trained on the PTB. Last three words are OOV words, and therefore they do not have representations in the word-level model.

                      LSTM-Char
                      Small     Large
No Highway Layers     100.3     84.6
One Highway Layer     92.3      79.7
Two Highway Layers    90.1      78.9
One MLP Layer         111.2     92.6

Table 7: Perplexity on the Penn Treebank for small/large models trained with/without highway layers.

           |V| = 10k   25k    50k    100k
T = 1m         17%     16%    21%    –
T = 5m          8%     14%    16%    21%
T = 10m         9%      9%    12%    15%
T = 25m         9%      8%     9%    10%
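For reference, the nearest-neighbor lookup used to build Table 6 is ordinary cosine similarity over word representations. The snippet below is a hypothetical illustration with a random embedding matrix, not the representations from the trained models.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["while", "though", "meanwhile", "his", "you", "we", "richard"]
E = rng.standard_normal((len(vocab), 16))          # stand-in word representations

def nearest_neighbours(query, k=4):
    q = E[vocab.index(query)]
    sims = E @ q / (np.linalg.norm(E, axis=1) * np.linalg.norm(q))
    order = np.argsort(-sims)
    return [vocab[i] for i in order if vocab[i] != query][:k]

print(nearest_neighbours("while"))
```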
1508.06615#29
1508.06615#31
1508.06615
[ "1507.06228" ]
1508.06615#31
Character-Aware Neural Language Models
through a one-layer multilayer perceptron (MLP) to use as input into the LSTM. We find that the MLP does poorly, although this could be due to optimization issues.

Table 8: Perplexity reductions by going from small word-level to character-level models based on different corpus/vocabulary sizes on German (DE). |V| is the vocabulary size and T is the number of tokens in the training set. The full vocabulary of the 1m dataset was less than 100k and hence that scenario is unavailable.
1508.06615#30
1508.06615#32
1508.06615
[ "1507.06228" ]
1508.06615#32
Character-Aware Neural Language Models
We hypothesize that highway networks are especially well-suited to work with CNNs, adaptively combining local features detected by the individual filters. CNNs have already proven to be successful for many NLP tasks (Collobert et al. 2011; Shen et al. 2014; Kalchbrenner, Grefenstette, and Blunsom 2014; Kim 2014; Zhang, Zhao, and LeCun 2015; Lei, Barzilay, and Jaakola 2015), and we posit that further gains could be achieved by employing highway layers on top of existing CNN architectures.

We also anecdotally note that (1) having one to two highway layers was important, but more highway layers generally resulted in similar performance (though this may depend on the size of the datasets), (2) having more convolutional layers before max-pooling did not help, and (3) highway layers did not improve models that only used word embeddings as inputs.

# Effect of Corpus/Vocab Sizes
We next study the effect of training corpus/vocabulary sizes on the relative performance between the different models. We take the German (DE) dataset from DATA-L and vary the training corpus/vocabulary sizes, calculating the perplexity reductions as a result of going from a small word-level model to a small character-level model. To vary the vocabulary size we take the most frequent k words and replace the rest with <unk>. As with previous experiments the character model does not utilize surface forms of <unk> and simply treats it as another token. Although Table 8 suggests that the perplexity reductions become less pronounced as the corpus size increases, we nonetheless find that the character-level model outperforms the word-level model in all scenarios.
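A single highway layer, as discussed in the Highway Layers paragraphs above, combines its input y with a nonlinear transform through a learned gate. The following NumPy sketch is our own illustration of that computation (ReLU transform g, sigmoid gate t), not the paper's Torch code; the toy dimensions and random weights are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6
W_H, b_H = rng.standard_normal((d, d)), np.zeros(d)
W_T, b_T = rng.standard_normal((d, d)), np.full(d, -2.0)  # negative bias favours carrying y through

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def highway(y):
    t = sigmoid(W_T @ y + b_T)                              # transform gate
    return t * relu(W_H @ y + b_H) + (1.0 - t) * y          # gated mix of transform and carry

y = rng.standard_normal(d)                                  # e.g. max-pooled CharCNN output
print(highway(y))
```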
1508.06615#31
1508.06615#33
1508.06615
[ "1507.06228" ]
1508.06615#33
Character-Aware Neural Language Models
# Further Observations
We report on some further experiments and observations:

• Combining word embeddings with the CharCNN's output to form a combined representation of a word (to be used as input to the LSTM) resulted in slightly worse performance (81 on PTB with a large model). This was surprising, as improvements have been reported on part-of-speech tagging (dos Santos and Zadrozny 2014) and named entity recognition (dos Santos and Guimaraes 2015) by concatenating word embeddings with the output from a character-level CNN.
1508.06615#32
1508.06615#34
1508.06615
[ "1507.06228" ]
1508.06615#34
Character-Aware Neural Language Models
While this could be due to insufficient experimentation on our part,14 it suggests that for some tasks, word embeddings are superfluous: character inputs are good enough.

• While our model requires additional convolution operations over characters and is thus slower than a comparable word-level model which can perform a simple lookup at the input layer, we found that the difference was manageable with optimized GPU implementations; for example on PTB the large character-level model trained at 1500 tokens/sec compared to the word-level model which trained at 3000 tokens/sec. For scoring, our model can have the same running time as a pure word-level model, as the CharCNN's outputs can be pre-computed for all words in V. This would, however, be at the expense of increased model size, and thus a trade-off can be made between run-time speed and memory (e.g. one could restrict the pre-computation to the most frequent words).

Related Work
Neural Language Models (NLM) encompass a rich family of neural network architectures for language modeling. Some example architectures include feed-forward (Bengio, Ducharme, and Vincent 2003), recurrent (Mikolov et al. 2010), sum-product (Cheng et al. 2014), log-bilinear (Mnih and Hinton 2007), and convolutional (Wang et al. 2015) networks.

In order to address the rare word problem, Alexandrescu and Kirchhoff (2006), building on analogous work on count-based n-gram language models by Bilmes and Kirchhoff (2003), represent a word as a set of shared factor embeddings. Their Factored Neural Language Model (FNLM) can incorporate morphemes, word shape information (e.g. capitalization) or any other annotation (e.g. part-of-speech tags) to represent words.
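The pre-computation trade-off mentioned in the second observation above can be sketched in a few lines: compute the CharCNN output once per vocabulary word and fall back to the character model only for unseen words. The char_cnn function below is a deterministic stand-in so the example runs on its own; a real system would call the trained composition model.

```python
import numpy as np

vocab = ["the", "cat", "sat", "looooook"]

def char_cnn(word, dim=16):
    """Stand-in for the trained CharCNN (deterministic per word)."""
    seed = abs(hash(word)) % (2**32)
    return np.random.default_rng(seed).standard_normal(dim)

# One-off pre-computation: trades memory (a |V| x dim table) for lookup speed at scoring time.
precomputed = {w: char_cnn(w) for w in vocab}

def word_representation(word):
    rep = precomputed.get(word)
    return rep if rep is not None else char_cnn(word)   # fall back for rare/unseen words

print(word_representation("cat")[:4])
```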
1508.06615#33
1508.06615#35
1508.06615
[ "1507.06228" ]
1508.06615#35
Character-Aware Neural Language Models
A specific class of FNLMs leverages morphemic information by viewing a word as a function of its (learned) morpheme embeddings (Luong, Socher, and Manning 2013; Botha and Blunsom 2014; Qui et al. 2014). For example Luong, Socher, and Manning (2013) apply a recursive neural network over morpheme embeddings to obtain the embedding for a single word. While such models have proved useful, they require morphological tagging as a preprocessing step.

Another direction of work has involved purely character-level NLMs, wherein both input and output are characters (Sutskever, Martens, and Hinton 2011; Graves 2013). Character-level models obviate the need for morphological tagging or manual feature engineering, and have the attractive property of being able to generate novel words. However they are generally outperformed by word-level models (Mikolov et al. 2012).

14 We experimented with (1) concatenation, (2) tensor products, (3) averaging, and (4) adaptive weighting schemes whereby the model learns a convex combination of word embeddings and the CharCNN outputs.

Improvements have been reported on part-of-speech tagging (dos Santos and Zadrozny 2014) and named entity recognition (dos Santos and Guimaraes 2015) by representing a word as a concatenation of its word embedding and an output from a character-level CNN, and using the combined representation as features in a Conditional Random Field (CRF). Zhang, Zhao, and LeCun (2015) do away with word embeddings completely and show that for text classification, a deep CNN over characters performs well. Ballesteros, Dyer, and Smith (2015) use an RNN over characters only to train a transition-based parser, obtaining improvements on many morphologically rich languages. Finally, Ling et al. (2015) apply a bi-directional LSTM over characters to use as inputs for language modeling and part-of-speech tagging. They show improvements on various languages (English, Portuguese, Catalan, German, Turkish). It remains open as to which character composition model (i.e.
1508.06615#34
1508.06615#36
1508.06615
[ "1507.06228" ]
1508.06615#36
Character-Aware Neural Language Models
CNN or LSTM) performs better.

Conclusion
We have introduced a neural language model that utilizes only character-level inputs. Predictions are still made at the word-level. Despite having fewer parameters, our model outperforms baseline models that utilize word/morpheme embeddings in the input layer. Our work questions the necessity of word embeddings (as inputs) for neural language modeling.

Analysis of word representations obtained from the character composition part of the model further indicates that the model is able to encode, from characters only, rich semantic and orthographic features. Using the CharCNN and highway layers for representation learning (e.g. as input into word2vec (Mikolov et al. 2013)) remains an avenue for future work.

Insofar as sequential processing of words as inputs is ubiquitous in natural language processing, it would be interesting to see if the architecture introduced in this paper is viable for other tasks, for example, as an encoder/decoder in neural machine translation (Cho et al. 2014; Sutskever, Vinyals, and Le 2014).

Acknowledgments
We are especially grateful to Jan Botha for providing the preprocessed datasets and the model results.

References
Alexandrescu, A., and Kirchhoff, K. 2006. Factored Neural Language Models. In Proceedings of NAACL.
1508.06615#35
1508.06615#37
1508.06615
[ "1507.06228" ]
1508.06615#37
Character-Aware Neural Language Models
Ballesteros, M.; Dyer, C.; and Smith, N. A. 2015. Improved Transition-Based Parsing by Modeling Characters instead of Words with LSTMs. In Proceedings of EMNLP.
Bengio, Y.; Ducharme, R.; and Vincent, P. 2003. A Neural Probabilistic Language Model. Journal of Machine Learning Research 3:1137–1155.
Bengio, Y.; Simard, P.; and Frasconi, P. 1994.
1508.06615#36
1508.06615#38
1508.06615
[ "1507.06228" ]
1508.06615#38
Character-Aware Neural Language Models
Learning Long-term Dependencies with Gradient Descent is Difficult. IEEE Transactions on Neural Networks 5:157–166.
Bilmes, J., and Kirchhoff, K. 2003. Factored Language Models and Generalized Parallel Backoff. In Proceedings of NAACL.
Botha, J., and Blunsom, P. 2014. Compositional Morphology for Word Representations and Language Modelling. In Proceedings of ICML.
1508.06615#37
1508.06615#39
1508.06615
[ "1507.06228" ]
1508.06615#39
Character-Aware Neural Language Models
Botha, J. 2014. Probabilistic Modelling of Morphologically Rich Languages. DPhil Dissertation, Oxford University.
Chen, S., and Goodman, J. 1998. An Empirical Study of Smoothing Techniques for Language Modeling. Technical Report, Harvard University.
Cheng, W. C.; Kok, S.; Pham, H. V.; Chieu, H. L.; and Chai, K. M. 2014.
1508.06615#38
1508.06615#40
1508.06615
[ "1507.06228" ]
1508.06615#40
Character-Aware Neural Language Models
Language Modeling with Sum-Product Networks. In Proceedings of INTERSPEECH.
Cho, K.; van Merrienboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; and Bengio, Y. 2014. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In Proceedings of EMNLP.
Collobert, R.; Weston, J.; Bottou, L.; Karlen, M.; Kavukcuoglu, K.; and Kuksa, P. 2011. Natural Language Processing (almost) from Scratch. Journal of Machine Learning Research 12:2493–2537.
Creutz, M., and Lagus, K. 2007.
1508.06615#39
1508.06615#41
1508.06615
[ "1507.06228" ]
1508.06615#41
Character-Aware Neural Language Models
Unsupervised Models for Morpheme Segmentation and Morphology Learning. In Proceedings of the ACM Transactions on Speech and Language Processing.
Deerwester, S.; Dumais, S.; and Harshman, R. 1990. Indexing by Latent Semantic Analysis. Journal of American Society of Information Science 41:391–407.
dos Santos, C. N., and Guimaraes, V. 2015. Boosting Named Entity Recognition with Neural Character Embeddings. In Proceedings of ACL Named Entities Workshop.
1508.06615#40
1508.06615#42
1508.06615
[ "1507.06228" ]
1508.06615#42
Character-Aware Neural Language Models
dos Santos, C. N., and Zadrozny, B. 2014. Learning Character-level Representations for Part-of-Speech Tagging. In Proceedings of ICML.
Graves, A. 2013. Generating Sequences with Recurrent Neural Networks. arXiv:1308.0850.
Hinton, G.; Srivastava, N.; Krizhevsky, A.; Sutskever, I.; and Salakhutdinov, R. 2012. Improving Neural Networks by Preventing Co-Adaptation of Feature Detectors. arXiv:1207.0580.
Hochreiter, S., and Schmidhuber, J. 1997. Long Short-Term Memory. Neural Computation 9:1735–
1508.06615#41
1508.06615#43
1508.06615
[ "1507.06228" ]
1508.06615#43
Character-Aware Neural Language Models
1780.
Kalchbrenner, N.; Grefenstette, E.; and Blunsom, P. 2014. A Convolutional Neural Network for Modelling Sentences. In Proceedings of ACL.
Kim, Y. 2014. Convolutional Neural Networks for Sentence Classification. In Proceedings of EMNLP.
Krizhevsky, A.; Sutskever, I.; and Hinton, G. 2012. ImageNet
1508.06615#42
1508.06615#44
1508.06615
[ "1507.06228" ]
1508.06615#44
Character-Aware Neural Language Models
Classification with Deep Convolutional Neural Networks. In Proceedings of NIPS.
LeCun, Y.; Boser, B.; Denker, J. S.; Henderson, D.; Howard, R. E.; Hubbard, W.; and Jackel, L. D. 1989. Handwritten Digit Recognition with a Backpropagation Network. In Proceedings of NIPS.
Lei, T.; Barzilay, R.; and Jaakola, T. 2015. Molding CNNs for Text: Non-linear, Non-consecutive Convolutions. In Proceedings of EMNLP.
Ling, W.; Lui, T.; Marujo, L.; Astudillo, R. F.; Amir, S.; Dyer, C.; Black, A. W.; and Trancoso, I. 2015.
1508.06615#43
1508.06615#45
1508.06615
[ "1507.06228" ]
1508.06615#45
Character-Aware Neural Language Models
Finding Function in Form: Compositional Character Models for Open Vocabulary Word Representation. In Proceedings of EMNLP.
Luong, M.-T.; Socher, R.; and Manning, C. 2013. Better Word Representations with Recursive Neural Networks for Morphology. In Proceedings of CoNLL.
Marcus, M.; Santorini, B.; and Marcinkiewicz, M. 1993. Building a Large Annotated Corpus of English: the Penn Treebank. Computational Linguistics 19:313–330.
1508.06615#44
1508.06615#46
1508.06615
[ "1507.06228" ]
1508.06615#46
Character-Aware Neural Language Models
Mikolov, T., and Zweig, G. 2012. Context Dependent Recurrent Neural Network Language Model. In Proceedings of SLT.
Mikolov, T.; Karafiat, M.; Burget, L.; Cernocky, J.; and Khudanpur, S. 2010. Recurrent Neural Network Based Language Model. In Proceedings of INTERSPEECH.
Mikolov, T.; Deoras, A.; Kombrink, S.; Burget, L.; and Cernocky, J. 2011. Empirical Evaluation and Combination of Advanced Language Modeling Techniques. In Proceedings of INTERSPEECH.
Mikolov, T.; Sutskever, I.; Deoras, A.; Le, H.-S.; Kombrink, S.; and Cernocky, J. 2012. Subword Language Modeling with Neural Networks. Preprint: www.fit.vutbr.cz/~imikolov/rnnlm/char.pdf.
Mikolov, T.; Chen, K.; Corrado, G.; and Dean, J. 2013.
1508.06615#45
1508.06615#47
1508.06615
[ "1507.06228" ]
1508.06615#47
Character-Aware Neural Language Models
Efficient Estimation of Word Representations in Vector Space. arXiv:1301.3781.
Mnih, A., and Hinton, G. 2007. Three New Graphical Models for Statistical Language Modelling. In Proceedings of ICML.
Morin, F., and Bengio, Y. 2005. Hierarchical Probabilistic Neural Network Language Model. In Proceedings of AISTATS.
Pascanu, R.; Culcehre, C.; Cho, K.; and Bengio, Y. 2013. How to Construct Deep Neural Networks. arXiv:1312.6026.
Qui, S.; Cui, Q.; Bian, J.; and Gao, B. 2014. Co-learning of Word Representations and Morpheme Representations. In Proceedings of COLING.
Shen, Y.; He, X.; Gao, J.; Deng, L.; and Mesnil, G. 2014. A Latent Semantic Model with Convolutional-pooling Structure for Information Retrieval. In Proceedings of CIKM.
1508.06615#46
1508.06615#48
1508.06615
[ "1507.06228" ]
1508.06615#48
Character-Aware Neural Language Models
Srivastava, R. K.; Greff, K.; and Schmidhuber, J. 2015. Training Very Deep Networks. arXiv:1507.06228.
Sundermeyer, M.; Schluter, R.; and Ney, H. 2012. LSTM Neural Networks for Language Modeling.
Sutskever, I.; Martens, J.; and Hinton, G. 2011. Generating Text with Recurrent Neural Networks.
Sutskever, I.; Vinyals, O.; and Le, Q. 2014. Sequence to Sequence Learning with Neural Networks.
Wang, M.; Lu, Z.; Li, H.; Jiang, W.; and Liu, Q. 2015. genCNN: A Convolutional Architecture for Word Sequence Prediction. In Proceedings of ACL.
1508.06615#47
1508.06615#49
1508.06615
[ "1507.06228" ]
1508.06615#49
Character-Aware Neural Language Models
Werbos, P. 1990. Back-propagation Through Time: what it does and how to do it. In Proceedings of the IEEE.
Zaremba, W.; Sutskever, I.; and Vinyals, O. 2014. Recurrent Neural Network Regularization. arXiv:1409.2329.
Zhang, S.; Jiang, H.; Xu, M.; Hou, J.; and Dai, L. 2015. The Fixed-Size Ordinally-Forgetting Encoding Method for Neural Network Language Models. In Proceedings of ACL.
Zhang, X.; Zhao, J.; and LeCun, Y. 2015. Character-level Convolutional Networks for Text Classifi
1508.06615#48
1508.06615#50
1508.06615
[ "1507.06228" ]
1508.06615#50
Character-Aware Neural Language Models
cation. In Proceedings of NIPS.
1508.06615#49
1508.06615
[ "1507.06228" ]
1508.05326#0
A large annotated corpus for learning natural language inference
arXiv:1508.05326v1 [cs.CL] 21 Aug 2015

# A large annotated corpus for learning natural language inference

Samuel R. Bowman∗† ([email protected]), Gabor Angeli†‡ ([email protected]), Christopher Potts∗ ([email protected]), Christopher D. Manning∗†
1508.05326#1
1508.05326
[ "1502.05698" ]
1508.05326#1
A large annotated corpus for learning natural language inference
‡ ([email protected])
∗Stanford Linguistics  †Stanford NLP Group  ‡Stanford Computer Science

# Abstract

Understanding entailment and contradiction is fundamental to understanding natural language, and inference about entailment and contradiction is a valuable testing ground for the development of semantic representations. However, machine learning research in this area has been dramatically limited by the lack of large-scale resources. To address this, we introduce the Stanford Natural Language Inference corpus, a new, freely available collection of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning. At 570K pairs, it is two orders of magnitude larger than all other resources of its type. This increase in scale allows lexicalized classifiers to outperform some sophisticated existing entailment models, and it allows a neural network-based model to perform competitively on natural language inference benchmarks for the
1508.05326#0
1508.05326#2
1508.05326
[ "1502.05698" ]
1508.05326#2
A large annotated corpus for learning natural language inference
first time.

# Introduction

The semantic concepts of entailment and contradiction are central to all aspects of natural language meaning (Katz, 1972; van Benthem, 2008), from the lexicon to the content of entire texts. Thus, natural language inference (NLI), characterizing and using these relations in computational systems (Fyodorov et al., 2000; Condoravdi et al., 2003; Bos and Markert, 2005; Dagan et al., 2006; MacCartney and Manning, 2009), is essential in tasks ranging from information retrieval to semantic parsing to commonsense reasoning.

NLI has been addressed using a variety of techniques, including those based on symbolic logic, knowledge bases, and neural networks. In recent years, it has become an important testing ground for approaches employing distributed word and phrase representations. Distributed representations excel at capturing relations based in similarity, and have proven effective at modeling simple dimensions of meaning like evaluative sentiment (e.g., Socher et al. 2013), but it is less clear that they can be trained to support the full range of logical and commonsense inferences required for NLI (Bowman et al., 2015; Weston et al., 2015b; Weston et al., 2015a). In a SemEval 2014 task aimed at evaluating distributed representations for NLI, the best-performing systems relied heavily on additional features and reasoning capabilities (Marelli et al., 2014a).

Our ultimate objective is to provide an empirical evaluation of learning-centered approaches to NLI, advancing the case for NLI as a tool for the evaluation of domain-general approaches to semantic representation. However, in our view, existing NLI corpora do not permit such an assessment. They are generally too small for training modern data-intensive, wide-coverage models, many contain sentences that were algorithmically generated, and they are often beset with indeterminacies of event and entity coreference that significantly impact annotation quality.

To address this, this paper introduces the Stanford Natural Language Inference (SNLI) corpus, a collection of sentence pairs labeled for entailment, contradiction, and semantic independence.
1508.05326#1
1508.05326#3
1508.05326
[ "1502.05698" ]
1508.05326#3
A large annotated corpus for learning natural language inference
At 570,152 sentence pairs, SNLI is two orders of magnitude larger than all other resources of its type. And, in contrast to many such resources, all of its sentences and labels were written by humans in a grounded, naturalistic context. In a separate validation phase, we collected four additional judgments for each label for 56,941 of the examples. Of these, 98% of cases emerge with a three-annotator consensus, and 58% see a unanimous consensus from all five annotators. In this paper, we use this corpus to evaluate
1508.05326#2
1508.05326#4
1508.05326
[ "1502.05698" ]
1508.05326#4
A large annotated corpus for learning natural language inference
Premise: A man inspects the uniform of a figure in some East Asian country.
  Label: contradiction (C C C C C)
  Hypothesis: The man is sleeping

Premise: An older and younger man smiling.
  Label: neutral (N N E N N)
  Hypothesis: Two men are smiling and laughing at the cats playing on the floor.

Premise: A black race car starts up in front of a crowd of people.
  Label: contradiction (C C C C C)
  Hypothesis: A man is driving down a lonely road.

Premise: A soccer game with multiple males playing.
  Label: entailment (E E E E E)
  Hypothesis: Some men are playing a sport.

Premise: A smiling costumed woman is holding an umbrella.
  Label: neutral (N N E C N)
  Hypothesis: A happy woman in a fairy costume holds an umbrella.

Table 1: Randomly chosen examples from the development section of our new corpus, shown with both the selected gold labels and the full set of labels (abbreviated) from the individual annotators, including (in the first position) the label used by the initial author of the pair.

a variety of models for natural language inference, including rule-based systems, simple linear classifiers, and neural network-based models.
1508.05326#3
1508.05326#5
1508.05326
[ "1502.05698" ]
1508.05326#5
A large annotated corpus for learning natural language inference
A smiling costumed woman is holding an um- brella. neutral N N E C N A happy woman in a fairy costume holds an um- brella. Table 1: Randomly chosen examples from the development section of our new corpus, shown with both the selected gold labels and the full set of labels (abbreviated) from the individual annotators, including (in the ï¬ rst position) the label used by the initial author of the pair. a variety of models for natural language infer- ence, including rule-based systems, simple lin- ear classiï¬ ers, and neural network-based models.
1508.05326#4
1508.05326#6
1508.05326
[ "1502.05698" ]
1508.05326#6
A large annotated corpus for learning natural language inference
We ï¬ nd that two models achieve comparable per- formance: a feature-rich classiï¬ er model and a neural network model centered around a Long Short-Term Memory network (LSTM; Hochreiter and Schmidhuber 1997). We further evaluate the LSTM model by taking advantage of its ready sup- port for transfer learning, and show that it can be adapted to an existing NLI challenge task, yielding the best reported performance by a neural network model and approaching the overall state of the art. # 2 A new corpus for NLI To date, the primary sources of annotated NLI cor- pora have been the Recognizing Textual Entail- ment (RTE) challenge tasks.1 These are generally high-quality, hand-labeled data sets, and they have stimulated innovative logical and statistical mod- els of natural language reasoning, but their small size (fewer than a thousand examples each) limits their utility as a testbed for learned distributed rep- resentations. The data for the SemEval 2014 task called Sentences Involving Compositional Knowl- edge (SICK) is a step up in terms of size, but only to 4,500 training examples, and its partly automatic construction introduced some spurious patterns into the data (Marelli et al. 2014a, §6). The Denotation Graph entailment set (Young et al., 2014) contains millions of examples of en- tailments between sentences and artiï¬ cially con- structed short phrases, but it was labeled using fully automatic methods, and is noisy enough that it is probably suitable only as a source of sup- plementary training data. Outside the domain of sentence-level entailment, Levy et al. (2014) intro- duce a large corpus of semi-automatically anno- tated entailment examples between subjectâ verbâ object relation triples, and the second release of the Paraphrase Database (Pavlick et al., 2015) in- cludes automatically generated entailment anno- tations over a large corpus of pairs of words and short phrases.
1508.05326#5
1508.05326#7
1508.05326
[ "1502.05698" ]
1508.05326#7
A large annotated corpus for learning natural language inference
Existing resources suffer from a subtler issue impacts even projects using only human- that provided annotations: indeterminacies of event and entity coreference lead to insurmountable in- determinacy concerning the correct semantic la- bel (de Marneffe et al. 2008 §4.3; Marelli et al. 2014b). For an example of the pitfalls surround- ing entity coreference, consider the sentence pair A boat sank in the Paciï¬ c Ocean and A boat sank in the Atlantic Ocean. The pair could be labeled as a contradiction if one assumes that the two sen- tences refer to the same single event, but could also be reasonably labeled as neutral if that as- sumption is not made. In order to ensure that our labeling scheme assigns a single correct label to every pair, we must select one of these approaches across the board, but both choices present prob- lems. If we opt not to assume that events are coreferent, then we will only ever ï¬ nd contradic- tions between sentences that make broad univer- sal assertions, but if we opt to assume coreference, new counterintuitive predictions emerge. For ex- ample, Ruth Bader Ginsburg was appointed to the US Supreme Court and I had a sandwich for lunch today would unintuitively be labeled as a contra- diction, rather than neutral, under this assumption. Entity coreference presents a similar kind of in- determinacy, as in the pair A tourist visited New # 1http://aclweb.org/aclwiki/index.php? title=Textual_Entailment_Resource_Pool
1508.05326#6
1508.05326#8
1508.05326
[ "1502.05698" ]
1508.05326#8
A large annotated corpus for learning natural language inference
York and A tourist visited the city. Assuming coreference between New York and the city justi- ï¬ es labeling the pair as an entailment, but with- out that assumption the city could be taken to refer to a speciï¬ c unknown city, leaving the pair neu- tral. This kind of indeterminacy of label can be re- solved only once the questions of coreference are resolved. With SNLI, we sought to address the issues of size, quality, and indeterminacy. To do this, we employed a crowdsourcing framework with the following crucial innovations. First, the exam- ples were grounded in speciï¬ c scenarios, and the premise and hypothesis sentences in each exam- ple were constrained to describe that scenario from the same perspective, which helps greatly in con- trolling event and entity coreference.2 Second, the prompt gave participants the freedom to produce entirely novel sentences within the task setting, which led to richer examples than we see with the more proscribed string-editing techniques of ear- lier approaches, without sacriï¬ cing consistency. Third, a subset of the resulting sentences were sent to a validation task aimed at providing a highly re- liable set of annotations over the same data, and at identifying areas of inferential uncertainty. # 2.1 Data collection We used Amazon Mechanical Turk for data col- lection. In each individual task (each HIT), a worker was presented with premise scene descrip- tions from a pre-existing corpus, and asked to supply hypotheses for each of our three labelsâ
1508.05326#7
1508.05326#9
1508.05326
[ "1502.05698" ]
1508.05326#9
A large annotated corpus for learning natural language inference
entailment, neutral, and contradictionâ forcing the data to be balanced among these classes. The instructions that we provided to the work- ers are shown in Figure 1. Below the instructions were three ï¬ elds for each of three requested sen- tences, corresponding to our entailment, neutral, and contradiction labels, a fourth ï¬ eld (marked optional) for reporting problems, and a link to an FAQ page. That FAQ grew over the course of data collection. It warned about disallowed tech- niques (e.g., reusing the same sentence for many different prompts, which we saw in a few cases), provided guidance concerning sentence length and 2 Issues of coreference are not completely solved, but greatly mitigated. For example, with the premise sentence A dog is lying in the grass, a worker could safely assume that the dog is the most prominent thing in the photo, and very likely the only dog, and build contradicting sentences assum- ing reference to the same dog.
1508.05326#8
1508.05326#10
1508.05326
[ "1502.05698" ]
1508.05326#10
A large annotated corpus for learning natural language inference
We will show you the caption for a photo. We will not show you the photo. Using only the caption and what you know about the world: â ¢ Write one alternate caption that is deï¬ nitely a true description of the photo. Example: For the caption â Two dogs are running through a ï¬ eld.â you could write â There are animals outdoors.â â ¢ Write one alternate caption that might be a true description of the photo. Example: For the cap- tion â Two dogs are running through a ï¬ eld.â you could write â
1508.05326#9
1508.05326#11
1508.05326
[ "1502.05698" ]
1508.05326#11
A large annotated corpus for learning natural language inference
Some puppies are running to catch a stick.â â ¢ Write one alternate caption that is deï¬ nitely a false description of the photo. Example: For the caption â Two dogs are running through a ï¬ eld.â you could write â The pets are sitting on a couch.â This is different from the maybe correct category because itâ s impossible for the dogs to be both running and sitting. Figure 1: The instructions used on Mechanical Turk for data collection. complexity (we did not enforce a minimum length, and we allowed bare NPs as well as full sen- tences), and reviewed logistical issues around pay- ment timing. About 2,500 workers contributed. For the premises, we used captions from the Flickr30k corpus (Young et al., 2014), a collection of approximately 160k captions (corresponding to about 30k images) collected in an earlier crowd- sourced effort.3 The captions were not authored by the photographers who took the source images, and they tend to contain relatively literal scene de- scriptions that are suited to our approach, rather than those typically associated with personal pho- tographs (as in their example:
1508.05326#10
1508.05326#12
1508.05326
[ "1502.05698" ]
1508.05326#12
A large annotated corpus for learning natural language inference
Our trip to the Olympic Peninsula). In order to ensure that the la- bel for each sentence pair can be recovered solely based on the available text, we did not use the im- ages at all during corpus collection. Table 2 reports some key statistics about the col- lected corpus, and Figure 2 shows the distributions of sentence lengths for both our source hypotheses and our newly collected premises. We observed that while premise sentences varied considerably in length, hypothesis sentences tended to be as 3 We additionally include about 4k sentence pairs from a pilot study in which the premise sentences were instead drawn from the VisualGenome corpus (under construction; visualgenome.org). These examples appear only in the training set, and have pair identiï¬ ers preï¬ xed with vg in our corpus.
1508.05326#11
1508.05326#13
1508.05326
[ "1502.05698" ]
1508.05326#13
A large annotated corpus for learning natural language inference
Data set sizes: Training pairs Development pairs Test pairs 550,152 10,000 10,000 Sentence length: Premise mean token count Hypothesis mean token count 14.1 8.3 Parser output: Premise â Sâ -rooted parses Hypothesis â Sâ -rooted parses Distinct words (ignoring case) 74.0% 88.9% 37,026 Table 2: Key statistics for the raw sentence pairs in SNLI. Since the two halves of each pair were collected separately, we report some statistics for both. short as possible while still providing enough in- formation to yield a clear judgment, clustering at around seven words. We also observed that the bulk of the sentences from both sources were syn- tactically complete rather than fragments, and the frequency with which the parser produces a parse rooted with an â
1508.05326#12
1508.05326#14
1508.05326
[ "1502.05698" ]
1508.05326#14
A large annotated corpus for learning natural language inference
Sâ (sentence) node attests to this. # 2.2 Data validation In order to measure the quality of our corpus, and in order to construct maximally useful test- ing and development sets, we performed an addi- tional round of validation for about 10% of our data. This validation phase followed the same basic form as the Mechanical Turk labeling task used to label the SICK entailment data: we pre- sented workers with pairs of sentences in batches of ï¬ ve, and asked them to choose a single label for each pair. We supplied each pair to four an- notators, yielding ï¬ ve labels per pair including the label used by the original author. The instructions were similar to the instructions for initial data col- lection shown in Figure 1, and linked to a similar FAQ. Though we initially used a very restrictive qualiï¬ cation (based on past approval rate) to se- lect workers for the validation task, we nonethe- less discovered (and deleted) some instances of random guessing in an early batch of work, and subsequently instituted a fully closed qualiï¬ cation restricted to about 30 trusted workers. For each pair that we validated, we assigned a gold label. If any one of the three labels was cho- sen by at least three of the ï¬ ve annotators, it was â â Premise â
1508.05326#13
1508.05326#15
1508.05326
[ "1502.05698" ]
1508.05326#15
A large annotated corpus for learning natural language inference
Hypothesis 100,000 90,000 80,000 70,000 60,000 50,000 40,000 30,000 20,000 Number of sentences 0 5 10 15 20 25 30 35 40 Sentence length (tokens) Figure 2: The distribution of sentence length. chosen as the gold label. If there was no such con- sensus, which occurred in about 2% of cases, we assigned the placeholder label â -â . While these un- labeled examples are included in the corpus dis- tribution, they are unlikely to be helpful for the standard NLI classiï¬ cation task, and we do not in- clude them in either training or evaluation in the experiments that we discuss in this paper. The results of this validation process are sum- marized in Table 3. Nearly all of the examples received a majority label, indicating broad con- sensus about the nature of the data and categories. The gold-labeled examples are very nearly evenly distributed across the three labels. The Fleiss κ scores (computed over every example with a full ï¬ ve annotations) are likely to be conservative given our large and unevenly distributed pool of annotators, but they still provide insights about the levels of disagreement across the three semantic classes. This disagreement likely reï¬ ects not just the limitations of large crowdsourcing efforts but also the uncertainty inherent in naturalistic NLI. Regardless, the overall rate of agreement is ex- tremely high, suggesting that the corpus is sufï¬ - ciently high quality to pose a challenging but real- istic machine learning task. # 2.3 The distributed corpus Table 1 shows a set of randomly chosen validated examples from the development set with their la- bels. Qualitatively, we ï¬ nd the data that we col- lected draws fairly extensively on commonsense knowledge, and that hypothesis and premise sen- tences often differ structurally in signiï¬ cant ways, suggesting that there is room for improvement be- yond superï¬ cial word alignment models. We also ï¬ nd the sentences that we collected to be largely General: Validated pairs Pairs w/ unanimous gold label 56,951 58.3% Individual annotator label agreement: Individual label = gold label 89.0% Individual label = authorâ
1508.05326#14
1508.05326#16
1508.05326
[ "1502.05698" ]
1508.05326#16
A large annotated corpus for learning natural language inference
s label 85.8% Gold label/authorâ s label agreement: Gold label = authorâ s label 91.2% Gold label 4 authorâ s label 6.8% No gold label (no 3 labels match) 2.0% Fleiss «: contradiction 0.77 entailment 0.72 neutral 0.60 Overall 0.70 Table 3: Statistics for the validated pairs. The au- thorâ s label is the label used by the worker who wrote the premise to create the sentence pair. A gold label reï¬
1508.05326#15
1508.05326#17
1508.05326
[ "1502.05698" ]
1508.05326#17
A large annotated corpus for learning natural language inference
ects a consensus of three votes from among the author and the four annotators. ï¬ uent, correctly spelled English, with a mix of full sentences and caption-style noun phrase frag- ments, though punctuation and capitalization are often omitted. The corpus is available under a CreativeCom- mons Attribution-ShareAlike license, the same li- cense used for the Flickr30k source captions. It can be downloaded at: nlp.stanford.edu/projects/snli/ Partition We distribute the corpus with a pre- speciï¬ ed train/test/development split. The test and development sets contain 10k examples each. Each original ImageFlickr caption occurs in only one of the three sets, and all of the examples in the test and development sets have been validated. Parses The distributed corpus includes parses produced by the Stanford PCFG Parser 3.5.2 (Klein and Manning, 2003), trained on the stan- dard training set as well as on the Brown Corpus (Francis and Kucera 1979), which we found to im- prove the parse quality of the descriptive sentences and noun phrases found in the descriptions. # 3 Our data as a platform for evaluation The most immediate application for our corpus is in developing models for the task of NLI. In par- System SNLI SICK RTE-3 Edit Distance Based 71.9 65.4 61.9 Classiï¬ er Based 72.2 71.4 61.5 + Lexical Resources 75.0 78.8 63.6 Table 4: 2-class test accuracy for two simple baseline systems included in the Excitement Open Platform, as well as SICK and RTE results for a model making use of more sophisticated lexical resources. ticular, since it is dramatically larger than any ex- isting corpus of comparable quality, we expect it to be suitable for training parameter-rich models like neural networks, which have not previously been competitive at this task. Our ability to evaluate standard classiï¬ er-base NLI models, however, was limited to those which were designed to scale to SNLIâ s size without modiï¬ cation, so a more com- plete comparison of approaches will have to wait for future work.
1508.05326#16
1508.05326#18
1508.05326
[ "1502.05698" ]
1508.05326#18
A large annotated corpus for learning natural language inference
In this section, we explore the performance of three classes of models which could scale readily: (i) models from a well-known NLI system, the Excitement Open Platform; (ii) variants of a strong but simple feature-based classifier model, which makes use of both unlexicalized and lexicalized features; and (iii) distributed representation models, including a baseline model and neural network sequence models.

# 3.1 Excitement Open Platform models
1508.05326#17
1508.05326#19
1508.05326
[ "1502.05698" ]
1508.05326#19
A large annotated corpus for learning natural language inference
The first class of models is from the Excitement Open Platform (EOP; Padó et al. 2014; Magnini et al. 2014), an open source platform for RTE research. EOP is a tool for quickly developing NLI systems while sharing components such as common lexical resources and evaluation sets. We evaluate two algorithms included in the distribution: a simple edit-distance based algorithm and a classifier-based algorithm, the latter both in a bare form and augmented with EOP's full suite of lexical resources. Our initial goal was to better understand the difficulty of the task of classifying SNLI corpus inferences, rather than necessarily the performance of a state-of-the-art RTE system. We approached this by running the same system on several data sets: our own test set, the SICK test data, and the standard RTE-3 test set (Giampiccolo et al., 2007). We report results in Table 4. Each of the models was separately trained on the training set of each corpus. All models are evaluated only on 2-class entailment. To convert 3-class problems like SICK and SNLI to this setting, all instances of contradiction and unknown are converted to nonentailment. This yields a most-frequent-class baseline accuracy of 66% on SNLI, and 71% on SICK. This is intended primarily to demonstrate the difficulty of the task, rather than necessarily the performance of a state-of-the-art RTE system. The edit distance algorithm tunes the weight of the three case-insensitive edit distance operations on the training set, after removing stop words. In addition to the base classifier-based system distributed with the platform, we train a variant which includes information from WordNet (Miller, 1995) and VerbOcean (Chklovski and Pantel, 2004), and makes use of features based on tree patterns and dependency tree skeletons (Wang and Neumann, 2007).

# 3.2 Lexicalized Classifier

Unlike the RTE datasets, SNLI's size supports approaches which make use of rich lexicalized features.
1508.05326#18
1508.05326#20
1508.05326
[ "1502.05698" ]
1508.05326#20
A large annotated corpus for learning natural language inference
We evaluate a simple lexicalized classifier to explore the ability of non-specialized models to exploit these features in lieu of more involved language understanding. Our classifier implements 6 feature types; 3 unlexicalized and 3 lexicalized:

1. The BLEU score of the hypothesis with respect to the premise, using an n-gram length between 1 and 4.

2. The length difference between the hypothesis and the premise, as a real-valued feature.

3. The overlap between words in the premise and hypothesis, both as an absolute count and a percentage of possible overlap, and both over all words and over just nouns, verbs, adjectives, and adverbs.
1508.05326#19
1508.05326#21
1508.05326
[ "1502.05698" ]
1508.05326#21
A large annotated corpus for learning natural language inference
4. An indicator for every unigram and bigram in the hypothesis.

5. Cross-unigrams: for every pair of words across the premise and hypothesis which share a POS tag, an indicator feature over the two words.

6. Cross-bigrams: for every pair of bigrams across the premise and hypothesis which share a POS tag on the second word, an indicator feature over the two bigrams.

We report results in Table 5, along with ablation studies for removing the cross-bigram features (leaving only the cross-unigram feature) and
1508.05326#20
1508.05326#22
1508.05326
[ "1502.05698" ]
1508.05326#22
A large annotated corpus for learning natural language inference
for removing all lexicalized features. On our large corpus in particular, there is a substantial jump in accuracy from using lexicalized features, and an- other from using the very sparse cross-bigram fea- tures. The latter result suggests that there is value in letting the classiï¬ er automatically learn to rec- ognize structures like explicit negations and adjec- tive modiï¬ cation. A similar result was shown in Wang and Manning (2012) for bigram features in sentiment analysis. It is surprising that the classiï¬ er performs as well as it does without any notion of alignment or tree transformations. Although we expect that richer models would perform better, the results suggest that given enough data, cross bigrams with the noisy part-of-speech overlap constraint can produce an effective model. # 3.3 Sentence embeddings and NLI SNLI is suitably large and diverse to make it pos- sible to train neural network models that produce distributed representations of sentence meaning. In this section, we compare the performance of three such models on the corpus. To focus specif- ically on the strengths of these models at produc- ing informative sentence representations, we use sentence embedding as an intermediate step in the NLI classiï¬ cation task: each model must produce a vector representation of each of the two sen- tences without using any context from the other sentence, and the two resulting vectors are then passed to a neural network classiï¬ er which pre- dicts the label for the pair. This choice allows us to focus on existing models for sentence embedding, and it allows us to evaluate the ability of those models to learn useful representations of mean- ing (which may be independently useful for sub- sequent tasks), at the cost of excluding from con- 3-way softmax classiï¬ er 200d tanh layer 200d tanh layer 200d tanh layer 100d premise 100d hypothesis sentence model with premise input sentence model with hypothesis input Figure 3: The neural network classiï¬ cation archi- tecture: for each sentence embedding model eval- uated in Tables 6 and 7, two identical copies of the model are run with the two sentences as input, and their outputs are used as the two 100d inputs shown here.
1508.05326#21
1508.05326#23
1508.05326
[ "1502.05698" ]
1508.05326#23
A large annotated corpus for learning natural language inference
sideration possible strong neural models for NLI that directly compare the two inputs at the word or phrase level. Our neural network classiï¬ er, depicted in Fig- ure 3 (and based on a one-layer model in Bow- man et al. 2015), is simply a stack of three 200d tanh layers, with the bottom layer taking the con- catenated sentence representations as input and the top layer feeding a softmax classiï¬ er, all trained jointly with the sentence embedding model itself. We test three sentence embedding models, each set to use 100d phrase and sentence embeddings. Our baseline sentence embedding model simply sums the embeddings of the words in each sen- tence. In addition, we experiment with two simple sequence embedding models: a plain RNN and an LSTM RNN (Hochreiter and Schmidhuber, 1997). The word embeddings for all of the models are initialized with the 300d reference GloVe vectors (840B token version, Pennington et al. 2014) and ï¬ ne-tuned as part of training. In addition, all of the models use an additional tanh neural net- work layer to map these 300d embeddings into the lower-dimensional phrase and sentence em- bedding space. All of the models are randomly initialized using standard techniques and trained using AdaDelta (Zeiler, 2012) minibatch SGD un- til performance on the development set stops im- proving. We applied L2 regularization to all mod- els, manually tuning the strength coefï¬ cient λ for each, and additionally applied dropout (Srivastava et al., 2014) to the inputs and outputs of the sen- Sentence model Train Test 100d Sum of words 100d RNN 100d LSTM RNN 79.3 73.1 84.8 75.3 72.2 77.6 Table 6: Accuracy in 3-class classiï¬ cation on our training and test sets for each model. tence embedding models (though not to its internal connections) with a ï¬ xed dropout rate. All mod- els were implemented in a common framework for this paper, and the implementations will be made available at publication time. The results are shown in Table 6. The sum of words model performed slightly worse than the fundamentally similar lexicalized classiï¬
1508.05326#22
1508.05326#24
1508.05326
[ "1502.05698" ]
1508.05326#24
A large annotated corpus for learning natural language inference
erâ while the sum of words model can use pretrained word embeddings to better handle rare words, it lacks even the rudimentary sensitivity to word or- der that the lexicalized modelâ s bigram features provide. Of the two RNN models, the LSTMâ s more robust ability to learn long-term dependen- cies serves it well, giving it a substantial advan- tage over the plain RNN, and resulting in perfor- mance that is essentially equivalent to the lexical- ized classiï¬ er on the test set (LSTM performance near the stopping iteration varies by up to 0.5% between evaluation steps). While the lexicalized model ï¬ ts the training set almost perfectly, the gap between train and test set accuracy is relatively small for all three neural network models, suggest- ing that research into signiï¬ cantly higher capacity versions of these models would be productive. # 3.4 Analysis and discussion Figure 4 shows a learning curve for the LSTM and the lexicalized and unlexicalized feature-based models. It shows that the large size of the corpus is crucial to both the LSTM and the lexicalized model, and suggests that additional data would yield still better performance for both. In addi- tion, though the LSTM and the lexicalized model show similar performance when trained on the cur- rent full corpus, the somewhat steeper slope for the LSTM hints that its ability to learn arbitrar- ily structured representations of sentence mean- ing may give it an advantage over the more con- strained lexicalized model on still larger datasets. We were struck by the speed with which the lexicalized classiï¬ er outperforms its unlexicalized Unlexicalized â 4~ Lexicalized LSTM ca i=) % Accuracy w oN x 36 tJ S L 3S 30 1 10 100 1,000 10,000 â 100,000 1,000,000 Training pairs used (log scale) Figure 4: A learning curve showing how the baseline classiï¬
1508.05326#23
1508.05326#25
1508.05326
[ "1502.05698" ]
1508.05326#25
A large annotated corpus for learning natural language inference
ers and the LSTM perform when trained to convergence on varied amounts of train- ing data. The y-axis starts near a random-chance accuracy of 33%. The minibatch size of 64 that we used to tune the LSTM sets a lower bound on data for that model. counterpart. With only 100 training examples, the cross-bigram classiï¬ er is already performing bet- ter. Empirically, we ï¬ nd that the top weighted features for the classiï¬ er trained on 100 examples tend to be high precision entailments; e.g., playing â
1508.05326#24
1508.05326#26
1508.05326
[ "1502.05698" ]
1508.05326#26
A large annotated corpus for learning natural language inference
outside (most scenes are outdoors), a banana â person eating. If relatively few spurious entail- ments get high weightâ as it appears is the caseâ then it makes sense that, when these do ï¬ re, they boost accuracy in identifying entailments. There are revealing patterns in the errors com- mon to all the models considered here. Despite the large size of the training corpus and the distri- butional information captured by GloVe initializa- tion, many lexical relationships are still misana- lyzed, leading to incorrect predictions of indepen- dent, even for pairs that are common in the train- ing corpus like beach/surf and sprinter/runner. Semantic mistakes at the phrasal level (e.g., pre- dicting contradiction for A male is placing an order in a deli/A man buying a sandwich at a deli) indicate that additional attention to composi- tional semantics would pay off. However, many of the persistent problems run deeper, to inferences that depend on world knowledge and context- speciï¬ c inferences, as in the entailment pair A race car driver leaps from a burning car/A race car driver escaping danger, for which both the lex- icalized classiï¬
1508.05326#25
1508.05326#27
1508.05326
[ "1502.05698" ]
1508.05326#27
A large annotated corpus for learning natural language inference
er and the LSTM predict neutral. In other cases, the modelsâ attempts to shortcut this kind of inference through lexical cues can lead them astray. Some of these examples have quali- ties reminiscent of Winograd schemas (Winograd, 1972; Levesque, 2013). For example, all the mod- els wrongly predict entailment for A young girl throws sand toward the ocean/A girl canâ t stand the ocean, presumably because of distributional associations between throws and canâ t stand.
1508.05326#26
1508.05326#28
1508.05326
[ "1502.05698" ]
1508.05326#28
A large annotated corpus for learning natural language inference
Analysis of the modelsâ predictions also yields insights into the extent to which they grapple with event and entity coreference. For the most part, the original image prompts contained a focal element that the caption writer identiï¬ ed with a syntac- tic subject, following information structuring con- ventions associating subjects and topics in English (Ward and Birner, 2004). Our annotators generally followed suit, writing sentences that, while struc- turally diverse, share topic/focus (theme/rheme) structure with their premises. This promotes a coherent, situation-speciï¬ c construal of each sen- tence pair.
1508.05326#27
1508.05326#29
1508.05326
[ "1502.05698" ]
1508.05326#29
A large annotated corpus for learning natural language inference
This is information that our models can easily take advantage of, but it can lead them astray. For instance, all of them stumble with the amusingly simple case A woman prepares ingre- dients for a bowl of soup/A soup bowl prepares a woman, in which prior expectations about paral- lelism are not met. Another headline example of this type is A man wearing padded arm protec- tion is being bitten by a German shepherd dog/A man bit a dog, which all the models wrongly di- agnose as entailment, though the sentences report two very different stories. A model with access to explicit information about syntactic or semantic structure should perform better on cases like these. # 4 Transfer learning with SICK To the extent that successfully training a neural network model like our LSTM on SNLI forces that model to encode broadly accurate representations of English scene descriptions and to build an en- tailment classiï¬ er over those relations, we should expect it to be readily possible to adapt the trained model for use on other NLI tasks. In this section, we evaluate on the SICK entailment task using a simple transfer learning method (Pratt et al., 1991) and achieve competitive results. To perform transfer, we take the parameters of the LSTM RNN model trained on SNLI and use them to initialize a new model, which is trained from that point only on the training portion of SICK. The only newly initialized parameters are Training sets Train Test Our data only SICK only Our data and SICK (transfer) 42.0 100.0 99.9 46.7 71.3 80.8 Table 7: LSTM 3-class accuracy on the SICK train and test sets under three training regimes. softmax layer parameters and the embeddings for words that appear in SICK, but not in SNLI (which are populated with GloVe embeddings as above). We use the same model hyperparameters that were used to train the original model, with the excep- tion of the L2 regularization strength, which is re-tuned. We additionally transfer the accumula- tors that are used by AdaDelta to set the learn- ing rates. This lowers the starting learning rates, and is intended to ensure that the model does not learn too quickly in its ï¬ rst few epochs after trans- fer and destroy the knowledge accumulated in the pre-transfer phase of training.
1508.05326#28
1508.05326#30
1508.05326
[ "1502.05698" ]
1508.05326#30
A large annotated corpus for learning natural language inference
The results are shown in Table 7. Training on SICK alone yields poor performance, and the model trained on SNLI fails when tested on SICK data, labeling more neutral examples as contradic- tions than correctly, possibly as a result of subtle differences in how the labeling task was presented. In contrast, transferring SNLI representations to SICK yields the best performance yet reported for an unaugmented neural network model, surpasses the available EOP models, and approaches both the overall state of the art at 84.6% (Lai and Hock- enmaier, 2014) and the 84% level of interannota- tor agreement, which likely represents an approx- imate performance ceiling. This suggests that the introduction of a large high-quality corpus makes it possible to train representation-learning models for sentence meaning that are competitive with the best hand-engineered models on inference tasks. We attempted to apply this same transfer evalu- ation technique to the RTE-3 challenge, but found that the small training set (800 examples) did not allow the model to adapt to the unfamiliar genre of text used in that corpus, such that no training con- ï¬ guration yielded competitive performance. Fur- ther research on effective transfer learning on small data sets with neural models might facilitate improvements here. # 5 Conclusion Natural languages are powerful vehicles for rea- soning, and nearly all questions about meaning- fulness in language can be reduced to questions of entailment and contradiction in context. This sug- gests that NLI is an ideal testing ground for the- ories of semantic representation, and that training for NLI tasks can provide rich domain-general se- mantic representations. To date, however, it has not been possible to fully realize this potential due to the limited nature of existing NLI resources. This paper sought to remedy this with a new, large- scale, naturalistic corpus of sentence pairs labeled for entailment, contradiction, and independence. We used this corpus to evaluate a range of models, and found that both simple lexicalized models and neural network models perform well, and that the representations learned by a neural network model on our corpus can be used to dramatically improve performance on a standard challenge dataset. We hope that SNLI presents valuable training data and a challenging testbed for the continued application of machine learning to semantic representation.
1508.05326#29
1508.05326#31
1508.05326
[ "1502.05698" ]
1508.05326#31
A large annotated corpus for learning natural language inference
# Acknowledgments We gratefully acknowledge support from a Google Faculty Research Award, a gift from Bloomberg L.P., the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filter- ing of Text (DEFT) Program under Air Force Re- search Laboratory (AFRL) contract no. FA8750- 13-2-0040, the National Science Foundation un- der grant no. IIS 1159679, and the Department of the Navy, Ofï¬ ce of Naval Research, under grant no. N00014-10-1-0109.
1508.05326#30
1508.05326#32
1508.05326
[ "1502.05698" ]
1508.05326#32
A large annotated corpus for learning natural language inference
Any opinions, ï¬ nd- ings, and conclusions or recommendations ex- pressed in this material are those of the authors and do not necessarily reï¬ ect the views of Google, Bloomberg L.P., DARPA, AFRL NSF, ONR, or the US government. We also thank our many ex- cellent Mechanical Turk contributors. # References Johan Bos and Katja Markert. 2005. Recognising In Proc. textual entailment with logical inference. EMNLP.
1508.05326#31
1508.05326#33
1508.05326
[ "1502.05698" ]
1508.05326#33
A large annotated corpus for learning natural language inference
Samuel R. Bowman, Christopher Potts, and Christo- pher D. Manning. 2015. Recursive neural networks In Proc. of the 3rd can learn logical semantics. Workshop on Continuous Vector Space Models and their Compositionality. Timothy Chklovski and Patrick Pantel. 2004. Verb- Ocean: Mining the web for ï¬ ne-grained semantic verb relations. In Proc. EMNLP. Cleo Condoravdi, Dick Crouch, Valeria de Paiva, Rein- hard Stolle, and Daniel G. Bobrow. 2003.
1508.05326#32
1508.05326#34
1508.05326
[ "1502.05698" ]
1508.05326#34
A large annotated corpus for learning natural language inference
En- In tailment, intensionality and text understanding. Proc. of the HLT-NAACL 2003 Workshop on Text Meaning. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising textual entailment challenge. In Machine learning challenges. Evalu- ating predictive uncertainty, visual object classiï¬ ca- tion, and recognising tectual entailment, pages 177â 190. Springer. Marie-Catherine de Marneffe, Anna N. Rafferty, and Christopher D. Manning. 2008.
1508.05326#33
1508.05326#35
1508.05326
[ "1502.05698" ]
1508.05326#35
A large annotated corpus for learning natural language inference
Finding contradic- tions in text. In Proc. ACL. W. Nelson Francis and Henry Kucera. 1979. Brown corpus manual. Brown University. Yaroslav Fyodorov, Yoad Winter, and Nissim Francez. In Proc. 2000. A natural logic inference system. of the 2nd Workshop on Inference in Computational Semantics. Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recog- nizing textual entailment challenge. In Proc. of the ACL-PASCAL workshop on textual entailment and paraphrasing. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Neural computation, Long short-term memory. 9(8):1735â
1508.05326#34
1508.05326#36
1508.05326
[ "1502.05698" ]
1508.05326#36
A large annotated corpus for learning natural language inference
1780. Jerrold J. Katz. 1972. Semantic Theory. Harper & Row, New York. Dan Klein and Christopher D. Manning. 2003. Accu- rate unlexicalized parsing. In Proc. ACL. Alice Lai and Julia Hockenmaier. 2014. Illinois-LH: A denotational and distributional approach to seman- tics. In Proc. SemEval. Hector J. Levesque. 2013.
1508.05326#35
1508.05326#37
1508.05326
[ "1502.05698" ]
1508.05326#37
A large annotated corpus for learning natural language inference
On our best behaviour. In Proc. AAAI. Omer Levy, Ido Dagan, and Jacob Goldberger. 2014. Focused entailment graphs for open IE propositions. In Proc. CoNLL. Bill MacCartney and Christopher D Manning. 2009. An extended model of natural logic. In Proc. of the Eighth International Conference on Computational Semantics. Bernardo Magnini, Roberto Zanoli, Ido Dagan, Kathrin Eichler, G¨unter Neumann, Tae-Gil Noh, Sebastian Pado, Asher Stern, and Omer Levy. 2014. The Ex- citement Open Platform for textual inferences. Proc. ACL.
1508.05326#36
1508.05326#38
1508.05326
[ "1502.05698" ]
1508.05326#38
A large annotated corpus for learning natural language inference
Marco Marelli, Luisa Bentivogli, Marco Baroni, Raf- faella Bernardi, Stefano Menini, and Roberto Zam- parelli. 2014a. SemEval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and tex- tual entailment. In Proc. SemEval. Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zam- parelli. 2014b. A SICK cure for the evaluation of compositional distributional semantic models. In Proc. LREC. a lexical database for english.
1508.05326#37
1508.05326#39
1508.05326
[ "1502.05698" ]
1508.05326#39
A large annotated corpus for learning natural language inference
Communications of the ACM, 38(11):39â 41. Sebastian Pad´o, Tae-Gil Noh, Asher Stern, Rui Wang, and Roberto Zanoli. 2014. Design and realization of a modular architecture for textual entailment. Jour- nal of Natural Language Engineering. Ellie Pavlick, Johan Bos, Malvina Nissim, Charley Beller, Ben Van Durme, and Chris Callison-Burch. 2015. PPDB 2.0: Better paraphrase ranking, ï¬ ne- grained entailment relations, word embeddings, and style classiï¬
1508.05326#38
1508.05326#40
1508.05326
[ "1502.05698" ]
1508.05326#40
A large annotated corpus for learning natural language inference
cation. In Proc. ACL. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. In Proc. EMNLP. Lorien Y Pratt, Jack Mostow, Candace A Kamm, and Ace A Kamm. 1991. Direct transfer of learned in- formation among neural networks. In Proc. AAAI. Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep mod- els for semantic compositionality over a sentiment treebank. In Proc. EMNLP. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overï¬
1508.05326#39
1508.05326#41
1508.05326
[ "1502.05698" ]
1508.05326#41
A large annotated corpus for learning natural language inference
tting. JMLR. Johan van Benthem. 2008. A brief history of natural In M. Chakraborty, B. L¨owe, M. Nath Mi- logic. tra, and S. Sarukki, editors, Logic, Navya-Nyaya and Applications: Homage to Bimal Matilal. Col- lege Publications. 2012. Baselines and bigrams: Simple, good sentiment and topic classiï¬ cation. In Proc. ACL. Rui Wang and G¨unter Neumann. 2007. Recognizing textual entailment using sentence similarity based on dependency tree skeletons. In ACL-PASCAL Work- shop on Textual Entailment and Paraphrasing. Information structure and non-canonical syntax. In Laurence R. Horn and Gregory Ward, editors, Handbook of Prag- matics, pages 153â 174. Blackwell, Oxford. Jason Weston, Antoine Bordes, Sumit Chopra, and 2015a. Towards AI-complete Tomas Mikolov. question answering: A set of prerequisite toy tasks. arXiv:1502.05698. Jason Weston, Sumit Chopra, and Antoine Bordes. 2015b. Memory networks. In Proc. ICLR. Terry Winograd. 1972. Understanding natural lan- guage. Cognitive Psychology, 3(1):1â 191. Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014.
1508.05326#40
1508.05326#42
1508.05326
[ "1502.05698" ]
1508.05326#42
A large annotated corpus for learning natural language inference
From image descriptions to vi- sual denotations: New similarity metrics for seman- tic inference over event descriptions. TACL, 2:67â 78. Matthew D. Zeiler. 2012. adaptive learning rate method. arXiv:1212.5701. ADADELTA: an arXiv preprint
1508.05326#41
1508.05326
[ "1502.05698" ]
1506.08909#0
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
arXiv:1506.08909v3 [cs.CL] 4 Feb 2016

# The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems

Ryan Lowe†*, Nissan Pow*, Iulian V. Serban† and Joelle Pineau*
*School of Computer Science, McGill University, Montreal, Canada
†Department of Computer Science and Operations Research, Université de Montréal, Montreal, Canada

# Abstract

This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This provides a unique resource for research into building dialogue managers based on neural language models that can make use of large amounts of unlabeled data. The dataset has both the multi-turn property of conversations in the Dialog State Tracking Challenge datasets, and the unstructured nature of interactions from microblog services such as Twitter. We also describe two neural learning architectures suitable for analyzing this dataset, and provide benchmark performance on the task of selecting the best next response.
1506.08909#1
1506.08909
[ "1503.02364" ]
1506.08909#1
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
# Introduction The ability for a computer to converse in a nat- ural and coherent manner with a human has long been held as one of the primary objectives of artiï¬ - cial intelligence (AI). In this paper we consider the problem of building dialogue agents that have the ability to interact in one-on-one multi-turn con- versations on a diverse set of topics. We primar- ily target unstructured dialogues, where there is no a priori logical representation for the informa- tion exchanged during the conversation. This is in contrast to recent systems which focus on struc- tured dialogue tasks, using a slot-ï¬ lling represen- tation [10, 27, 32]. methods, more speciï¬
1506.08909#0
1506.08909#2
1506.08909
[ "1503.02364" ]
1506.08909#2
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
cally with neural architec- tures [1]; however, it is worth noting that many of the most successful approaches, in particular convolutional and recurrent neural networks, were known for many years prior. It is therefore rea- sonable to attribute this progress to three major factors: 1) the public distribution of very large rich datasets [5], 2) the availability of substantial computing power, and 3) the development of new training methods for neural architectures, in par- ticular leveraging unlabeled data. Similar progress has not yet been observed in the development of dialogue systems. We hypothesize that this is due to the lack of sufï¬ ciently large datasets, and aim to overcome this barrier by providing a new large corpus for research in multi-turn conversation. The new Ubuntu Dialogue Corpus consists of almost one million two-person conversations ex- tracted from the Ubuntu chat logs1, used to receive technical support for various Ubuntu-related prob- lems. The conversations have an average of 8 turns each, with a minimum of 3 turns. All conversa- tions are carried out in text form (not audio). The dataset is orders of magnitude larger than struc- tured corpuses such as those of the Dialogue State Tracking Challenge [32]. It is on the same scale as recent datasets for solving problems such as ques- tion answering and analysis of microblog services, such as Twitter [22, 25, 28, 33], but each conversa- tion in our dataset includes several more turns, as well as longer utterances.
1506.08909#1
1506.08909#3
1506.08909
[ "1503.02364" ]
1506.08909#3
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
Furthermore, because it targets a speciï¬ c domain, namely technical sup- port, it can be used as a case study for the devel- opment of AI agents in targeted applications, in contrast to chatbox agents that often lack a well- deï¬ ned goal [26]. We observe that in several subï¬ elds of AIâ computer vision, speech recognition, machine translationâ fundamental break-throughs were achieved in recent years using machine learning In addition to the corpus, we present learning architectures suitable for analyzing this dataset, ranging from the simple frequency-inverse docu-
1506.08909#2
1506.08909#4
1506.08909
[ "1503.02364" ]
1506.08909#4
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
†The first two authors contributed equally.
1 These logs are available from 2004 to 2015 at http://irclogs.ubuntu.com/

ment frequency (TF-IDF) approach, to more sophisticated neural models including a Recurrent Neural Network (RNN) and a Long Short-Term Memory (LSTM) architecture. We provide benchmark performance of these algorithms, trained with our new corpus, on the task of selecting the best next response, which can be achieved without requiring any human labeling. The dataset is ready for public release2. The code developed for the empirical results is also available3.

# 2 Related Work
1506.08909#3
1506.08909#5
1506.08909
[ "1503.02364" ]
1506.08909#5
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
We brieï¬ y review existing dialogue datasets, and some of the more recent learning architectures used for both structured and unstructured dia- logues. This is by no means an exhaustive list (due to space constraints), but surveys resources most related to our contribution. A list of datasets discussed is provided in Table 1. # 2.1 Dialogue Datasets The Switchboard dataset [8], and the Dialogue State Tracking Challenge (DSTC) datasets [32] have been used to train and validate dialogue man- agement systems for interactive information re- trieval.
1506.08909#4
1506.08909#6
1506.08909
[ "1503.02364" ]
1506.08909#6
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
The problem is typically formalized as a slot ï¬ lling task, where agents attempt to predict the goal of a user during the conversation. These datasets have been signiï¬ cant resources for struc- tured dialogues, and have allowed major progress in this ï¬ eld, though they are quite small compared to datasets currently used for training neural archi- tectures. Recently, a few datasets have been used con- taining unstructured dialogues extracted from Twitter4. Ritter et al. [21] collected 1.3 million conversations; this was extended in [28] to take ad- vantage of longer contexts by using A-B-A triples. Shang et al. [25] used data from a similar Chinese website called Weibo5. However to our knowl- edge, these datasets have not been made public, and furthermore, the post-reply format of such mi- croblogging services is perhaps not as represen- tative of natural dialogue between humans as the continuous stream of messages in a chat room.
1506.08909#5
1506.08909#7
1506.08909
[ "1503.02364" ]
1506.08909#7
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
In is now https://github.com/rkadlec/ available: ubuntu-ranking-dataset-creator. This ver- sion makes some adjustments and ï¬ xes some bugs from the ï¬ rst version. 3http://github.com/npow/ubottu 4https://twitter.com/ 5http://www.weibo.com/ fact, Ritter et al. estimate that only 37% of posts on Twitter are â conversational in natureâ , and 69% of their collected data contained exchanges of only length 2 [21]. We hypothesize that chat-room style messaging is more closely correlated to human-to- human dialogue than micro-blogging websites, or forum-based sites such as Reddit. Part of the Ubuntu chat logs have previously been aggregated into a dataset, called the Ubuntu Chat Corpus [30]. However that resource pre- serves the multi-participant structure and thus is less amenable to the investigation of more tradi- tional two-party conversations. Also weakly related to our contribution is the problem of question-answer systems. Several datasets of question-answer pairs are available [3], however these interactions are much shorter than what we seek to study. # 2.2 Learning Architectures Most dialogue research has historically focused on structured slot-ï¬ lling tasks [24]. Various ap- proaches were proposed, yet few attempts lever- age more recent developments in neural learning architectures. A notable exception is the work of Henderson et al. [11], which proposes an RNN structure, initialized with a denoising autoencoder, to tackle the DSTC 3 domain. Work on unstructured dialogues, recently pi- oneered by Ritter et al. [22], proposed a re- sponse generation model for Twitter data based on ideas from Statistical Machine Translation. This is shown to give superior performance to previ- ous information retrieval (e.g. nearest neighbour) approaches [14]. This idea was further devel- oped by Sordoni et al. [28] to exploit information from a longer context, using a structure similar to the Recurrent Neural Network Encoder-Decoder model [4]. This achieves rather poor performance on A-B-A Twitter triples when measured by the BLEU score (a standard for machine translation), yet performs comparatively better than the model of Ritter et al. [22]. Their results are also veriï¬ ed with a human-subject study.
1506.08909#6
1506.08909#8
1506.08909
[ "1503.02364" ]
1506.08909#8
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
A similar encoder- decoder framework is presented in [25]. This model uses one RNN to transform the input to some vector representation, and another RNN to â decodeâ this representation to a response by gen- erating one word at a time. This model is also eval- uated in a human-subject study, although much smaller in size than in [28]. Overall, these models Dataset Type Task # Dialogues # Utterances # Words Description Switchboard [8] DSTC1 [32] DSTC2 [10] DSTC3 [9] DSTC4[13] Twitter Corpus [21] Twitter Triple Corpus [28] Sina Weibo [25] Ubuntu Dialogue Corpus Human-human spoken Human-computer spoken Human-computer spoken Human-computer spoken Human-human spoken Human-human micro-blog Human-human micro-blog Human-human micro-blog Human-human chat Various State tracking State tracking State tracking State tracking Next utterance generation Next utterance generation Next utterance generation Next utterance classiï¬ cation 2,400 15,000 3,000 2,265 35 1,300,000 29,000,000 4,435,959 930,000 â 210,000 24,000 15,000 â 3,000,000 87,000,000 8,871,918 7,100,000 3,000,000 â â â â â â 100,000,000 Telephone conversations on pre-speciï¬ ed topics Bus ride information system Restaurant booking system Tourist information system 21 hours of tourist info exchange over Skype Post/ replies extracted from Twitter A-B-A triples from Twitter replies Post/ reply pairs extracted from Weibo Extracted from Ubuntu Chat Logs Table 1: A selection of structured and unstructured large-scale datasets applicable to dialogue systems. Faded datasets are not publicly available.
1506.08909#7
1506.08909#9
1506.08909
[ "1503.02364" ]
1506.08909#9
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
The last entry is our contribution. highlight the potential of neural learning architec- tures for interactive systems, yet so far they have been limited to very short conversations. # 3 The Ubuntu Dialogue Corpus We seek a large dataset for research in dialogue systems with the following properties: â ¢ Two-way (or dyadic) conversation, as op- posed to multi-participant chat, preferably human-human. â ¢ Large number of conversations; 105 â 106 is typical of datasets used for neural-network learning in other areas of AI.
1506.08909#8
1506.08909#10
1506.08909
[ "1503.02364" ]
1506.08909#10
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
â ¢ Many conversations with several turns (more than 3). â ¢ Task-speciï¬ c domain, as opposed to chatbot systems. All of these requirements are satisï¬ ed by the Ubuntu Dialogue Corpus presented in this paper. # 3.1 Ubuntu Chat Logs The Ubuntu Chat Logs refer to a collection of logs from Ubuntu-related chat rooms on the Freenode Internet Relay Chat (IRC) network. This protocol allows for real-time chat between a large number of participants. Each chat room, or channel, has a particular topic, and every channel participant can see all the messages posted in a given chan- nel. Many of these channels are used for obtaining technical support with various Ubuntu issues.
1506.08909#9
1506.08909#11
1506.08909
[ "1503.02364" ]
1506.08909#11
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
a potential solution, after ï¬ rst addressing the â user- nameâ of the ï¬ rst user. This is called a name men- tion [29], and is done to avoid confusion in the channel â at any given time during the day, there can be between 1 and 20 simultaneous conversa- tions happening in some channels. In the most popular channels, there is almost never a time when only one conversation is occurring; this ren- ders it particularly problematic to extract dyadic dialogues. A conversation between a pair of users generally stops when the problem has been solved, though some users occasionally continue to dis- cuss a topic not related to Ubuntu. Despite the nature of the chat room being a con- stant stream of messages from multiple users, it is through the fairly rigid structure in the messages that we can extract the dialogues between users. Figure 4 shows an example chat room conversa- tion from the #ubuntu channel as well as the ex- tracted dialogues, which illustrates how users usu- ally state the username of the intended message recipient before writing their reply (we refer to all replies and initial questions as â utterancesâ ). For example, it is clear that users â Taruâ and â kujaâ are engaged in a dialogue, as are users â Oldâ and â bur[n]erâ , while user â _pmâ is asking an initial question, and â LiveCDâ is perhaps elaborating on a previous comment.
1506.08909#10
1506.08909#12
1506.08909
[ "1503.02364" ]
1506.08909#12
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
# 3.2 Dataset Creation As the contents of each channel are moderated, most interactions follow a similar pattern. A new user joins the channel, and asks a general ques- tion about a problem they are having with Ubuntu. Then, another more experienced user replies with In order to create the Ubuntu Dialogue Corpus, ï¬ rst a method had to be devised to extract dyadic dialogues from the chat room multi-party conver- sations. The ï¬ rst step was to separate every mes- sage into 4-tuples of (time, sender, recipient, utter- ance). Given these 4-tuples, it is straightforward to group all tuples where there is a matching sender and recipient. Although it is easy to separate the time and the sender from the rest, ï¬ nding the in- tended recipient of the message is not always triv- ial. 3.2.1 Recipient Identiï¬ cation While in most cases the recipient is the ï¬
1506.08909#11
1506.08909#13
1506.08909
[ "1503.02364" ]
1506.08909#13
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
rst word of the utterance, it is sometimes located at the end, or not at all in the case of initial questions. Fur- thermore, some users choose names correspond- ing to common English words, such as â theâ or â stopâ , which could lead to many false positives. In order to solve this issue, we create a dictionary of usernames from the current and previous days, and compare the ï¬ rst word of each utterance to its If a match is found, and the word does entries. not correspond to a very common English word6, it is assumed that this user was the intended recip- ient of the message. If no matches are found, it is assumed that the message was an initial question, and the recipient value is left empty. 3.2.2 Utterance Creation The dialogue extraction algorithm works back- wards from the ï¬ rst response to ï¬ nd the initial question that was replied to, within a time frame of 3 minutes. A ï¬ rst response is identiï¬ ed by the presence of a recipient name (someone from the recent conversation history). The initial question is identiï¬ ed to be the most recent utterance by the recipient identiï¬ ed in the ï¬ rst response. All utterances that do not qualify as a ï¬ rst re- sponse or an initial question are discarded; initial questions that do not generate any response are also discarded. We additionally discard conversa- tions longer than ï¬ ve utterances where one user says more than 80% of the utterances, as these are typically not representative of real chat dialogues. Finally, we consider only extracted dialogues that consist of 3 turns or more to encourage the model- ing of longer-term dependencies.
1506.08909#12
1506.08909#14
1506.08909
[ "1503.02364" ]
1506.08909#14
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
To alleviate the problem of 'holes' in the dialogue, where one user does not address the other explicitly, as in Figure 5, we check whether each user talks to someone else for the duration of their conversation. If not, all non-addressed utterances are added to the dialogue. An example conversation along with the extracted dialogues is shown in Figure 5. Note that we also concatenate all consecutive utterances from a given user.

6 We use the GNU Aspell spell checking dictionary.

Figure 1: Plot of number of conversations with a given number of turns (number of dialogues vs. number of turns per dialogue). Both axes use a log scale.

| # dialogues (human-human) | 930,000 |
| # utterances (in total) | 7,100,000 |
| # words (in total) | 100,000,000 |
| Min. # turns per dialogue | 3 |
| Avg. # turns per dialogue | 7.71 |
| Avg. # words per utterance | 10.34 |
| Median conversation length (min) | 6 |

Table 2: Properties of Ubuntu Dialogue Corpus.

We do not apply any further pre-processing (e.g. tokenization, stemming) to the data as released in the Ubuntu Dialogue Corpus. However the use of pre-processing is standard for most NLP systems, and was also used in our analysis (see Section 4).

# 3.2.3 Special Cases and Limitations

It is often the case that a user will post an initial question, and multiple people will respond to it with different answers. In this instance, each conversation between the first user and the user who replied is treated as a separate dialogue. This has the unfortunate side-effect of having the initial question appear multiple times in several dialogues. However the number of such cases is sufficiently small compared to the size of the dataset. Another issue to note is that the utterance posting time is not considered for segmenting conversations between two users. Even if two users have a conversation that spans multiple hours, or even days, this is treated as a single dialogue. However, such dialogues are rare. We include the posting time in the corpus so that other researchers may filter as desired.
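For readers who want to recompute Table 2-style statistics from extracted dialogues, a few lines suffice; this sketch is not from the paper and assumes each dialogue is simply a list of utterance strings, one per turn.

```python
def corpus_stats(dialogues):
    """dialogues: list of dialogues, each a list of utterance strings (one per turn)."""
    n_dialogues = len(dialogues)
    n_utterances = sum(len(d) for d in dialogues)
    n_words = sum(len(u.split()) for d in dialogues for u in d)
    return {
        "# dialogues": n_dialogues,
        "# utterances": n_utterances,
        "# words": n_words,
        "avg. turns per dialogue": n_utterances / n_dialogues,
        "avg. words per utterance": n_words / n_utterances,
    }
```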
1506.08909#13
1506.08909#15
1506.08909
[ "1503.02364" ]
1506.08909#15
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
# 3.3 Dataset Statistics Table 2 summarizes properties of the Ubuntu Dia- logue Corpus. One of the most important features of the Ubuntu chat logs is its size. This is cru- cial for research into building dialogue managers based on neural architectures. Another important characteristic is the number of turns in these dia- logues. The distribution of the number of turns is shown in Figure 1. It can be seen that the num- ber of dialogues and turns per dialogue follow an approximate power law relationship. # 3.4 Test Set Generation We set aside 2% of the Ubuntu Dialogue Corpus conversations (randomly selected) to form a test set that can be used for evaluation of response se- lection algorithms. Compared to the rest of the corpus, this test set has been further processed to extract a pair of (context, response, ï¬ ag) triples from each dialogue. The ï¬ ag is a Boolean vari- able indicating whether or not the response was the actual next utterance after the given context. The response is a target (output) utterance which we aim to correctly identify. The context consists of the sequence of utterances appearing in dialogue prior to the response. We create a pair of triples, where one triple contains the correct response (i.e. the actual next utterance in the dialogue), and the other triple contains a false response, sampled ran- domly from elsewhere within the test set.
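The test-set construction described above can be summarized with a short sketch (not the authors' code): it builds the positive triple from the true next utterance and one negative triple from a response drawn at random from elsewhere in the test set.

```python
import random

def make_test_triples(dialogue, all_test_responses, rng=random):
    """dialogue: list of utterance strings. Returns [(context, response, flag), ...]."""
    context, true_response = dialogue[:-1], dialogue[-1]
    context_text = " __EOS__ ".join(context)
    false_response = rng.choice(all_test_responses)
    return [
        (context_text, true_response, 1),   # actual next utterance
        (context_text, false_response, 0),  # randomly sampled distractor
    ]
```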
1506.08909#14
1506.08909#16
1506.08909
[ "1503.02364" ]
1506.08909#16
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
The flag is set to 1 in the first case and to 0 in the second case. An example pair is shown in Table 3. To make the task harder, we can move from pairs of responses (one correct, one incorrect) to a larger set of wrong responses (all with flag=0). In our experiments below, we consider both the case of 1 wrong response and 10 wrong responses.

| Context | Response | Flag |
| well, can I move the drives? __EOS__ ah not like that | I guess I could just get an enclosure and copy via USB | 1 |
| well, can I move the drives? __EOS__ ah not like that | you can use "ps ax" and "kill (PID #)" | 0 |

Table 3: Test set example with (context, reply, flag) format. The '__EOS__' tag is used to denote the end of an utterance within the context.

Since we want to learn to predict all parts of a conversation, as opposed to only the closing statement, we consider various portions of context for the conversations in the test set. The context size is determined stochastically using a simple formula:
1506.08909#15
1506.08909#17
1506.08909
[ "1503.02364" ]
1506.08909#17
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
c = min(t - 1, n - 1), where n = 10C/η + 2, η ~ Unif(C/2, 10C)

Here, C denotes the maximum desired context size, which we set to C = 20. The last term is the desired minimum context size, which we set to be 2. Parameter t is the actual length of that dialogue (thus the constraint that c ≤ t - 1), and n is a random number corresponding to the randomly sampled context length, that is selected to be inversely proportional to C. In practice, this leads to short test dialogues having short contexts, while longer dialogues are often broken into short or medium-length segments, with the occasional long context of 10 or more turns.

# 3.5 Evaluation Metric

We consider the task of best response selection. This can be achieved by processing the data as described in Section 3.4, without requiring any human labels.
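Read operationally, the context-length sampling from Section 3.4 above might look like the snippet below; it assumes the reconstructed form n = 10C/η + 2, and the rounding detail is our own choice since it is not given in the text.

```python
import random

def sample_context_length(t, C=20, rng=random):
    """Sample the number of context turns c for a dialogue of length t."""
    eta = rng.uniform(C / 2, 10 * C)
    n = int(10 * C / eta) + 2      # roughly between 3 and 22 for C = 20
    return min(t - 1, n - 1)
```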
1506.08909#16
1506.08909#18
1506.08909
[ "1503.02364" ]
1506.08909#18
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
This classiï¬ cation task is an adapta- tion of the recall and precision metrics previously applied to dialogue datasets [24]. A family of metrics often used in language tasks is Recall@k (denoted R@1 R@2, R@5 below). Here the agent is asked to select the k most likely responses, and it is correct if the true response is among these k candidates. Only the R@1 metric is relevant in the case of binary classiï¬ cation (as in the Table 3 example). Although a language model that performs well on response classiï¬ cation is not a gauge of good performance on next utterance generation, we hy- pothesize that improvements on a model with re- gards to the classiï¬ cation task will eventually lead to improvements for the generation task. See Sec- tion 6 for further discussion of this point. # 4 Learning Architectures for Unstructured Dialogues To provide further evidence of the value of our dataset for research into neural architectures for dialogue managers, we provide performance benchmarks for two neural learning algorithms, as well as one naive baseline. The approaches con- sidered are: TF-IDF, Recurrent Neural networks (RNN), and Long Short-Term Memory (LSTM). Prior to applying each method, we perform stan- dard pre-processing of the data using the NLTK7 library and Twitter tokenizer8 to parse each utter- ance. We use generic tags for various word cat- 7www.nltk.org/ 8http://www.ark.cs.cmu.edu/TweetNLP/ egories, such as names, locations, organizations, URLs, and system paths. To train the RNN and LSTM architectures, we process the full training Ubuntu Dialogue Corpus into the same format as the test set described in Section 3.4, extracting (context, response, ï¬ ag) triples from dialogues. For the training set, we do not sample the context length, but instead con- sider each utterance (starting at the 3rd one) as a potential response, with the previous utterances as its context. So a dialogue of length 10 yields 8 training examples. Since these are overlapping, they are clearly not independent, but we consider this a minor issue given the size of the dataset (we further alleviate the issue by shufï¬ ing the training examples).
1506.08909#17
1506.08909#19
1506.08909
[ "1503.02364" ]
1506.08909#19
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
Negative responses are selected at ran- dom from the rest of the training data. # 4.1 TF-IDF Term frequency-inverse document frequency is a statistic that intends to capture how important a given word is to some document, which in our case is the context [20]. It is a technique often used in document classiï¬ cation and information retrieval. The â term-frequencyâ term is simply a count of the number of times a word appears in a given context, while the â inverse document frequencyâ term puts a penalty on how often the word appears elsewhere in the corpus. The ï¬ nal score is calculated as the product of these two terms, and has the form: tï¬ df(w, d, D) = f (w, d)à log N |{d â D : w â d}| , where f (w, d) indicates the number of times word w appeared in context d, N is the total number of dialogues, and the denominator represents the number of dialogues in which the word w appears. For classiï¬ cation, the TF-IDF vectors are ï¬ rst calculated for the context and each of the candi- date responses. Given a set of candidate response vectors, the one with the highest cosine similarity to the context vector is selected as the output. For Recall@k, the top k responses are returned. # 4.2 RNN Recurrent neural networks are a variant of neural networks that allows for time-delayed directed cy- cles between units [17]. This leads to the forma- tion of an internal state of the network, ht, which allows it to model time-dependent data. The in- ternal state is updated at each time step as some
1506.08909#18
1506.08909#20
1506.08909
[ "1503.02364" ]
1506.08909#20
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
Figure 2: Diagram of our model. The RNNs have tied weights. c, r are the last hidden states from the RNNs. c_i, r_i are word vectors for the context and response, i < t. We consider contexts up to a maximum of t = 160.

function of the observed variables x_t, and the hidden state at the previous time step h_{t-1}. W_x and W_h are matrices associated with the input and hidden state:

h_t = f(W_h h_{t-1} + W_x x_t).

A diagram of an RNN can be seen in Figure 2. RNNs have been the primary building block of many current neural language models [22, 28], which use RNNs for an encoder and decoder. The first RNN is used to encode the given context, and the second RNN generates a response by using beam-search, where its initial hidden state is biased using the final hidden state from the first RNN. In our work, we are concerned with classification of responses, instead of generation. We build upon the approach in [2], which has also been recently applied to the problem of question answering [33].
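The update equation above corresponds to the following minimal NumPy sketch. It is illustrative only: the paper's models use learned parameters, and the nonlinearity f is not specified in this excerpt, so tanh is assumed.

```python
import numpy as np

def rnn_step(h_prev, x_t, W_h, W_x, f=np.tanh):
    """One recurrent update: h_t = f(W_h h_{t-1} + W_x x_t)."""
    return f(W_h @ h_prev + W_x @ x_t)

def encode(sequence, W_h, W_x, hidden_size):
    """Run the RNN over a sequence of word vectors and return the last hidden state,
    which the siamese model uses as the context (or response) embedding c (or r)."""
    h = np.zeros(hidden_size)
    for x_t in sequence:
        h = rnn_step(h, x_t, W_h, W_x)
    return h
```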
1506.08909#19
1506.08909#21
1506.08909
[ "1503.02364" ]
1506.08909#21
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
A diagram of an RNN can be seen in Figure 2. RNNs have been the primary building block of many current neural language models [22, 28], which use RNNs for an encoder and decoder. The ï¬ rst RNN is used to encode the given context, and the second RNN generates a response by us- ing beam-search, where its initial hidden state is biased using the ï¬ nal hidden state from the ï¬ rst RNN. In our work, we are concerned with classi- ï¬ cation of responses, instead of generation. We build upon the approach in [2], which has also been recently applied to the problem of question answering [33]. We utilize a siamese network consisting of two RNNs with tied weights to produce the embed- dings for the context and response. Given some input context and response, we compute their em- beddings â c, r â Rd, respectively â
1506.08909#20
1506.08909#22
1506.08909
[ "1503.02364" ]