Dataset schema: id, title, content, prechunk_id, postchunk_id, arxiv_id, references.
1605.07725#24
Adversarial Training Methods for Semi-Supervised Text Classification
falls to the 19th nearest neighbor for adversarial training and the 21st nearest neighbor for virtual adversarial training, with cosine distances of 0.463 and 0.464, respectively. For the baseline and random perturbation methods, the cosine distances were 0.361 and 0.377, respectively. In the other direction, the nearest neighbors to "bad" included "good" as the 4th nearest neighbor for the baseline and random perturbation methods. For both adversarial methods, "good" drops to the 36th nearest neighbor of "bad". We also investigated the 15 nearest neighbors to "great" and their cosine distances under the trained embeddings. The cosine distances under adversarial and virtual adversarial training (0.159–0.331) were much smaller than those under the baseline and random perturbation methods (0.244–0.399).
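The nearest-neighbour comparison described here is straightforward to reproduce once a trained embedding matrix is available. The sketch below is a generic numpy illustration, not the paper's code: the embedding matrix, vocabulary and query word are hypothetical stand-ins.

```python
import numpy as np

def nearest_neighbors(query, vocab, E, k=10):
    """Return the k nearest words to `query` under cosine distance.

    vocab : list of words, E : (V, d) array of word embeddings.
    """
    idx = vocab.index(query)
    q = E[idx]
    # cosine similarity of the query against every embedding
    sims = E @ q / (np.linalg.norm(E, axis=1) * np.linalg.norm(q) + 1e-12)
    sims[idx] = -np.inf                     # exclude the query word itself
    order = np.argsort(-sims)[:k]           # most similar first
    return [(vocab[i], 1.0 - sims[i]) for i in order]   # (word, cosine distance)

# Hypothetical usage with a toy vocabulary and random embeddings:
vocab = ["good", "bad", "great", "decent", "terrible"]
E = np.random.randn(len(vocab), 256).astype(np.float32)
print(nearest_neighbors("good", vocab, E, k=3))
```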
1605.07725#23
1605.07725#25
1605.07725
[ "1603.04467" ]
1605.07725#25
Adversarial Training Methods for Semi-Supervised Text Classification
Table 2: Test performance on the IMDB sentiment classification task. * indicates using pretrained embeddings of CNN and bidirectional LSTM.

Method | Test error rate
Baseline (without embedding normalization) | 7.33%
Baseline | 7.39%
Random perturbation with labeled examples | 7.20%
Random perturbation with labeled and unlabeled examples | 6.78%
Adversarial | 6.21%
Virtual Adversarial | 5.91%
Adversarial + Virtual Adversarial | 6.09%
Virtual Adversarial (on bidirectional LSTM) | 5.91%
Adversarial + Virtual Adversarial (on bidirectional LSTM) | 6.02%
Full+Unlabeled+BoW (Maas et al., 2011) | 11.11%
Transductive SVM (Johnson & Zhang, 2015b) | 9.99%
NBSVM-bigrams (Wang & Manning, 2012) | 8.78%
Paragraph Vectors (Le & Mikolov, 2014) | 7.42%
SA-LSTM (Dai & Le, 2015) | 7.24%
One-hot bi-LSTM* (Johnson & Zhang, 2016b) | 5.94%

Table 3: 10 top nearest neighbors to "
1605.07725#24
1605.07725#26
1605.07725
[ "1603.04467" ]
1605.07725#26
Adversarial Training Methods for Semi-Supervised Text Classification
good" and "bad" with the word embeddings trained on each method. We used cosine distance for the metric. "Baseline" means training with embedding dropout and "Random" means training with random perturbation with labeled examples. "Adversarial" and "Virtual Adversarial" mean adversarial training and virtual adversarial training.

Nearest neighbors to "good":
Baseline | great, decent, bad, excellent, Good, fine, nice, interesting, solid, entertaining
Random | great, decent, excellent, nice, Good, bad, fine, interesting, entertaining, solid
Adversarial | decent, great, nice, fine, entertaining, interesting, Good, excellent, solid, cool
Virtual Adversarial | decent, great, nice, fine, entertaining, interesting, Good, cool, enjoyable, excellent

Nearest neighbors to "bad":
Baseline | terrible, awful, horrible, good, Bad, BAD, poor, stupid, Horrible, horrendous
Random | terrible, awful, horrible, good, poor, BAD, Bad, stupid, Horrible, horrendous
Adversarial | terrible, awful, horrible, poor, BAD, stupid, Bad, laughable, lame, Horrible
Virtual Adversarial | terrible, awful, horrible, poor, BAD, stupid, Bad, laughable, lame, Horrible

The much weaker positive word "good"
1605.07725#25
1605.07725#27
1605.07725
[ "1603.04467" ]
1605.07725#27
Adversarial Training Methods for Semi-Supervised Text Classification
also moved from the 3rd nearest neighbor to the 15th after virtual adversarial training.

5.2 TEST PERFORMANCE ON ELEC, RCV1 AND ROTTEN TOMATOES DATASETS

Table 4 shows the test performance on the Elec and RCV1 datasets. Our proposed method improved test performance over the baseline method and achieved state of the art performance on both datasets, even though the previous state of the art method uses a combination of CNN and bidirectional LSTM models. Our unidirectional LSTM model improves on the state of the art, and our method with a bidirectional LSTM further improves results on RCV1. A likely reason why the bidirectional models perform better on RCV1 is that this dataset contains some very long sentences compared with the other datasets, and the bidirectional model can better handle such long sentences through the shorter dependencies available in the reversed order. Table 5 shows test performance on the Rotten Tomatoes dataset. Adversarial training improved over the baseline method and, with both adversarial and virtual adversarial costs, achieved almost the same performance as the current state of the art method. However, the test performance with only virtual adversarial training was worse than the baseline. We speculate that this is because the Rotten Tomatoes dataset has very few labeled sentences and the labeled sentences are very short.
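For context on this speculation: in this family of methods the training objective combines the supervised cost with adversarial and virtual adversarial costs, the latter computed on both labeled and unlabeled data. The sketch below is only a schematic illustration of that balance, with made-up loss values and weight names (lam_adv, lam_vat), not the paper's exact formulation or hyperparameters.

```python
import numpy as np

def total_objective(sup_losses, adv_losses, vat_losses, lam_adv=1.0, lam_vat=1.0):
    """Schematic combined cost: supervised + adversarial terms over labeled data,
    virtual adversarial term over labeled and unlabeled data."""
    return (np.mean(sup_losses) + lam_adv * np.mean(adv_losses)
            + lam_vat * np.mean(vat_losses))

# Toy scenario: a handful of short labeled sentences, many unlabeled ones.
rng = np.random.default_rng(0)
sup = rng.uniform(0.1, 0.4, size=20)        # 20 labeled examples
adv = rng.uniform(0.1, 0.5, size=20)
vat = rng.uniform(0.5, 1.5, size=5000)      # thousands of unlabeled examples
print(total_objective(sup, adv, vat))       # the virtual adversarial term dominates
```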
1605.07725#26
1605.07725#28
1605.07725
[ "1603.04467" ]
1605.07725#28
Adversarial Training Methods for Semi-Supervised Text Classification
Table 4: Test performance on the Elec and RCV1 classification tasks. * indicates using pretrained embeddings of CNN, and † indicates using pretrained embeddings of CNN and bidirectional LSTM.

Method | Elec | RCV1
Baseline | 6.24% | 7.40%
Adversarial | 5.61% | 7.12%
Virtual Adversarial | 5.54% | 7.05%
Adversarial + Virtual Adversarial | 5.40% | 6.97%
Virtual Adversarial (on bidirectional LSTM) | 5.55% | 6.71%
Adversarial + Virtual Adversarial (on bidirectional LSTM) | 5.45% | 6.68%
Transductive SVM (Johnson & Zhang, 2015b) | 16.41% | 10.77%
NBLM (Naive Bayes logistic regression model) (Johnson & Zhang, 2015a) | 8.11% | 13.97%
One-hot CNN* (Johnson & Zhang, 2015b) | 6.27% | 7.71%
One-hot CNN† (Johnson & Zhang, 2016b) | 5.87% | 7.15%
One-hot bi-LSTM† (Johnson & Zhang, 2016b) | 5.55% | 8.52%

In this case, the virtual adversarial loss on unlabeled examples overwhelmed the supervised loss, so the model prioritized being robust to perturbation rather than obtaining the correct answer.

Table 5: Test performance on the Rotten Tomatoes sentiment classification task. * indicates using pretrained embeddings from word2vec Google News, and † indicates using unlabeled data from Amazon reviews.

Method | Test error rate
Baseline | 17.9%
Adversarial | 16.8%
Virtual Adversarial | 19.1%
Adversarial + Virtual Adversarial | 16.6%
NBSVM-bigrams (Wang & Manning, 2012) | 20.6%
CNN* (Kim, 2014) | 18.5%
AdaSent* (Zhao et al., 2015) | 16.9%
SA-LSTM† (Dai & Le, 2015) | 16.7%
1605.07725#27
1605.07725#29
1605.07725
[ "1603.04467" ]
1605.07725#29
Adversarial Training Methods for Semi-Supervised Text Classification
5.3 PERFORMANCE ON THE DBPEDIA PURELY SUPERVISED CLASSIFICATION TASK

Table 6 shows the test performance of each method on DBpedia. "Random perturbation" here is the same method as the "Random perturbation with labeled examples" explained in Section 5.1. Note that DBpedia has only labeled examples, as we explained in Section 4, so this task is purely supervised learning. The baseline method already achieves nearly the current state of the art performance, and our proposed method improves on it.

# 6 RELATED WORKS

Dropout (Srivastava et al., 2014) is a regularization method widely used in many domains including text. Some previous works add random noise to the input and hidden layers during training to prevent overfitting (e.g. Sietsma & Dow, 1991; Poole et al., 2013). However, in our experiments and in previous work (Miyato et al., 2016), training with adversarial and virtual adversarial perturbations outperformed training with random perturbations.

For semi-supervised learning with neural networks, a common approach, especially in the image domain, is to train a generative model whose latent features can be used as features for classification (e.g. Hinton et al., 2006; Maaløe et al., 2016). These models now achieve state of the art
1605.07725#28
1605.07725#30
1605.07725
[ "1603.04467" ]
1605.07725#30
Adversarial Training Methods for Semi-Supervised Text Classification
Table 6: Test performance on the DBpedia topic classification task.

Method | Test error rate
Baseline (without embedding normalization) | 0.87%
Baseline | 0.90%
Random perturbation | 0.85%
Adversarial | 0.79%
Virtual Adversarial | 0.76%
Bag-of-words (Zhang et al., 2015) | 3.57%
Large-CNN (character-level) (Zhang et al., 2015) | 1.73%
SA-LSTM (word-level) (Dai & Le, 2015) | 1.41%
N-grams TFIDF (Zhang et al., 2015) | 1.31%
SA-LSTM (character-level) (Dai & Le, 2015) | 1.19%
Word CNN (Johnson & Zhang, 2016a) | 0.84%

performance on the image domain. However, these methods require numerous additional hyperparameters with generative models, and the conditions under which the generative model will provide good supervised learning performance are poorly understood. By comparison, adversarial and virtual adversarial training requires only one hyperparameter, and has a straightforward interpretation as robust optimization.

Adversarial and virtual adversarial training resemble some semi-supervised or transductive SVM approaches (Joachims, 1999; Chapelle & Zien, 2005; Collobert et al., 2006; Belkin et al., 2006) in that both families of methods push the decision boundary far from training examples (or, in the case of transductive SVMs, test examples). However, adversarial training methods impose margins on the input space, while SVMs impose margins on the feature space defined by the kernel function. This property allows adversarial training methods to learn more flexible functions on the space where the margins are imposed. In our experiments (Tables 2, 4) and in Miyato et al. (2016), adversarial and virtual adversarial training achieve better performance than SVM-based methods.

There have also been semi-supervised approaches applied to text classification with both CNNs and RNNs. These approaches utilize "view embeddings"
1605.07725#29
1605.07725#31
1605.07725
[ "1603.04467" ]
1605.07725#31
Adversarial Training Methods for Semi-Supervised Text Classification
(Johnson & Zhang, 2015b; 2016b), which use the window around a word to generate its embedding. When these are used as a pretrained model for the classification model, they are found to improve generalization performance. These methods are complementary to ours, as we showed that our method improves on a recurrent pretrained language model.

# 7 CONCLUSION

In our experiments, we found that adversarial and virtual adversarial training have good regularization performance in sequence models on text classification tasks. On all datasets, our proposed method exceeded or was on par with the state of the art performance. We also found that adversarial and virtual adversarial training improved not only classification performance but also the quality of word embeddings. These results suggest that our proposed method is promising for other text domain tasks, such as machine translation (Sutskever et al., 2014), learning distributed representations of words or paragraphs (Mikolov et al., 2013; Le & Mikolov, 2014), and question answering. Our approach could also be used for other general sequential tasks, such as video or speech.
1605.07725#30
1605.07725#32
1605.07725
[ "1603.04467" ]
1605.07725#32
Adversarial Training Methods for Semi-Supervised Text Classification
ACKNOWLEDGMENTS

We thank the developers of Tensorflow. We thank the members of the Google Brain team for their warm support and valuable comments. This work is partly supported by NEDO.

# REFERENCES

Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.

Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. The Journal of Machine Learning Research, 7(Nov):2399–2434, 2006.

Yoshua Bengio, Holger Schwenk, Jean-Sébastien Senécal, Fréderic Morin, and Jean-Luc Gauvain. Neural probabilistic language models. In Innovations in Machine Learning, pp. 137–186. Springer, 2006.

Olivier Chapelle and Alexander Zien.
1605.07725#31
1605.07725#33
1605.07725
[ "1603.04467" ]
1605.07725#33
Adversarial Training Methods for Semi-Supervised Text Classification
Semi-supervised classification by low density separation. In AISTATS, 2005.

Ronan Collobert, Fabian Sinz, Jason Weston, and Léon Bottou. Large scale transductive SVMs. Journal of Machine Learning Research, 7(Aug):1687–1712, 2006.

Andrew M Dai and Quoc V Le. Semi-supervised sequence learning. In NIPS, 2015.

Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In AISTATS, 2011.

Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In ICLR, 2015.
1605.07725#32
1605.07725#34
1605.07725
[ "1603.04467" ]
1605.07725#34
Adversarial Training Methods for Semi-Supervised Text Classification
Alex Graves and Jürgen Schmidhuber. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5):602–610, 2005.

Geoffrey E. Hinton, Simon Osindero, and Yee Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18:1527–1554, 2006.

Kevin Jarrett, Koray Kavukcuoglu, Marc'Aurelio Ranzato, and Yann LeCun. What is the best multi-stage architecture for object recognition? In ICCV, 2009.
1605.07725#33
1605.07725#35
1605.07725
[ "1603.04467" ]
1605.07725#35
Adversarial Training Methods for Semi-Supervised Text Classification
Thorsten Joachims. Transductive inference for text classification using support vector machines. In ICML, 1999.

Rie Johnson and Tong Zhang. Effective use of word order for text categorization with convolutional neural networks. In NAACL HLT, 2015a.

Rie Johnson and Tong Zhang. Semi-supervised convolutional neural networks for text categorization via region embedding. In NIPS, 2015b.

Rie Johnson and Tong Zhang. Convolutional neural networks for text categorization: Shallow word-level vs. deep character-level. arXiv preprint arXiv:1609.00718, 2016a.

Rie Johnson and Tong Zhang. Supervised and semi-supervised text categorization using LSTM for region embeddings. In ICML, 2016b.

Yoon Kim. Convolutional neural networks for sentence classification. In EMNLP, 2014.

Diederik Kingma and Jimmy Ba. Adam:
1605.07725#34
1605.07725#36
1605.07725
[ "1603.04467" ]
1605.07725#36
Adversarial Training Methods for Semi-Supervised Text Classification
A method for stochastic optimization. In ICLR, 2015.

Quoc V Le and Tomas Mikolov. Distributed representations of sentences and documents. In ICML, 2014.

Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, Sören Auer, et al. DBpedia: a large-scale, multilingual knowledge base extracted from Wikipedia. Semantic Web, 6(2):167–195, 2015.

David D Lewis, Yiming Yang, Tony G Rose, and Fan Li. RCV1:
1605.07725#35
1605.07725#37
1605.07725
[ "1603.04467" ]
1605.07725#37
Adversarial Training Methods for Semi-Supervised Text Classification
A new benchmark collection for text categorization research. The Journal of Machine Learning Research, 5:361–397, 2004.

Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep generative models. In ICML, 2016.

Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. Learning word vectors for sentiment analysis.
1605.07725#36
1605.07725#38
1605.07725
[ "1603.04467" ]
1605.07725#38
Adversarial Training Methods for Semi-Supervised Text Classification
In ACL: Human Language Technologies-Volume 1, 2011.

Julian McAuley and Jure Leskovec. Hidden factors and hidden topics: understanding rating dimensions with review text. In ACM Conference on Recommender Systems, 2013.

Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Cernocký, and Sanjeev Khudanpur. Recurrent neural network based language model. In INTERSPEECH, 2010.
1605.07725#37
1605.07725#39
1605.07725
[ "1603.04467" ]
1605.07725#39
Adversarial Training Methods for Semi-Supervised Text Classification
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In NIPS, 2013.

Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. Distributional smoothing with virtual adversarial training. In ICLR, 2016.

Vinod Nair and Geoffrey E Hinton. Rectifi
1605.07725#38
1605.07725#40
1605.07725
[ "1603.04467" ]
1605.07725#40
Adversarial Training Methods for Semi-Supervised Text Classification
ed linear units improve restricted Boltzmann machines. In ICML, 2010.

Bo Pang and Lillian Lee. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In ACL, 2005.

Ben Poole, Jascha Sohl-Dickstein, and Surya Ganguli. Analyzing noise in autoencoders and deep networks. In Deep Learning Workshop at NIPS, 2013.

J. Sietsma and R. Dow. Creating artifi
1605.07725#39
1605.07725#41
1605.07725
[ "1603.04467" ]
1605.07725#41
Adversarial Training Methods for Semi-Supervised Text Classification
cial neural networks that generalize. Neural Networks, 4(1), 1991.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1), 2014.

Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In NIPS, 2014.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In ICLR, 2014.

Sida Wang and Christopher D Manning. Baselines and bigrams:
1605.07725#40
1605.07725#42
1605.07725
[ "1603.04467" ]
1605.07725#42
Adversarial Training Methods for Semi-Supervised Text Classification
Simple, good sentiment and topic classification. In ACL: Short Papers, 2012.

David Warde-Farley and Ian Goodfellow. Adversarial perturbations of deep neural networks. In Tamir Hazan, George Papandreou, and Daniel Tarlow (eds.), Perturbations, Optimization, and Statistics, chapter 11. 2016. Book in preparation for MIT Press.

Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In NIPS, 2015.

Han Zhao, Zhengdong Lu, and Pascal Poupart. Self-adaptive hierarchical sentence model. In IJCAI, 2015.
1605.07725#41
1605.07725#43
1605.07725
[ "1603.04467" ]
1605.07725#43
Adversarial Training Methods for Semi-Supervised Text Classification
11
1605.07725#42
1605.07725
[ "1603.04467" ]
1605.07678#0
An Analysis of Deep Neural Network Models for Practical Applications
arXiv:1605.07678v4 [cs.CV] 14 Apr 2017

AN ANALYSIS OF DEEP NEURAL NETWORK MODELS FOR PRACTICAL APPLICATIONS

Alfredo Canziani & Eugenio Culurciello
Weldon School of Biomedical Engineering, Purdue University
{canziani,euge}@purdue.edu

Adam Paszke
Faculty of Mathematics, Informatics and Mechanics, University of Warsaw
[email protected]

# ABSTRACT
1605.07678#1
1605.07678
[ "1602.07261" ]
1605.07678#1
An Analysis of Deep Neural Network Models for Practical Applications
Since the emergence of Deep Neural Networks (DNNs) as a prominent technique in the field of computer vision, the ImageNet classification challenge has played a major role in advancing the state-of-the-art. While accuracy figures have steadily increased, the resource utilisation of winning models has not been properly taken into account. In this work, we present a comprehensive analysis of important metrics in practical applications: accuracy, memory footprint, parameters, operations count, inference time and power consumption.
1605.07678#0
1605.07678#2
1605.07678
[ "1602.07261" ]
1605.07678#2
An Analysis of Deep Neural Network Models for Practical Applications
Key findings are: (1) power consumption is independent of batch size and architecture; (2) accuracy and inference time are in a hyperbolic relationship; (3) energy constraint is an upper bound on the maximum achievable accuracy and model complexity; (4) the number of operations is a reliable estimate of the inference time. We believe our analysis provides a compelling set of information that helps design and engineer efficient DNNs.

# 1 INTRODUCTION
1605.07678#1
1605.07678#3
1605.07678
[ "1602.07261" ]
1605.07678#3
An Analysis of Deep Neural Network Models for Practical Applications
Since the breakthrough in the 2012 ImageNet competition (Russakovsky et al., 2015) achieved by AlexNet (Krizhevsky et al., 2012), the first entry that used a Deep Neural Network (DNN), several other DNNs with increasing complexity have been submitted to the challenge in order to achieve better performance. In the ImageNet classification challenge, the ultimate goal is to obtain the highest accuracy in a multi-class classification problem framework, regardless of the actual inference time.
1605.07678#2
1605.07678#4
1605.07678
[ "1602.07261" ]
1605.07678#4
An Analysis of Deep Neural Network Models for Practical Applications
We believe that this has given rise to several problems. Firstly, it is now normal practice to run several trained instances of a given model over multiple similar instances of each validation image. This practice, also known as model averaging or ensembling of DNNs, dramatically increases the amount of computation required at inference time to achieve the published accuracy. Secondly, model selection is hindered by the fact that different submissions evaluate their (ensembles of) models a different number of times on the validation images, and therefore the reported accuracy is biased by the specific sampling technique (and ensemble size). Thirdly, there is currently no incentive to speed up inference time, which is a key element in practical applications of these models and affects resource utilisation, power consumption, and latency. This article aims to compare state-of-the-art DNN architectures, submitted for the ImageNet challenge over the last 4 years, in terms of computational requirements and accuracy. We compare these architectures on multiple metrics related to resource utilisation in actual deployments: accuracy, memory footprint, parameters, operations count, inference time and power consumption.
1605.07678#3
1605.07678#5
1605.07678
[ "1602.07261" ]
1605.07678#5
An Analysis of Deep Neural Network Models for Practical Applications
The purpose of this paper is to stress the importance of these figures, which are essential hard constraints for the optimisation of these networks in practical deployments and applications.

# 2 METHODS

In order to compare the quality of different models, we collected and analysed the accuracy values reported in the literature. We immediately found that different sampling techniques do not allow for a direct comparison of resource utilisation. For example, central-crop (top-5 validation) errors of a

Figure 1: Top1 vs. network. Single-crop top-1 validation accuracies for top scoring single-model architectures. We introduce with this chart our choice of colour scheme, which will be used throughout this publication to distinguish effectively different architectures and their correspondent authors. Notice that networks of the same group share the same hue, for example ResNet are all variations of pink.

Figure 2: Top1 vs. operations, size ∝ parameters. Top-1 one-crop accuracy versus amount of operations required for a single forward pass. The size of the blobs is proportional to the number of network parameters; a legend is reported in the bottom right corner, spanning from 5×10^6 to 155×10^6 params. Both these fi
1605.07678#4
1605.07678#6
1605.07678
[ "1602.07261" ]
1605.07678#6
An Analysis of Deep Neural Network Models for Practical Applications
gures share the same y-axis, and the grey dots highlight the centre of the blobs.

single run of VGG-16 (Simonyan & Zisserman, 2014) and GoogLeNet (Szegedy et al., 2014) are 8.70% and 10.07% respectively, revealing that VGG-16 performs better than GoogLeNet. When models are run with 10-crop sampling, the errors become 9.33% and 9.15% respectively, so VGG-16, which looked better with a single central crop, now performs worse than GoogLeNet. For this reason, we decided to base our analysis on re-evaluations of top-1 accuracies for all networks with a single central-crop sampling technique (Zagoruyko, 2016).

For inference time and memory usage measurements we have used Torch7 with cuDNN-v5 and CUDA-v8 back-end. All experiments were conducted on a JetPack-2.3 NVIDIA Jetson TX1 board (nVIDIA): an embedded visual computing system with a 64-bit ARM A57 CPU, a 1 T-Flop/s 256-core NVIDIA Maxwell GPU and 4 GB LPDDR4 of shared RAM. We use this resource-limited device to better underline the differences between network architectures, but similar results can be obtained on most recent GPUs, such as the NVIDIA K40 or Titan X, to name a few. Operation counts were obtained using an open-source tool that we developed (Paszke, 2016). For measuring the power consumption, a Keysight 1146B Hall effect current probe has been used with a Keysight MSO-X 2024A 200 MHz digital oscilloscope with a sampling period of 2 s and a 50 kSa/s sample rate. The system was powered by a Keysight E3645A GPIB-controlled DC power supply.
1605.07678#5
1605.07678#7
1605.07678
[ "1602.07261" ]
1605.07678#7
An Analysis of Deep Neural Network Models for Practical Applications
# 3 RESULTS

In this section we report our results and comparisons. We analysed the following DNNs: AlexNet (Krizhevsky et al., 2012), batch normalised AlexNet (Zagoruyko, 2016), batch normalised Network In Network (NIN) (Lin et al., 2013), ENet (Paszke et al., 2016) for ImageNet (Culurciello, 2016), GoogLeNet (Szegedy et al., 2014), VGG-16 and -19 (Simonyan & Zisserman, 2014), ResNet-18, -34, -50, -101 and -152 (He et al., 2015), Inception-v3 (Szegedy et al., 2015) and Inception-v4 (Szegedy et al., 2016), since they obtained the highest performance on the ImageNet (Russakovsky et al., 2015) challenge over these four years.

1 In the original paper this network is called VGG-D, which is the best performing network. Here we prefer to highlight the number of layers utilised, so we will call it VGG-16 in this publication.
2 From a given image multiple patches are extracted: four corners plus central crop and their horizontal
1605.07678#6
1605.07678#8
1605.07678
[ "1602.07261" ]
1605.07678#8
An Analysis of Deep Neural Network Models for Practical Applications
mirrored twins.
3 Accuracy and error rate always sum to 100, therefore in this paper they are used interchangeably.
1605.07678#7
1605.07678#9
1605.07678
[ "1602.07261" ]
1605.07678#9
An Analysis of Deep Neural Network Models for Practical Applications
Figure 3: Inference time vs. batch size. This chart shows inference time across different batch sizes with a logarithmic ordinate and logarithmic abscissa. Missing data points are due to lack of enough system memory required to process larger batches. A speed up of 3× is achieved by AlexNet due to better optimisation of its fully connected layers for larger batches.

Figure 4: Power vs. batch size. Net power consumption (due only to the forward processing of several DNNs) for different batch sizes. The idle power of the TX1 board, with no HDMI screen connected, was 1.30 W on average. The max frequency component of the power supply current was 1.4 kHz, corresponding to a Nyquist sampling frequency of 2.8 kHz.
1605.07678#8
1605.07678#10
1605.07678
[ "1602.07261" ]
1605.07678#10
An Analysis of Deep Neural Network Models for Practical Applications
# 3.1 ACCURACY

Figure 1 shows one-crop accuracies of the most relevant entries submitted to the ImageNet challenge, from AlexNet (Krizhevsky et al., 2012), on the far left, to the best performing Inception-v4 (Szegedy et al., 2016). The newest ResNet and Inception architectures surpass all other architectures by a significant margin of at least 7%.

Figure 2 provides a different, but more informative view of the accuracy values, because it also visualises computational cost and number of network parameters. The first thing that is very apparent is that VGG, even though it is widely used in many applications, is by far the most expensive architecture, both in terms of computational requirements and number of parameters. Its 16- and 19-layer implementations are in fact isolated from all other networks. The other architectures form a steep straight line that seems to start to flatten with the latest incarnations of Inception and ResNet. This might suggest that models are reaching an inflection point on this data set. At this inflection point, the costs, in terms of complexity, start to outweigh gains in accuracy. We will later show that this trend is hyperbolic.

# 3.2 INFERENCE TIME

Figure 3 reports inference time per image for each architecture, as a function of image batch size (from 1 to 64). We notice that VGG processes one image in a fifth of a second, making it a less likely contender for real-time applications on an NVIDIA TX1. AlexNet shows a speed up of roughly 3× going from a batch of 1 to 64 images, due to weak optimisation of its fully connected layers.
1605.07678#9
1605.07678#11
1605.07678
[ "1602.07261" ]
1605.07678#11
An Analysis of Deep Neural Network Models for Practical Applications
This is a very surprising finding that will be further discussed in the next subsection.

# 3.3 POWER

Power measurements are complicated by the high frequency swings in current consumption, which required a high sampling rate current read-out to avoid aliasing. In this work, we used a 200 MHz digital oscilloscope with a current probe, as reported in section 2. Other measuring instruments, such as an AC power strip with 2 Hz sampling rate, or a GPIB controlled DC power supply with 12 Hz sampling rate, did not provide enough bandwidth to properly conduct power measurements.
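The bandwidth argument is simply the Nyquist criterion: to capture the 1.4 kHz component of the supply current reported in Figure 4, the read-out must sample at more than twice that frequency. A small check, with the instrument rates quoted in the text:

```python
def sampling_rate_sufficient(sample_rate_hz, max_component_hz=1.4e3):
    """Nyquist check: the sampling rate must exceed twice the highest
    frequency component of the measured current signal."""
    return sample_rate_hz > 2.0 * max_component_hz

instruments = {
    "AC power strip": 2.0,                  # Hz, mentioned above
    "GPIB DC power supply": 12.0,           # Hz, mentioned above
    "current probe + oscilloscope": 50e3,   # Sa/s used in this work
}
for name, rate in instruments.items():
    print(f"{name}: {'ok' if sampling_rate_sufficient(rate) else 'insufficient bandwidth'}")
```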
1605.07678#10
1605.07678#12
1605.07678
[ "1602.07261" ]
1605.07678#12
An Analysis of Deep Neural Network Models for Practical Applications
In figure 4 we see that the power consumption is mostly independent of the batch size. Low power values for AlexNet (batch of 1) and VGG (batch of 2) are associated with slower forward times per image, as shown in figure 3.

Figure 5: Memory vs. batch size. Maximum system memory utilisation for batches of different sizes. Memory usage shows a knee graph, due to the network model's static memory allocation and the variable memory used by the batch size.

Figure 6: Memory vs. parameters count. Detailed view of static parameters allocation and corresponding memory utilisation. Minimum memory of 200 MB, linear afterwards with slope 1.30.

Figure 7: Operations vs. inference time, size ∝ parameters. Relationship between operations and inference time, for batches of size 1 and 16 (biggest size for which all architectures can still run). Not surprisingly, we notice a linear trend, and therefore operations count represents a good estimation of inference time. Furthermore, we can notice an increase in the slope of the trend for larger batches, which corresponds to shorter inference time due to batch processing optimisation.
1605.07678#11
1605.07678#13
1605.07678
[ "1602.07261" ]
1605.07678#13
An Analysis of Deep Neural Network Models for Practical Applications
# 3.4 MEMORY

We analysed system memory consumption of the TX1 device, which uses shared memory for both CPU and GPU. Figure 5 shows that the maximum system memory usage is initially constant and then rises with the batch size. This is due to the initial memory allocation of the network model, which is the large static component, and the contribution of the memory required while processing the batch, which increases proportionally with the number of images. In figure 6 we can also notice that the initial allocation never drops below 200 MB for networks sized below 100 MB, and that it grows linearly afterwards with respect to the parameters, with a slope of 1.30.

# 3.5 OPERATIONS

Operations count is essential for establishing a rough estimate of inference time and hardware circuit size, in the case of custom implementation of neural network accelerators.
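One plausible reading of the memory fit described above (a floor of roughly 200 MB, then roughly 1.30 MB of allocation per additional MB of parameters) can be wrapped into a quick estimator. The knee value and constants below are read off the figures and are specific to this TX1 setup, so treat them as illustrative.

```python
def estimate_static_memory_mb(param_mb, floor_mb=200.0, knee_mb=100.0, slope=1.30):
    """One plausible reading of the Figure 6 fit: a ~200 MB floor for small models,
    then ~1.30 MB of allocation per extra MB of parameters beyond the knee."""
    if param_mb <= knee_mb:
        return floor_mb
    return floor_mb + slope * (param_mb - knee_mb)

for size in (10, 50, 150, 500):   # parameter sizes in MB, illustrative
    print(size, estimate_static_memory_mb(size))
```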
1605.07678#12
1605.07678#14
1605.07678
[ "1602.07261" ]
1605.07678#14
An Analysis of Deep Neural Network Models for Practical Applications
In figure 7, for a batch of 16 images, there is a linear relationship between operations count and inference time per image. Therefore, at design time, we can pose a constraint on the number of operations to keep processing speed in a usable range for real-time applications or resource-limited deployments.

Figure 8: Operations vs. power consumption, size ∝ parameters. Independence of power and operations is shown by the lack of directionality of the distributions shown in these scatter charts.
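The linear relationship between operation count and inference time can be exploited exactly as described: fit a line once, then predict forward time from G-Ops at design time. The data points below are placeholders, not the paper's measurements; only the fitting procedure is the point.

```python
import numpy as np

# (operations in G-Ops, measured forward time in ms) -- placeholder pairs used
# solely to illustrate how the linear fit of Figure 7 would be obtained.
ops  = np.array([0.7, 1.6, 3.9, 5.7, 11.3, 15.5, 19.6])
time = np.array([5.1, 9.8, 21.0, 30.2, 58.9, 80.1, 101.4])

slope, intercept = np.polyfit(ops, time, deg=1)        # least-squares line
predict_ms = lambda g_ops: slope * g_ops + intercept
print(f"~{slope:.1f} ms per G-Op, offset {intercept:.1f} ms; "
      f"8 G-Ops -> {predict_ms(8):.1f} ms")
```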
1605.07678#13
1605.07678#15
1605.07678
[ "1602.07261" ]
1605.07678#15
An Analysis of Deep Neural Network Models for Practical Applications
Full resources utilisation and lower inference time for the AlexNet architecture are reached with larger batches.

Figure 9: Accuracy vs. inferences per second, size ∝ operations. A non-trivial linear upper bound is shown in these scatter plots, illustrating the relationship between prediction accuracy and throughput of all examined architectures. These are the first charts in which the area of the blobs is proportional to the amount of operations, instead of the parameters count. We can notice that larger blobs are concentrated on the left side of the charts, corresponding to low throughput, i.e. longer inference times. Most of the architectures lie on the linear interface between the grey and white areas. If a network falls in the shaded area, it means it achieves exceptional accuracy or inference speed. The white area indicates a suboptimal region. E.g. both AlexNet architectures improve processing speed as larger batches are adopted, gaining 80 Hz.
1605.07678#14
1605.07678#16
1605.07678
[ "1602.07261" ]
1605.07678#16
An Analysis of Deep Neural Network Models for Practical Applications
# 3.6 OPERATIONS AND POWER

In this section we analyse the relationship between power consumption and the number of operations required by a given model. Figure 8 shows that there is no specific power footprint for different architectures. When full resources utilisation is reached, generally with larger batch sizes, all networks consume roughly an additional 11.8 W, with a standard deviation of 0.7 W. Idle power is 1.30 W. This corresponds to the maximum system power at full utilisation. Therefore, if energy consumption is one of our concerns, for example for battery-powered devices, one can simply choose the slowest architecture which satisfies the application's minimum requirements.

# 3.7 ACCURACY AND THROUGHPUT

We note that there is a non-trivial linear upper bound between accuracy and number of inferences per unit time. Figure 9 illustrates that, for a given frame rate, the maximum accuracy that can be achieved is linearly proportional to the frame rate itself. All networks analysed here come from several publications, and have been independently trained by other research groups. A linear fit of the accuracy shows that all architectures trade accuracy against speed. Moreover, having chosen a specific inference time, one can now come up with the theoretical accuracy upper bound when resources are fully
1605.07678#15
1605.07678#17
1605.07678
[ "1602.07261" ]
1605.07678#17
An Analysis of Deep Neural Network Models for Practical Applications
Figure 10: Accuracy per parameter vs. network. Information density (accuracy per parameter) is an efficiency metric that highlights the capacity of a specific architecture to better utilise its parametric space. Models like VGG and AlexNet are clearly oversized, and do not take full advantage of their potential learning ability. On the far right, ResNet-18, BN-NIN, GoogLeNet and ENet (marked by grey arrows) do a better job at "
1605.07678#16
1605.07678#18
1605.07678
[ "1602.07261" ]
1605.07678#18
An Analysis of Deep Neural Network Models for Practical Applications
squeezing" all their neurons to learn the given task, and are the winners of this section.

utilised, as seen in section 3.6. Since the power consumption is constant, we can even go one step further and obtain an upper bound on accuracy under an energy constraint, which could well be an essential design factor for a network that needs to run on an embedded system. As section 3.1 already gave away, the linear nature of the accuracy vs. throughput relationship translates into a hyperbolic one when the forward inference time is considered instead. Then, given that the operations count is linear with the inference time, we get that the accuracy has a hyperbolic dependency on the amount of computation that a network requires.
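The chain of constraints in sections 3.6-3.7 can be made concrete: the roughly constant full-utilisation power turns an energy budget per image into a minimum throughput, and the linear accuracy-throughput frontier then caps the achievable accuracy. The frontier coefficients below are illustrative placeholders (only the 11.8 W figure echoes section 3.6; the slope and intercept are made up).

```python
def accuracy_upper_bound(fps, a=85.0, b=-0.09):
    """Illustrative linear frontier from Figure 9: maximum top-1 accuracy
    achievable at a given throughput (frames per second)."""
    return a + b * fps

def max_accuracy_under_energy(joules_per_image, power_w=11.8, a=85.0, b=-0.09):
    """Constant full-utilisation power turns an energy budget per image into a
    required minimum throughput, hence an accuracy bound via the same frontier."""
    fps_required = power_w / joules_per_image   # images/s needed to respect the budget
    return accuracy_upper_bound(fps_required, a, b)

print(accuracy_upper_bound(30.0), max_accuracy_under_energy(0.5))
```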
1605.07678#17
1605.07678#19
1605.07678
[ "1602.07261" ]
1605.07678#19
An Analysis of Deep Neural Network Models for Practical Applications
# 3.8 PARAMETERS UTILISATION

DNNs are known to be highly inefficient in utilising their full learning power (number of parameters / degrees of freedom). Prominent work (Han et al., 2015) exploits this flaw to reduce network file size up to 50×, using weight pruning, quantisation and variable-length symbol encoding. It is worth noticing that, by using more efficient architectures to begin with, one may produce even more compact representations. In figure 10 we clearly see that, although VGG has better accuracy than AlexNet (as shown by figure 1), its information density is worse. This means that the additional degrees of freedom introduced in the VGG architecture bring a lesser improvement in terms of accuracy. Moreover, ENet (Paszke et al., 2016),
1605.07678#18
1605.07678#20
1605.07678
[ "1602.07261" ]
1605.07678#20
An Analysis of Deep Neural Network Models for Practical Applications
which we have specifically designed to be highly efficient and which has been adapted and retrained on ImageNet (Culurciello, 2016) for this work, achieves the highest score, showing that 24× fewer parameters are sufficient to provide state-of-the-art results.

# 4 CONCLUSIONS

In this paper we analysed multiple state-of-the-art deep neural networks submitted to the ImageNet challenge, in terms of accuracy, memory footprint, parameters, operations count, inference time and power consumption. Our goal is to provide insights into the design choices that can lead to efficient neural networks for practical applications, and into the optimisation of the often-limited resources in actual deployments, which led us to the creation of ENet, or Efficient-Network, for ImageNet. We show that accuracy and inference time are in a hyperbolic relationship: a small increment in accuracy costs a lot of computational time. We show that the number of operations in a network model can effectively estimate inference time. We show that an energy constraint sets a specific upper bound on the maximum achievable accuracy and model complexity, in terms of operations count. Finally, we show that ENet is the best architecture in terms of parameter space utilisation, squeezing up to 13× more information per parameter used with respect to the reference model AlexNet, and 24× with
1605.07678#19
1605.07678#21
1605.07678
[ "1602.07261" ]
1605.07678#21
An Analysis of Deep Neural Network Models for Practical Applications
respect to VGG-19.

# ACKNOWLEDGMENTS

This paper would not have looked so pretty without the Python Software Foundation, the matplotlib library and the communities of stackoverflow and TeX of StackExchange, which I ought to thank. This work is partly supported by the Office of Naval Research (ONR) grants N00014-12-1-0167, N00014-15-1-2791 and MURI N00014-10-1-0278. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the TX1, Titan X, K40 GPUs used for this research.

# REFERENCES

Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cuDNN:
1605.07678#20
1605.07678#22
1605.07678
[ "1602.07261" ]
1605.07678#22
An Analysis of Deep Neural Network Models for Practical Applications
Efficient primitives for deep learning. arXiv preprint arXiv:1410.0759, 2014.

Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, number EPFL-CONF-192376, 2011.

Eugenio Culurciello. Training ENet. https://culurciello.github.io/tech/2016/06/20/training-enet.html, 2016.
1605.07678#21
1605.07678#23
1605.07678
[ "1602.07261" ]
1605.07678#23
An Analysis of Deep Neural Network Models for Practical Applications
Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.

Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.

nVIDIA. Jetson TX1 module. http://www.nvidia.com/object/jetson-tx1-module.html.

Adam Paszke. torch-opCounter. https://github.com/apaszke/torch-opCounter, 2016.

Adam Paszke, Abhishek Chaurasia, Sangpil Kim, and Eugenio Culurciello.
1605.07678#22
1605.07678#24
1605.07678
[ "1602.07261" ]
1605.07678#24
An Analysis of Deep Neural Network Models for Practical Applications
ENet: A deep neural network architecture for real-time semantic segmentation. arXiv preprint arXiv:1606.02147, 2016.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.

Christian Szegedy, Sergey Ioffe, and Vincent Vanhoucke. Inception-v4, Inception-ResNet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261, 2016.

Sergey Zagoruyko. imagenet-validation.torch. https://github.com/szagoruyko/imagenet-validation.torch, 2016.
1605.07678#23
1605.07678#25
1605.07678
[ "1602.07261" ]
1605.07678#25
An Analysis of Deep Neural Network Models for Practical Applications
7
1605.07678#24
1605.07678
[ "1602.07261" ]
1605.07427#0
Hierarchical Memory Networks
arXiv:1605.07427v1 [stat.ML] 24 May 2016

# Hierarchical Memory Networks

Sarath Chandar*1, Sungjin Ahn1, Hugo Larochelle2,4, Pascal Vincent1,4, Gerald Tesauro3, Yoshua Bengio1,4
1 Université de Montréal, Canada. 2 Twitter Cortex, USA. 3 IBM Watson Research Center, USA. 4 CIFAR, Canada.

# Abstract
1605.07427#1
1605.07427
[ "1507.05910" ]
1605.07427#1
Hierarchical Memory Networks
Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which require the network to read from extremely large memories. On the other hand, it is well known that hard attention mechanisms based on reinforcement learning are challenging to train successfully. In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks. The memory is organized in a hierarchical structure such that reading from it is done with less computation than soft attention over a flat memory, while also being easier to train than hard attention over a flat memory. Specifically, we propose to incorporate Maximum Inner Product Search (MIPS) in the training and inference procedures for our hierarchical memory network. We explore the use of various state-of-the-art approximate MIPS techniques and report results on SimpleQuestions, a challenging large scale factoid question answering task.
1605.07427#0
1605.07427#2
1605.07427
[ "1507.05910" ]
1605.07427#2
Hierarchical Memory Networks
# 1 Introduction

Until recently, traditional machine learning approaches for challenging tasks such as image captioning, object detection, or machine translation consisted of complex pipelines of algorithms, each being separately tuned for better performance. With the recent success of neural networks and deep learning research, it has now become possible to train a single model end-to-end, using backpropagation. Such end-to-end systems often outperform traditional approaches, since the entire model is directly optimized with respect to the final task at hand. However, simple encode-decode style neural networks often underperform on knowledge-based reasoning tasks like question answering or dialog systems. Indeed, in such cases it is nearly impossible for regular neural networks to store all the necessary knowledge in their parameters.

Neural networks with memory [1, 2] can deal with knowledge bases by having an external memory component which can be used to explicitly store knowledge. The memory is accessed by reader and writer functions, which are both made differentiable so that the entire architecture (neural network, reader, writer and memory components) can be trained end-to-end using backpropagation. Memory-based architectures can also be considered as generalizations of RNNs and LSTMs, where the memory is analogous to recurrent hidden states. However they are much richer in structure and can handle very long-term dependencies because once a vector (i.e., a memory) is stored, it is copied
1605.07427#1
1605.07427#3
1605.07427
[ "1507.05910" ]
1605.07427#3
Hierarchical Memory Networks
*Corresponding author: [email protected]

from time step to time step and can thus stay there for a very long time (and gradients correspondingly flow back through time unhampered). There exist several variants of neural networks with a memory component: Memory Networks [2], Neural Turing Machines (NTM) [1], and Dynamic Memory Networks (DMN) [3]. They all share five major components: memory, input module, reader, writer, and output module.

Memory: The memory is an array of cells, each capable of storing a vector. The memory is often initialized with external data (e.g. a database of facts), by filling in its cells with pre-trained vector representations of that data.

Input module: The input module computes a representation of the input that can be used by other modules.
1605.07427#2
1605.07427#4
1605.07427
[ "1507.05910" ]
1605.07427#4
Hierarchical Memory Networks
Writer: The writer takes the input representation and updates the memory based on it. The writer can be as simple as filling the slots in the memory with input vectors in a sequential way (as often done in memory networks). If the memory is bounded, instead of sequential writing, the writer has to decide where to write and when to rewrite cells (as often done in NTMs).

Reader: Given an input and the current state of the memory, the reader retrieves content from the memory, which will then be used by an output module. This often requires comparing the input's representation, or a function of the recurrent state, with memory cells using some scoring function such as a dot product.

Output module: Given the content retrieved by the reader, the output module generates a prediction, which often takes the form of a conditional distribution over multiple labels for the output.

For the rest of the paper, we will use the name memory network to describe any model which has any form of these five components. We would like to highlight that all the components except the memory are learnable. Depending on the application, any of these components can also be fixed.
1605.07427#3
1605.07427#5
1605.07427
[ "1507.05910" ]
1605.07427#5
Hierarchical Memory Networks
In this paper, we will focus on the situation where a network does not write and only reads from the memory, and on the application of memory networks to large-scale tasks. Specifically, we focus on large-scale factoid question answering. For this problem, given a large set of facts and a natural language question, the goal of the system is to answer the question by retrieving the supporting fact for that question, from which the answer can be derived. The application of memory networks to this task has been studied in [4]. However, [4] depended on keyword-based heuristics to filter the facts down to a smaller set that is manageable for training. Heuristics are invariably dataset dependent, and we are interested in a more general solution which can be used whatever the structure of the facts.

One can design soft attention retrieval mechanisms, where a convex combination of all the cells is retrieved, or hard attention retrieval mechanisms, where one or a few cells from the memory are retrieved. Soft attention is achieved by using a softmax over the memory, which makes the reader differentiable and hence allows learning by gradient descent. Hard attention is achieved by using methods like REINFORCE [5], which provides a noisy gradient estimate when discrete stochastic decisions are made by a model.

Both soft attention and hard attention have limitations. As the size of the memory grows, soft attention using softmax weighting is not scalable: it is computationally very expensive, since its complexity is linear in the size of the memory. Also, at initialization, gradients are dispersed so much that it can reduce the effectiveness of gradient descent. These problems can be alleviated by a hard attention mechanism, for which the training method of choice is REINFORCE. However, REINFORCE can be brittle due to its high variance, and existing variance reduction techniques are complex. Thus, it is rarely used in memory networks (even in cases of a small memory).

In this paper, we propose a new memory selection mechanism based on Maximum Inner Product Search (MIPS) which is both scalable and easy to train. This can be considered as a hybrid of soft and hard attention mechanisms. The key idea is to structure the memory in a hierarchical way such that it is easy to perform MIPS, hence the name Hierarchical Memory Network (HMN). HMNs are scalable at both training and inference time.
1605.07427#4
1605.07427#6
1605.07427
[ "1507.05910" ]
1605.07427#6
Hierarchical Memory Networks
The main contributions of the paper are as follows:

• We explore hierarchical memory networks, where the memory is organized in a hierarchical fashion, which allows the reader to efficiently access only a subset of the memory.
• While there are several ways to decide which subset to access, we propose to pose memory access as a maximum inner product search (MIPS) problem.
1605.07427#5
1605.07427#7
1605.07427
[ "1507.05910" ]
1605.07427#7
Hierarchical Memory Networks
# 2 Hierarchical Memory Networks In this section, we describe the proposed Hierarchical Memory Network (HMN). In this paper, HMNs only differ from regular memory networks in two of its components: the memory and the reader. Memory: Instead of a ï¬ at array of cells for the memory structure, HMNs leverages a hierarchical memory structure. Memory cells are organized into groups and the groups can further be organized into higher level groups. The choice for the memory structure is tightly coupled with the choice of reader, which is essential for fast memory access. We consider three classes of approaches for the memoryâ s structure: hashing-based approaches, tree-based approaches, and clustering-based approaches.
1605.07427#6
1605.07427#8
1605.07427
[ "1507.05910" ]
1605.07427#8
Hierarchical Memory Networks
This is explained in detail in the next section. Reader: The reader in the HMN is different from the readers in ï¬ at memory networks. Flat memory- based readers use either soft attention over the entire memory or hard attention that retrieves a single cell. While these mechanisms might work with small memories, with HMNs we are more interested in achieving scalability towards very large memories. So instead, HMN readers use soft attention only over a selected subset of the memory. Selecting memory subsets is guided by a maximum inner product search algorithm, which can exploit the hierarchical structure of the organized memory to retrieve the most relevant facts in sub-linear time. The MIPS-based reader is explained in more detail in the next section. In HMNs, the reader is thus trained to create MIPS queries such that it can retrieve a sufï¬
1605.07427#7
1605.07427#9
1605.07427
[ "1507.05910" ]
1605.07427#9
Hierarchical Memory Networks
cient set of facts. While most of the standard applications of MIPS [6â 8] so far have focused on settings where both query vector and database (memory) vectors are precomputed and ï¬ xed, memory readers in HMNs are learning to do MIPS by updating the input representation such that the result of MIPS retrieval contains the correct fact(s). # 3 Memory Reader with K-MIPS attention In this section, we describe how the HMN memory reader uses Maximum Inner Product Search (MIPS) during learning and inference.
1605.07427#8
1605.07427#10
1605.07427
[ "1507.05910" ]
1605.07427#10
Hierarchical Memory Networks
We begin with a formal deï¬ nition of K-MIPS. Given a set of points X = {x1, . . . , xn} and a query vector q, our goal is to ï¬ nd argmax(2), q! x; qd) where the argmax(K) returns the indices of the top-K maximum values. In the case of HMNs, X corresponds to the memory and q corresponds to the vector computed by the input module. A simple but inefï¬ cient solution for K-MIPS involves a linear search over the cells in memory by performing the dot product of q with all the memory cells. While this will return the exact result for K-MIPS, it is too costly to perform when we deal with a large-scale memory. However, in many practical applications, it is often sufï¬ cient to have an approximate result for K-MIPS, trading speed-up at the cost of the accuracy. There exist several approximate K-MIPS solutions in the literature [8, 9, 7, 10]. All the approximate K-MIPS solutions add a form of hierarchical structure to the memory and visit only a subset of the memory cells to ï¬ nd the maximum inner product for a given query. Hashing-based approaches [8â 10] hash cells into multiple bins, and given a query they search for K-MIPS cell vectors only in bins that are close to the bin associated with the query. Tree-based approaches [6, 7] create search trees with cells in the leaves of the tree. Given a query, a path in the tree is followed and MIPS is performed only for the leaf for the chosen path. Clustering-based approaches [11] cluster
1605.07427#9
1605.07427#11
1605.07427
[ "1507.05910" ]
1605.07427#11
Hierarchical Memory Networks
cells into multiple clusters (or a hierarchy of clusters) and, given a query, perform MIPS only on the centroids of the top few clusters. We refer the reader to [11] for an extensive comparison of various state-of-the-art approaches for approximate K-MIPS.

Our proposal is to exploit this rich approximate K-MIPS literature to achieve scalable training and inference in HMNs. Instead of filtering the memory with heuristics, we propose to organize the memory based on approximate K-MIPS algorithms and then train the reader to learn to perform MIPS. Specifically, consider the following softmax over the memory, which the reader has to perform at every reading step to retrieve a set of relevant candidates:

R_out = softmax(h(q) M^T)    (2)

where h(q) ∈ R^d is the representation of the query and M ∈ R^{N×d} is the memory, with N being the total number of cells in the memory.
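For concreteness, the following is a minimal NumPy sketch (not the authors' code) of this flat read operation and of the exact K-MIPS selection of Equation 1; the names `memory`, `query` and `k` are illustrative.

```python
import numpy as np

def softmax(scores):
    # Numerically stable softmax over a 1-D array of scores.
    z = scores - scores.max()
    e = np.exp(z)
    return e / e.sum()

def flat_read(query, memory):
    # Equation 2: soft attention over every memory cell (O(N) per read).
    scores = memory @ query          # inner product with all N cells
    return softmax(scores)           # attention weights R_out over the memory

def exact_k_mips(query, memory, k):
    # Equation 1: indices of the K largest inner products (linear scan).
    scores = memory @ query
    top_k = np.argpartition(-scores, k - 1)[:k]   # unordered top-K
    return top_k[np.argsort(-scores[top_k])]      # sorted by decreasing score

# Toy usage with a random memory of N = 10000 cells of dimension d = 64.
rng = np.random.default_rng(0)
memory = rng.normal(size=(10000, 64))
query = rng.normal(size=64)
print(exact_k_mips(query, memory, k=10))
```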
1605.07427#10
1605.07427#12
1605.07427
[ "1507.05910" ]
1605.07427#12
Hierarchical Memory Networks
We propose to replace this softmax with softmax^(K), which is defined as follows:

C = argmax^(K) h(q) M^T    (3)

R_out = softmax^(K)(h(q) M^T) = softmax(h(q) M[C]^T)    (4)

where C is the set of indices of the top-K MIP candidate cells and M[C] is the sub-matrix of M whose rows are indexed by C. One advantage of using softmax^(K) is that it naturally focuses on the cells that would receive the strongest gradients during learning; in a full softmax, the gradient is instead dispersed over the large number of cells, many of which contribute only a small amount. As our experiments will show, this results in slower training.

One problematic situation when learning with softmax^(K) arises in the initial stages of training, when the K-MIPS reader does not include the correct fact among its candidates. To avoid this issue, we always add the correct candidate to the top-K candidates retrieved by the K-MIPS algorithm, effectively performing a fully supervised form of learning.

During training, the reader is updated by backpropagation from the output module, through the subset of memory cells. Additionally, the log-likelihood of the correct fact computed using the K-softmax is also maximized. This second supervision helps the reader learn to modify the query such that the maximum inner product of the query with respect to the memory yields the correct supporting fact in the top-K candidate set.

Until now, we have described the exact K-MIPS-based learning framework, which still requires a linear look-up over all memory cells and would be prohibitive for large-scale memories. In such scenarios, we can replace the exact K-MIPS in the training procedure with approximate K-MIPS. This is achieved by deploying a suitable hierarchical memory structure. The same approximate K-MIPS-based reader can be used during the inference stage as well. Of course, approximate K-MIPS algorithms might not return the exact MIPS candidates and will likely hurt performance, but with the benefit of scalability.

While the memory representation is fixed in this paper, updating the memory along with the query representation should improve the likelihood of choosing the correct fact. However, updating the memory would reduce the precision of the approximate K-MIPS algorithms, since all of them assume that the vectors in the memory are static.
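A minimal sketch of this softmax^(K) training objective is given below. It is an assumed reconstruction, not the paper's code: it uses plain NumPy, shows only the forward loss computation, and forces the correct fact into the candidate set as described above.

```python
import numpy as np

def softmax(scores):
    z = scores - scores.max()
    e = np.exp(z)
    return e / e.sum()

def k_softmax_nll(query_vec, memory, correct_idx, k=10):
    # Equations 3-4: restrict the softmax to the top-K MIPS candidates.
    scores = memory @ query_vec
    candidates = np.argpartition(-scores, k - 1)[:k]
    # Early in training the correct fact may be missing from the top-K,
    # so it is always added to the candidate set (fully supervised setup).
    if correct_idx not in candidates:
        candidates = np.append(candidates, correct_idx)
    probs = softmax(scores[candidates])           # softmax^(K) over candidates only
    target_pos = int(np.where(candidates == correct_idx)[0][0])
    return -np.log(probs[target_pos])             # NLL of the correct fact

# Gradients with respect to the query representation would in practice be
# obtained with an autodiff framework; this sketch only shows the forward pass.
```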
1605.07427#11
1605.07427#13
1605.07427
[ "1507.05910" ]
1605.07427#13
Hierarchical Memory Networks
Designing efficient dynamic K-MIPS algorithms should improve the performance of HMNs even further, a challenge that we hope to address in future work.

# 3.1 Reader with Clustering-based approximate K-MIPS

Clustering-based approximate K-MIPS was proposed in [11] and has been shown to outperform various other state-of-the-art data-dependent and data-independent approximate K-MIPS approaches on inference tasks. As we will show in the experiments section, clustering-based MIPS also performs better when used for training HMNs. Hence, we focus our presentation on the clustering-based approach and propose changes that we found helpful for learning HMNs.

Following most other approximate K-MIPS algorithms, [11] converts MIPS into a Maximum Cosine Similarity Search (MCSS) problem:

argmax^(K)_{i ∈ X} (q^T x_i) / (||q|| ||x_i||) = argmax^(K)_{i ∈ X} (q^T x_i) / ||x_i||    (5)
1605.07427#12
1605.07427#14
1605.07427
[ "1507.05910" ]
1605.07427#14
Hierarchical Memory Networks
When all the data vectors x_i have the same norm, MCSS is equivalent to MIPS. However, it is often restrictive to impose this additional constraint. Instead, [11] appends additional dimensions to both query and data vectors to convert MIPS into MCSS. In HMN terminology, this corresponds to adding a few more dimensions to the memory cells and input representations.

The algorithm introduces two hyper-parameters, U < 1 and m ∈ ℕ*. The first step is to scale all the vectors in the memory by the same factor, such that max_i ||x_i||_2 = U. We then apply two mappings, P and Q, on the memory cells and on the input vector, respectively. These two mappings simply concatenate m new components to the vectors and make the norms of the data points all roughly the same [9]. The mappings are defined as follows:

P(x) = [x, 1/2 − ||x||_2^2, 1/2 − ||x||_2^4, . . . , 1/2 − ||x||_2^{2^m}]    (6)

Q(x) = [x, 0, 0, . . . , 0]    (7)

We thus have the following approximation of MIPS by MCSS for any query vector q:

argmax^(K)_i q^T x_i ≈ argmax^(K)_i (Q(q)^T P(x_i)) / (||Q(q)||_2 ||P(x_i)||_2)    (8)

Once MIPS is converted to MCSS, we can use spherical K-means [12] or its hierarchical version to approximate and speed up the cosine similarity search. Once the memory is clustered, every read operation requires only K dot-products, where K is the number of cluster centroids.

Since this is an approximation, it is error-prone. As we are using this approximation during the learning process, it introduces some bias in the gradients, which can affect the overall performance of the HMN. To alleviate this bias, we propose three simple strategies.
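Before listing those strategies, here is a small NumPy sketch of the scaling and augmentation step just described (Equations 6-8). It is an illustration under the stated assumptions, not the authors' implementation; the function names and the default values of `U` and `m` are our own.

```python
import numpy as np

def augment_memory(X, U=0.83, m=3):
    # Scale all memory vectors so that the largest L2 norm equals U (< 1),
    # then append the m extra components of the P mapping (Equation 6).
    X = X * (U / np.linalg.norm(X, axis=1).max())
    norms = np.linalg.norm(X, axis=1)
    extras = [0.5 - norms ** (2 ** (i + 1)) for i in range(m)]   # powers 2, 4, ..., 2^m
    return np.concatenate([X, np.stack(extras, axis=1)], axis=1)

def augment_query(q, m=3):
    # Q mapping (Equation 7): pad the query with m zeros.
    return np.concatenate([q, np.zeros(m)])

# After augmentation, cosine similarity between Q(q) and P(x_i) approximates the
# ranking of inner products q^T x_i (Equation 8), so spherical k-means over the
# augmented memory can be used to shortlist candidate cells for the reader.
```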
1605.07427#13
1605.07427#15
1605.07427
[ "1507.05910" ]
1605.07427#15
Hierarchical Memory Networks
• Instead of using only the top-K candidates for a single read query, we also add the top-K candidates retrieved for every other read query in the mini-batch. This serves two purposes. First, we can do efficient matrix multiplications by leveraging GPUs, since all the K-softmaxes in a mini-batch are then over the same set of elements. Second, this also helps to decrease the bias introduced by the approximation error.

• For every read access, instead of only using the top few clusters that have the maximum product with the read query, we also sample some clusters from the rest, based on a probability distribution log-proportional to the dot product with the cluster centroids. This also decreases the bias.
1605.07427#14
1605.07427#16
1605.07427
[ "1507.05910" ]
1605.07427#16
Hierarchical Memory Networks
• We can also sample random blocks of memory and add them to the top-K candidates.

We empirically investigate the effect of these variations in Section 5.5.

# 4 Related Work

Memory networks were introduced in [2] and have so far been applied to comprehension-based question answering [13, 14], large-scale question answering [4] and dialogue systems [15]. While [2] considered supervised memory networks, in which the correct supporting fact is given during the training stage, [14] introduced semi-supervised memory networks that can learn the supporting fact by themselves. [3, 16] introduced Dynamic Memory Networks (DMNs), which can be considered as memory networks with two types of memory: a regular large memory and an episodic memory. Another related class of model is the Neural Turing Machine [1], which uses softmax-based soft attention. Later, [17] extended the NTM to hard attention using reinforcement learning. [15, 4] alleviate the scalability problem of soft attention by adding an initial keyword-based filtering stage, which reduces the number of facts being considered. Our work generalizes this by using MIPS for filtering. This is desirable because MIPS can be applied to any modality of data, even when there is no overlap between the words in a question and the words in the facts.

The softmax arises in various situations, and most relevant to this work are scaling methods for large-vocabulary neural language modeling. In neural language modeling, the final layer is a softmax distribution over the next word, and several approaches exist to achieve scalability. [18] proposes a hierarchical softmax based on a prior clustering of the words into a binary, or more generally n-ary, tree that serves as a fixed structure for the learning process of the model. The complexity of training
1605.07427#15
1605.07427#17
1605.07427
[ "1507.05910" ]
1605.07427#17
Hierarchical Memory Networks
is reduced from O(n) to O(log n). Due to its clustering and tree structure, it resembles the clustering-based MIPS techniques we explore in this paper. However, the approaches differ at a fundamental level. Hierarchical softmax defines the probability of a leaf node as the product of all the probabilities computed by the intermediate softmaxes on the way to that leaf node. By contrast, an approximate MIPS search imposes no such constraining structure on the probabilistic model, and is better thought of as efficiently searching for the top winners of what amounts to a large, ordinary, flat softmax.

Other methods, such as Noise Contrastive Estimation [19] and Negative Sampling [20], avoid an expensive normalization constant by sampling negative examples from some marginal distribution. By contrast, our approach approximates the softmax by explicitly including among its negative samples the candidates that would likely have a large softmax value. [21] introduces an importance sampling approach that considers all the words in a mini-batch as the candidate set. This, in general, might also not include the MIPS candidates with the highest softmax values.

[22] is the only work we know of that proposes to use MIPS during learning. It proposes hashing-based MIPS to sort the hidden layer activations and reduce the computation in every layer. However, only a small-scale application was considered, and data-independent methods like hashing will likely suffer as dimensionality increases.
1605.07427#16
1605.07427#18
1605.07427
[ "1507.05910" ]
1605.07427#18
Hierarchical Memory Networks
# 5 Experiments

In this section, we report experiments on factoid question answering using hierarchical memory networks. Specifically, we use the SimpleQuestions dataset [4]. The aim of these experiments is not to achieve state-of-the-art results on this dataset. Rather, we aim to propose and analyze various approaches to make memory networks more scalable and to explore the resulting trade-offs between speed and accuracy.

# 5.1 Dataset

We use SimpleQuestions [4], a large-scale factoid question answering dataset. SimpleQuestions consists of 108,442 natural language questions, each paired with a corresponding fact from Freebase. Each fact is a triple (subject, relation, object), and the answer to the question is always the object.
1605.07427#17
1605.07427#19
1605.07427
[ "1507.05910" ]
1605.07427#19
Hierarchical Memory Networks
The dataset is divided into training (75,910), validation (10,845), and test (21,687) sets. Unlike [4], who additionally considered FB2M (10M facts) or FB5M (12M facts) with keyword-based heuristics for filtering most of the facts for each question, we only use SimpleQuestions, with no keyword-based heuristics. This allows us to do a direct comparison with the full softmax approach in a reasonable amount of time. Moreover, we would like to highlight that, for this dataset, keyword-based filtering is a very efficient heuristic, since all questions have an appropriate source entity with a matching word.
1605.07427#18
1605.07427#20
1605.07427
[ "1507.05910" ]
1605.07427#20
Hierarchical Memory Networks
Nevertheless, our goal is to design a general-purpose architecture without such strong assumptions on the nature of the data.

# 5.2 Model

Let V_q be the vocabulary of all words in the natural language questions, and let W_q be a |V_q| × m matrix in which each row is an m-dimensional embedding of a word in the question vocabulary. This matrix is initialized with random values and learned during training. Given a question, we represent it with a bag-of-words representation by summing the vector representations of its words. Let q = {w_i}_{i=1}^{p} be the words of the question; then

h(q) = Σ_{i=1}^{p} W_q[w_i]

Then, to find the relevant fact in the memory M, we call the K-MIPS-based reader module with h(q) as the query. This uses Equations 3 and 4 to compute the output of the reader, R_out. The reader is trained by minimizing the Negative Log-Likelihood (NLL) of the correct fact:

J_θ = Σ_{i=1}^{N} −log(R_out[f_i])

where f_i is the index of the correct fact in the memory M. We are fixing
1605.07427#19
1605.07427#21
1605.07427
[ "1507.05910" ]
1605.07427#21
Hierarchical Memory Networks
the memory embeddings to the TransE [23] embeddings and learning only the question embeddings. This model is simpler than the one reported in [4], so that it is easy to analyze the effect of the various memory-reading strategies.

# 5.3 Training Details

We trained the model with the Adam optimizer [24], with a fixed learning rate of 0.001, and used mini-batches of size 128. We used 200-dimensional embeddings for the TransE entities, yielding 600-dimensional embeddings for facts, obtained by concatenating the embeddings of the subject, relation and object. We also experimented with summing the entity embeddings in the triple instead of concatenating them, but we found that this made it difficult for the model to differentiate facts.
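For illustration, a compact PyTorch-style sketch of the model and training step described above follows. It is a plausible reconstruction rather than the authors' code: the fixed fact memory, the candidate-restricted softmax and the hyper-parameters (Adam, learning rate 0.001, batch size 128, 600-dimensional facts) follow the text, while names such as `HMNReader` are invented for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HMNReader(nn.Module):
    """Bag-of-words question encoder; the fact memory (TransE embeddings) is frozen."""
    def __init__(self, vocab_size, fact_embeddings, dim=600):
        super().__init__()
        # Question word embeddings W_q are the only learned parameters.
        self.word_emb = nn.Embedding(vocab_size, dim, padding_idx=0)
        # Memory M: 600-d fact embeddings (subject | relation | object), kept fixed.
        self.register_buffer("memory", fact_embeddings)        # shape (N, dim)

    def forward(self, question_word_ids, candidate_ids):
        # h(q): sum of word embeddings, one query per example in the batch.
        h = self.word_emb(question_word_ids).sum(dim=1)         # (B, dim)
        cand = self.memory[candidate_ids]                       # (B, K, dim)
        scores = torch.bmm(cand, h.unsqueeze(2)).squeeze(2)     # (B, K) inner products
        return F.log_softmax(scores, dim=1)                     # K-softmax over candidates

# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  (mini-batches of 128)
# `candidate_ids` would come from (approximate) K-MIPS, with the correct fact's
# index always inserted; `target_pos` is its position in that candidate list.
def train_step(model, optimizer, question_word_ids, candidate_ids, target_pos):
    optimizer.zero_grad()
    log_probs = model(question_word_ids, candidate_ids)
    loss = F.nll_loss(log_probs, target_pos)
    loss.backward()
    optimizer.step()
    return loss.item()
```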
1605.07427#20
1605.07427#22
1605.07427
[ "1507.05910" ]
1605.07427#22
Hierarchical Memory Networks
The only parameters learned by the HMN model are the question word embeddings. The entity distribution in SimpleQuestions is extremely sparse; hence, following [4], we also add artificial questions for all the facts for which we do not have natural language questions. Unlike [4], we do not add any additional tasks such as paraphrase detection, mainly in order to study the effect of the reader in isolation. We stopped training each model when its validation accuracy had consistently decreased for 3 epochs.

# 5.4 Exact K-MIPS improves accuracy

In this section, we compare the performance of the full soft attention reader and of exact K-MIPS attention readers. Our goal is to verify that K-MIPS attention is in fact a valid and useful attention mechanism and to see how it fares when compared to full soft attention. For K-MIPS attention, we tried K ∈
1605.07427#21
1605.07427#23
1605.07427
[ "1507.05910" ]
1605.07427#23
Hierarchical Memory Networks
{10, 50, 100, 1000}. We would like to emphasize that, at training time, along with the K candidates for a particular question, we also add the K candidates for each other question in the mini-batch, so the exact size of the softmax layer is higher than K during training.

In Table 1, we report the test performance of memory networks using the soft attention reader and the K-MIPS attention readers, along with the average softmax size during training. From the table, it is clear that the K-MIPS attention readers improve the performance of the network compared to the soft attention reader; in fact, the smaller the value of K, the better the performance. This result suggests that it is better to use a K-MIPS layer instead of a softmax layer whenever possible. It is interesting to see that the convergence of the model is not slowed down by this change in the softmax computation (as shown in Figure 1).

| Model | Test Acc. | Avg. Softmax Size |
|---|---|---|
| Full-softmax | 59.5 | 108,442 |
| 10-MIPS | 62.2 | 1,290 |
| 50-MIPS | 61.2 | 6,180 |
| 100-MIPS | 60.6 | 11,928 |
| 1000-MIPS | 59.6 | 70,941 |
| Clustering | 51.5 | 20,006 |
| PCA-Tree | 32.4 | 21,108 |
| WTA-Hash | 40.2 | 20,008 |

Table 1: Accuracy on the SimpleQuestions test set and average size of the softmax used. The 10-MIPS reader has high performance while using only a small fraction of the memory.

[Figure 1 near here.] Figure 1: Validation curves for the various models. Convergence is not slowed down by the K-softmax.
1605.07427#22
1605.07427#24
1605.07427
[ "1507.05910" ]
1605.07427#24
Hierarchical Memory Networks
This experiment confirms the usefulness of K-MIPS attention. However, exact K-MIPS has the same complexity as a full softmax. Hence, to scale up training, we need more efficient forms of K-MIPS attention, which is the focus of the next experiment.

# 5.5 Approximate K-MIPS based learning

As mentioned previously, designing faster algorithms for K-MIPS is an active area of research. [11] compared several state-of-the-art data-dependent and data-independent methods for faster approximate K-MIPS and found that clustering-based MIPS performs significantly better than the other approaches. However, the focus of that comparison was on performance during the inference
1605.07427#23
1605.07427#25
1605.07427
[ "1507.05910" ]
1605.07427#25
Hierarchical Memory Networks
stage. In HMNs, K-MIPS must be used at both the training and inference stages. To verify whether the same trend holds during the learning stage as well, we compared three different approaches:

Clustering: This was explained in detail in Section 3.

WTA-Hash: Winner Takes All hashing [25] is a hashing-based K-MIPS algorithm which also converts MIPS to MCSS by augmenting the vectors with additional dimensions. This method uses n hash functions, and each hash function applies p different random permutations to the vector.
1605.07427#24
1605.07427#26
1605.07427
[ "1507.05910" ]
1605.07427#26
Hierarchical Memory Networks
Then the prefix constituted by the first k elements of each permuted vector is used to construct the hash for the vector.

PCA-Tree: PCA-Tree [7] is the state-of-the-art tree-based method, which converts MIPS to NNS by vector augmentation. It uses the principal components of the data to construct a balanced binary tree with the data residing in the leaves.

For a fair comparison, we varied the hyper-parameters of each algorithm in such a way that the average speedup is approximately the same. Table 1 shows the performance of all three methods, compared to a full softmax. From the table, it is clear that the clustering-based method performs significantly better than the other two methods. However, all three fall short of the performance of the full softmax.

As a next experiment, we analyze the various strategies proposed in Section 3.1 to reduce the approximation bias of clustering-based K-MIPS:

Top-K: This strategy picks the vectors in the top K clusters as candidates.

Sample-K: This strategy samples K clusters, without replacement, according to a probability distribution based on the dot product of the query with the cluster centroids. When combined with the Top-K strategy, we ignore the clusters already selected by the Top-K strategy when sampling.
1605.07427#25
1605.07427#27
1605.07427
[ "1507.05910" ]
1605.07427#27
Hierarchical Memory Networks
Rand-block: This strategy divides the memory into several blocks and uniformly samples a random block as the candidate set.

We experimented with 1000 clusters and 2000 clusters. While comparing the various training strategies, we made sure that the effective speedup was approximately the same: memory access to facts per query is approximately 20,000 for all models, yielding a 5x speedup.

| Top-K | Sample-K | rand-block | 1000 clusters: Test Acc. | epochs | 2000 clusters: Test Acc. | epochs |
|---|---|---|---|---|---|---|
| Yes | No | No | 50.2 | 16 | 51.5 | 22 |
| No | Yes | No | 52.5 | 68 | 52.8 | 63 |
| Yes | Yes | No | 52.8 | 31 | 53.1 | 26 |
| Yes | No | Yes | 51.8 | 32 | 52.3 | 26 |
| Yes | Yes | Yes | 52.5 | 38 | 52.7 | 19 |

Table 2: Accuracy on the SimpleQuestions test set and number of epochs to convergence.

Results are given in Table 2. We observe that the best approach is to combine the Top-K and Sample-K strategies, with Rand-block not being beneficial.
1605.07427#26
1605.07427#28
1605.07427
[ "1507.05910" ]
1605.07427#28
Hierarchical Memory Networks
Interestingly, the worst performances correspond to the cases where the Sample-K strategy is ignored.

# 6 Conclusion

In this paper, we proposed a hierarchical memory network that exploits K-MIPS for its attention-based reader. Unlike soft attention readers, the K-MIPS attention reader scales easily to larger memories. This is achieved by organizing the memory hierarchically. Experiments on the SimpleQuestions dataset demonstrate that exact K-MIPS attention is better than soft attention. However, existing state-of-the-art approximate K-MIPS techniques provide a speedup at the cost of some accuracy. Future research will investigate the design of efficient dynamic K-MIPS algorithms, in which the memory can be updated during training; this should reduce the approximation bias and hence improve overall performance.
1605.07427#27
1605.07427#29
1605.07427
[ "1507.05910" ]
1605.07427#29
Hierarchical Memory Networks
# References

[1] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.

[2] Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In Proceedings of the International Conference on Learning Representations (ICLR 2015), 2015.

[3] Ankit Kumar et al. Ask me anything: Dynamic memory networks for natural language processing. CoRR, abs/1506.07285, 2015.

[4] Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075, 2015.

[5] Ronald J. Williams.
1605.07427#28
1605.07427#30
1605.07427
[ "1507.05910" ]
1605.07427#30
Hierarchical Memory Networks
Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229-256, 1992.

[6] Parikshit Ram and Alexander G. Gray. Maximum inner-product search using cone trees. In KDD '12, pages 931-939, 2012.

[7] Yoram Bachrach et al. Speeding up the Xbox recommender system using a Euclidean transformation for inner-product spaces. In RecSys '14, pages 257-264, 2014.

[8] Anshumali Shrivastava and Ping Li. Asymmetric LSH (ALSH) for sublinear time maximum inner product search (MIPS). In Advances in Neural Information Processing Systems 27, pages 2321-2329, 2014.

[9] Anshumali Shrivastava and Ping Li. Improved asymmetric locality sensitive hashing (ALSH) for maximum inner product search (MIPS). In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI), 2015.
1605.07427#29
1605.07427#31
1605.07427
[ "1507.05910" ]
1605.07427#31
Hierarchical Memory Networks
[10] Behnam Neyshabur and Nathan Srebro. On symmetric and asymmetric LSHs for inner product search. In Proceedings of the 31st International Conference on Machine Learning, 2015.

[11] Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, and Yoshua Bengio. Clustering is efficient for approximate maximum inner product search. arXiv preprint arXiv:1507.05910, 2015.
1605.07427#30
1605.07427#32
1605.07427
[ "1507.05910" ]
1605.07427#32
Hierarchical Memory Networks
[12] Shi Zhong. Efficient online spherical k-means clustering. In Proceedings of the 2005 IEEE International Joint Conference on Neural Networks (IJCNN '05), volume 5, pages 3180-3185. IEEE, 2005.

[13] Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. Towards AI-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015.

[14] Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. arXiv preprint arXiv:1503.08895, 2015.

[15] Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander Miller, Arthur Szlam, and Jason Weston. Evaluating prerequisite qualities for learning end-to-end dialog systems. CoRR, abs/1511.06931, 2015.

[16] Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. CoRR, abs/1603.01417, 2016.

[17] Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural Turing machines. CoRR, abs/1505.00521, 2015.

[18] Frederic Morin and Yoshua Bengio. Hierarchical probabilistic neural network language model.
1605.07427#31
1605.07427#33
1605.07427
[ "1507.05910" ]
1605.07427#33
Hierarchical Memory Networks
In Robert G. Cowell and Zoubin Ghahramani, editors, Proceedings of AISTATS, pages 246-252, 2005.

[19] Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030, 2014.

[20] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. In International Conference on Learning Representations, Workshop Track, 2013.

[21] Sébastien Jean, KyungHyun Cho, Roland Memisevic, and Yoshua Bengio. On using very large target vocabulary for neural machine translation. In Proceedings of ACL 2015, pages 1-10, 2015.

[22] Ryan Spring and Anshumali Shrivastava.
1605.07427#32
1605.07427#34
1605.07427
[ "1507.05910" ]
1605.07427#34
Hierarchical Memory Networks
Scalable and sustainable deep learning via randomized hashing. CoRR, abs/1602.08194, 2016.

[23] Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In Advances in NIPS, pages 2787-2795, 2013.

[24] Diederik P. Kingma and Jimmy Ba. Adam:
1605.07427#33
1605.07427#35
1605.07427
[ "1507.05910" ]
1605.07427#35
Hierarchical Memory Networks
A method for stochastic optimization. CoRR, abs/1412.6980, 2014.

[25] Sudheendra Vijayanarasimhan, Jon Shlens, Rajat Monga, and Jay Yagnik. Deep networks with large output spaces. arXiv preprint arXiv:1412.7479, 2014.
1605.07427#34
1605.07427
[ "1507.05910" ]
1605.07683#0
Learning End-to-End Goal-Oriented Dialog
arXiv:1605.07683v4 [cs.CL] 30 Mar 2017

Published as a conference paper at ICLR 2017

# LEARNING END-TO-END GOAL-ORIENTED DIALOG

Antoine Bordes, Y-Lan Boureau & Jason Weston
Facebook AI Research
New York, USA
{abordes, ylan, jase}@fb.com

# ABSTRACT

Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols in order to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations.
1605.07683#1
1605.07683
[ "1512.05742" ]
1605.07683#1
Learning End-to-End Goal-Oriented Dialog
We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.

# 1 INTRODUCTION

The most useful applications of dialog systems, such as digital personal assistants or bots, are currently goal-oriented and transactional: the system needs to understand a user request and complete a related task with a clear goal within a limited number of dialog turns. The workhorse of traditional dialog systems is slot-filling (Lemon et al., 2006; Wang and Lemon, 2013; Young et al., 2013), which predefines the structure of a dialog state as a set of slots to be filled during the dialog. For a restaurant reservation system, such slots can be the location, price range or type of cuisine of a restaurant. Slot-filling
1605.07683#0
1605.07683#2
1605.07683
[ "1512.05742" ]
1605.07683#2
Learning End-to-End Goal-Oriented Dialog
has proven reliable but is inherently hard to scale to new domains: it is impossible to manually encode all the features and slots that users might refer to in a conversation.

End-to-end dialog systems, usually based on neural networks (Shang et al., 2015; Vinyals and Le, 2015; Sordoni et al., 2015; Serban et al., 2015a; Dodge et al., 2016), escape such limitations: all their components are directly trained on past dialogs, with no assumptions about the domain or dialog state structure, making it easy to automatically scale up to new domains. They have shown promising performance in non-goal-oriented chit-chat settings, where they were trained to predict the next utterance in social media and forum threads (Ritter et al., 2011; Wang et al., 2013; Lowe et al., 2015) or movie conversations (Banchs, 2012). But the performance achieved on chit-chat may not necessarily carry over to goal-oriented conversations. As illustrated in Figure 1 in a restaurant reservation scenario, conducting goal-oriented dialog requires skills that go beyond language modeling, e.g., asking questions to clearly define a user request, querying Knowledge Bases (KBs), interpreting results from queries to display options to users, or completing a transaction. This makes it hard to ascertain how well end-to-end dialog models would do, especially since evaluating chit-chat performance in itself is not straightforward (Liu et al., 2016). In particular, it is unclear if end-to-end models are in a position to replace traditional dialog methods in a goal-directed setting: can end-to-end dialog models be competitive with traditional methods even in the well-defined narrow-domain tasks where the latter excel? If not, where do they fall short?

This paper aims to make it easier to address these questions by proposing an open resource to test end-to-end dialog systems in a way that 1) favors reproducibility and comparisons, and 2) is lightweight and easy to use. We aim to break down a goal-directed objective into several subtasks to test some crucial capabilities that dialog systems should have (and hence provide error analysis by design).
1605.07683#1
1605.07683#3
1605.07683
[ "1512.05742" ]
1605.07683#3
Learning End-to-End Goal-Oriented Dialog
[Figure 1, first part: an example restaurant-reservation dialog between a user and a bot, with panels illustrating Task 1 (Issuing API calls) and Task 2 (Updating API calls).]
1605.07683#2
1605.07683#4
1605.07683
[ "1512.05742" ]
1605.07683#4
Learning End-to-End Goal-Oriented Dialog
[Figure 1, continued: panels illustrating Task 3 (Displaying options, using the KB facts returned by the API call), Task 4 (Providing extra information) and Task 5 (Conducting full dialogs).]

Figure 1: Goal-oriented dialog tasks. A user (in green) chats with a bot (in blue) to book a table at a restaurant. Models must predict bot utterances and API calls (in dark red). Task 1 tests the capacity of interpreting a request and asking the right questions to issue an API call. Task 2 checks the ability to modify an API call. Tasks 3 and 4 test the capacity of using outputs from an API call (in light red) to propose options (sorted by rating) and to provide extra information. Task 5 combines everything.

In the spirit of the bAbI tasks conceived as question answering testbeds (Weston et al., 2015b), we designed a set of five tasks within the goal-oriented context of restaurant reservation. Grounded in an underlying KB of restaurants and their properties (location, type of cuisine, etc.), these tasks cover several dialog stages and test whether models can learn various abilities such as performing dialog management, querying KBs, interpreting the output of such queries to continue the conversation, or dealing with new entities not appearing in dialogs from the training set. In addition to showing how the set of tasks we propose can be used to test the goal-directed capabilities of an end-to-end dialog system, we also propose results on two additional datasets extracted from real interactions with users, to confirm
1605.07683#3
1605.07683#5
1605.07683
[ "1512.05742" ]
1605.07683#5
Learning End-to-End Goal-Oriented Dialog
that the pattern of results observed in our tasks is indeed a good proxy for what would be observed on real data, with the added benefit of better reproducibility and interpretability. The goal here is explicitly not to improve the state of the art in the narrow domain of restaurant booking, but to take a narrow domain where traditional handcrafted dialog systems are known to perform well, and use it to gauge the strengths and weaknesses of current end-to-end systems with no domain knowledge.

Solving our tasks requires manipulating both natural language and symbols from a KB. Evaluation uses two metrics, per-response and per-dialog accuracies, the latter tracking completion of the actual goal. Figure 1 depicts the tasks and Section 3 details them. Section 4 compares multiple methods on these tasks. As an end-to-end neural model, we tested Memory Networks (Weston et al., 2015a), an attention-based architecture that has proven competitive for non-goal-oriented dialog (Dodge et al., 2016). Our experiments in Section 5 show that Memory Networks can be trained to perform non-trivial operations such as issuing API calls to KBs and manipulating entities unseen in training.
1605.07683#4
1605.07683#6
1605.07683
[ "1512.05742" ]
1605.07683#6
Learning End-to-End Goal-Oriented Dialog
Table 1: Data used in this paper. Tasks 1-5 were generated using our simulator and share the same KB. Task 6 was converted from the 2nd Dialog State Tracking Challenge (Henderson et al., 2014a). Concierge is made of chats extracted from a real online concierge service. (*) Tasks 1-5 have two test sets, one using the vocabulary of the training set and the other using out-of-vocabulary words.

| | T1 | T2 | T3 | T4 | T5 | T6 | Concierge |
|---|---|---|---|---|---|---|---|
| Average number of utterances | 12 | 17 | 43 | 15 | 55 | 54 | 8 |
| - user utterances | 5 | 7 | 7 | 4 | 13 | 6 | 4 |
| - bot utterances | 7 | 10 | 10 | 4 | 18 | 8 | 4 |
| - outputs from API calls | 0 | 0 | 23 | 7 | 24 | 40 | 0 |
| Vocabulary size | 3,747 | 3,747 | 3,747 | 3,747 | 3,747 | 1,229 | 8,629 |
| Candidate set size | 4,212 | 4,212 | 4,212 | 4,212 | 4,212 | 2,406 | 11,482 |
| Training dialogs | 1,000 | 1,000 | 1,000 | 1,000 | 1,000 | 1,618 | 3,249 |
| Validation dialogs | 1,000 | 1,000 | 1,000 | 1,000 | 1,000 | 500 | 403 |
| Test dialogs | 1,000(*) | 1,000(*) | 1,000(*) | 1,000(*) | 1,000(*) | 1,117 | 402 |

(Tasks 1-5 share the same data source, so their dataset-level statistics are identical.)

We confirm our findings on real human-machine dialogs
1605.07683#5
1605.07683#7
1605.07683
[ "1512.05742" ]
1605.07683#7
Learning End-to-End Goal-Oriented Dialog
Our simulation, used to generate goal-oriented datasets, can be seen as an equivalent of the user simulators used to train POMDP (Young et al., 2013; Pietquin and Hastie, 2013), but for training end-to-end systems. Serban et al. (2015b) list available corpora for training dialog systems. Unfortunately, no good resources exist to train and test end-to-end models in goal-oriented scenarios. Goal-oriented datasets are usually designed to train or test dialog state tracker components (Henderson et al., 2014a) and are hence of limited scale and not suitable for end-to-end learning (annotated at the state level and noisy). However, we do convert the Dialog State Tracking Challenge data into our framework. Some datasets are not open source, and require a particular license agreement or the participation to a challenge (e.g., the end-to-end task of DSTC4 (Kim et al., 2016)) or are proprietary (e.g., Chen et al. (2016)). Datasets are often based on interactions between users and existing systems (or ensemble of systems) like DSTC datasets, SFCore (GaÅ¡ic et al., 2014) or ATIS (Dahl et al., 1994). This creates noise and makes it harder to interpret the errors of a model. Lastly, resources designed to connect dialog systems to users, in particular in the context of reinforcement learning, are usually built around a crowdsourcing setting such as Amazon Mechanical Turk, e.g., (Hixon et al., 2015; Wen et al., 2015; Su et al., 2015a;b). While this has clear advantages, it prevents reproducibility and consistent comparisons of methods in the exact same setting. The closest resource to ours might be the set of tasks described in (Dodge et al., 2016), since some of them can be seen as goal-oriented. However, those are question answering tasks rather than dialog, i.e. the bot only responds with answers, never questions, which does not reï¬ ect full conversation. # 3 GOAL-ORIENTED DIALOG TASKS All our tasks involve a restaurant reservation system, where the goal is to book a table at a restaurant.
1605.07683#6
1605.07683#8
1605.07683
[ "1512.05742" ]
1605.07683#8
Learning End-to-End Goal-Oriented Dialog
The ï¬ rst ï¬ ve tasks are generated by a simulation, the last one uses real human-bot dialogs. The data for all tasks is available at http://fb.ai/babi. We also give results on a proprietary dataset extracted from an online restaurant reservation concierge service with anonymized users. 3 Published as a conference paper at ICLR 2017 3.1 RESTAURANT RESERVATION SIMULATION The simulation is based on an underlying KB, whose facts contain the restaurants that can be booked and their properties. Each restaurant is deï¬ ned by a type of cuisine (10 choices, e.g., French, Thai), a location (10 choices, e.g., London, Tokyo), a price range (cheap, moderate or expensive) and a rating (from 1 to 8). For simplicity, we assume that each restaurant only has availability for a single party size (2, 4, 6 or 8 people). Each restaurant also has an address and a phone number listed in the KB. The KB can be queried using API calls, which return the list of facts related to the corresponding restaurants.
1605.07683#7
1605.07683#9
1605.07683
[ "1512.05742" ]
1605.07683#9
Learning End-to-End Goal-Oriented Dialog
Each query must contain four ï¬ elds: a location, a type of cuisine, a price range and a party size. It can return facts concerning one, several or no restaurant (depending on the party size). Using the KB, conversations are generated in the format shown in Figure 1. Each example is a dialog comprising utterances from a user and a bot, as well as API calls and the resulting facts. Dialogs are generated after creating a user request by sampling an entry for each of the four required ï¬ elds: e.g. the request in Figure 1 is [cuisine:
1605.07683#8
1605.07683#10
1605.07683
[ "1512.05742" ]
1605.07683#10
Learning End-to-End Goal-Oriented Dialog
British, location: London, party size: six, price range: expensive]. We use natural language patterns to create user and bot utterances. There are 43 patterns for the user and 20 for the bot (the user can use up to 4 ways to say something, while the bot always uses the same). Those patterns are combined with the KB entities to form thousands of different utterances. 3.1.1 TASK DEFINITIONS We now detail each task. Tasks 1 and 2 test dialog management to see if end-to-end systems can learn to implicitly track dialog state (never given explicitly), whereas Task 3 and 4 check if they can learn to use KB facts in a dialog setting. Task 3 also requires to learn to sort. Task 5 combines all tasks. Task 1: Issuing API calls A user request implicitly deï¬ nes a query that can contain from 0 to 4 of the required ï¬ elds (sampled uniformly; in Figure 1, it contains 3). The bot must ask questions for ï¬ lling the missing ï¬ elds and eventually generate the correct corresponding API call.
1605.07683#9
1605.07683#11
1605.07683
[ "1512.05742" ]
1605.07683#11
Learning End-to-End Goal-Oriented Dialog
The bot asks for information in a deterministic order, making prediction possible. Task 2: Updating API calls Starting by issuing an API call as in Task 1, users then ask to update their requests between 1 and 4 times (sampled uniformly). The order in which ï¬ elds are updated is random. The bot must ask users if they are done with their updates and issue the updated API call. Task 3: Displaying options Given a user request, we query the KB using the corresponding API call and add the facts resulting from the call to the dialog history. The bot must propose options to users by listing the restaurant names sorted by their corresponding rating (from higher to lower) until users accept. For each option, users have a 25% chance of accepting. If they do, the bot must stop displaying options, otherwise propose the next one. Users always accept the option if this is the last remaining one. We only keep examples with API calls retrieving at least 3 options.
1605.07683#10
1605.07683#12
1605.07683
[ "1512.05742" ]
1605.07683#12
Learning End-to-End Goal-Oriented Dialog
Task 4: Providing extra information Given a user request, we sample a restaurant and start the dialog as if users had agreed to book a table there. We add all KB facts corresponding to it to the dialog. Users then ask for the phone number of the restaurant, its address or both, with proportions 25%, 25% and 50% respectively. The bot must learn to use the KB facts correctly to answer. Task 5: Conducting full dialogs We combine Tasks 1-4 to generate full dialogs just as in Figure 1. Unlike in Task 3, we keep examples if API calls return at least 1 option instead of 3. 3.1.2 DATASETS We want to test how well models handle entities appearing in the KB but not in the dialog training sets. We split types of cuisine and locations in half, and create two KBs, one with all facts about restaurants within the ï¬ rst halves and one with the rest. This yields two KBs of 4,200 facts and 600 restaurants each (5 types of cuisine à 5 locations à 3 price ranges à 8 ratings) that only share price ranges, ratings and party sizes, but have disjoint sets of restaurants, locations, types of cuisine, phones and addresses.
1605.07683#11
1605.07683#13
1605.07683
[ "1512.05742" ]
1605.07683#13
Learning End-to-End Goal-Oriented Dialog
We use one of the KBs to generate the standard training, validation and test dialogs, and use the other KB only to generate test dialogs, termed Out-Of-Vocabulary (OOV) test sets. For training, systems have access to the training examples and both KBs. We then evaluate on both test sets, plain and OOV. Beyond the intrinsic difï¬ culty of each task, the challenge on the OOV test 4 Published as a conference paper at ICLR 2017
1605.07683#12
1605.07683#14
1605.07683
[ "1512.05742" ]
1605.07683#14
Learning End-to-End Goal-Oriented Dialog
sets is for models to generalize to new entities (restaurants, locations and cuisine types) unseen in any training dialog â something natively impossible for embedding methods. Ideally, models could, for instance, leverage information coming from the entities of the same type seen during training. We generate ï¬ ve datasets, one per task deï¬ ned in 3.1.1. Table 1 gives their statistics. Training sets are relatively small (1,000 examples) to create realistic learning conditions. The dialogs from the training and test sets are different, never being based on the same user requests. Thus, we test if models can generalize to new combinations of ï¬
1605.07683#13
1605.07683#15
1605.07683
[ "1512.05742" ]
1605.07683#15
Learning End-to-End Goal-Oriented Dialog
elds. Dialog systems are evaluated in a ranking, not a generation, setting: at each turn of the dialog, we test whether they can predict bot utterances and API calls by selecting a candidate, not by generating it.1 Candidates are ranked from a set of all bot utterances and API calls appearing in training, validation and test sets (plain and OOV) for all tasks combined. 3.2 DIALOG STATE TRACKING CHALLENGE Since our tasks rely on synthetically generated language for the user, we supplement our dataset with real human-bot dialogs. We use data from DSTC2 (Henderson et al., 2014a), that is also in the restaurant booking domain. Unlike our tasks, its user requests only require 3 ï¬ elds: type of cuisine (91 choices), location (5 choices) and price range (3 choices).
1605.07683#14
1605.07683#16
1605.07683
[ "1512.05742" ]
1605.07683#16
Learning End-to-End Goal-Oriented Dialog
The dataset was originally designed for dialog state tracking hence every dialog turn is labeled with a state (a user intent + slots) to be predicted. As our goal is to evaluate end-to-end training, we did not use that, but instead converted the data into the format of our 5 tasks and included it in the dataset as Task 6. We used the provided speech transcriptions to create the user and bot utterances, and given the dialog states we created the API calls to the KB and their outputs which we added to the dialogs. We also added ratings to the restaurants returned by the API calls, so that the options proposed by the bots can be consistently predicted (by using the highest rating). We did use the original test set but use a slightly different training/validation split. Our evaluation differs from the challenge (we do not predict the dialog state), so we cannot compare with the results from (Henderson et al., 2014a). This dataset has similar statistics to our Task 5 (see Table 1) but is harder. The dialogs are noisier and the bots made mistakes due to speech recognition errors or misinterpretations and also do not always have a deterministic behavior (the order in which they can ask for information varies).
1605.07683#15
1605.07683#17
1605.07683
[ "1512.05742" ]
1605.07683#17
Learning End-to-End Goal-Oriented Dialog
3.3 ONLINE CONCIERGE SERVICE Tasks 1-6 are, at least partially, artiï¬ cial. This provides perfect control over their design (at least for Tasks 1-5), but no guarantee that good performance would carry over from such synthetic to more realistic conditions. To quantify this, we also evaluate the models from Section 4 on data extracted from a real online concierge service performing restaurant booking: users make requests through a text-based chat interface that are handled by human operators who can make API calls.
1605.07683#16
1605.07683#18
1605.07683
[ "1512.05742" ]