id | title | content | prechunk_id | postchunk_id | arxiv_id | references
---|---|---|---|---|---|---|
1607.06450#22 | Layer Normalization | rst 100 epochs. It highlights the speedup benefit of applying layer normalization: the layer normalized DRAW converges almost twice as fast as the baseline model. After 200 epochs, the baseline model converges to a variational log likelihood of 82.36 nats on the test data and the layer normalization model obtains 82.09 nats. # 6.5 Handwriting sequence generation The previous experiments mostly examine RNNs on NLP tasks whose lengths are in the range of 10 to 40. To show the effectiveness of layer normalization on longer sequences, we performed handwriting generation tasks using the IAM Online Handwriting Database [Liwicki and Bunke, 2005]. IAM-OnDB consists of handwritten lines collected from 221 different writers. Given the input character string, the goal is to predict a sequence of x and y pen co-ordinates of the corresponding handwriting line on the whiteboard. There are, in total, 12179 handwriting line sequences. The input string is typically more than 25 characters and the average handwriting line has a length of around 700. | 1607.06450#21 | 1607.06450#23 | 1607.06450 | [
"1605.02688"
] |
1607.06450#23 | Layer Normalization | We used the same model architecture as in Section (5.2) of Graves [2013]. The model architecture consists of three hidden layers of 400 LSTM cells, which produce 20 bivariate Gaussian mixture components at the output layer, and a size 3 input layer. The character sequence was encoded with one-hot vectors, and hence the window vectors were size 57. A mixture of 10 Gaussian functions was used for the window parameters, requiring a size 30 parameter vector. The total number of weights was increased to approximately 3.7M. The model is trained using mini-batches of size 8 and the Adam [Kingma and Ba, 2014] optimizer. The combination of small mini-batch size and very long sequences makes it important to have very stable hidden dynamics. Figure 5 shows that layer normalization converges to a comparable log likelihood as the baseline model but is much faster. | 1607.06450#22 | 1607.06450#24 | 1607.06450 | [
"1605.02688"
] |
1607.06450#24 | Layer Normalization | Figure 6: Permutation invariant MNIST 784-1000-1000-10 model negative log likelihood and test error with layer normalization and batch normalization. (Left) The models are trained with batch-size of 128. (Right) The models are trained with batch-size of 4. # 6.6 Permutation invariant MNIST In addition to RNNs, we investigated layer normalization in feed-forward networks. We show how layer normalization compares with batch normalization on the well-studied permutation invariant MNIST classifi | 1607.06450#23 | 1607.06450#25 | 1607.06450 | [
"1605.02688"
] |
1607.06450#25 | Layer Normalization | cation problem. From the previous analysis, layer normalization is invariant to input re-scaling, which is desirable for the internal hidden layers. But this is unnecessary for the logit outputs, where the prediction confidence is determined by the scale of the logits. We therefore apply layer normalization only to the fully-connected hidden layers, excluding the last softmax layer. All the models were trained using 55000 training data points and the Adam [Kingma and Ba, 2014] optimizer. For the smaller batch size, the variance term for batch normalization is computed using the unbiased estimator. The experimental results in Figure 6 highlight that layer normalization is robust to the batch size and exhibits faster training convergence compared to batch normalization applied to all layers. # 6.7 Convolutional Networks We have also experimented with convolutional neural networks. In our preliminary experiments, we observed that layer normalization offers a speedup over the baseline model without normalization, but batch normalization outperforms the other methods. With fully connected layers, all the hidden units in a layer tend to make similar contributions to the final prediction, so re-centering and re-scaling the summed inputs to a layer works well. However, the assumption of similar contributions no longer holds for convolutional neural networks. The large number of hidden units whose receptive fields lie near the boundary of the image are rarely turned on and thus have very different statistics from the rest of the hidden units within the same layer. We think further research is needed to make layer normalization work well in ConvNets. | 1607.06450#24 | 1607.06450#26 | 1607.06450 | [
"1605.02688"
] |
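The setup above can be made concrete with a minimal NumPy sketch (not the authors' code): layer normalization with a learned gain and bias is applied to each fully-connected hidden layer, while the final softmax layer is left un-normalized so the logit scale can carry the prediction confidence. The 784-1000-1000-10 sizes follow the figure caption; the initialization and epsilon are illustrative assumptions.

```python
import numpy as np

def layer_norm(z, gain, bias, eps=1e-5):
    # Normalize across the hidden units of each example, then rescale.
    mu = z.mean(axis=-1, keepdims=True)
    sigma = np.sqrt(((z - mu) ** 2).mean(axis=-1, keepdims=True) + eps)
    return (z - mu) / sigma * gain + bias

def mlp_forward(x, params):
    # Two layer-normalized ReLU hidden layers (784-1000-1000-10).
    h = x
    for W, b, gain, bias in params["hidden"]:
        h = np.maximum(layer_norm(h @ W + b, gain, bias), 0.0)
    # The softmax output layer is NOT layer normalized.
    logits = h @ params["out_W"] + params["out_b"]
    logits -= logits.max(axis=-1, keepdims=True)
    p = np.exp(logits)
    return p / p.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
sizes = [784, 1000, 1000]
params = {
    "hidden": [
        (rng.normal(0, 0.01, (m, n)), np.zeros(n), np.ones(n), np.zeros(n))
        for m, n in zip(sizes[:-1], sizes[1:])
    ],
    "out_W": rng.normal(0, 0.01, (1000, 10)),
    "out_b": np.zeros(10),
}
probs = mlp_forward(rng.normal(size=(4, 784)), params)  # works for any batch size
```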
1607.06450#26 | Layer Normalization | # 7 Conclusion In this paper, we introduced layer normalization to speed up the training of neural networks. We provided a theoretical analysis that compared the invariance properties of layer normalization with batch normalization and weight normalization. We showed that layer normalization is invariant to per training-case feature shifting and scaling. Empirically, we showed that recurrent neural networks benefit the most from the proposed method, especially for long sequences and small mini-batches. # Acknowledgments This research was funded by grants from NSERC, CFI, and Google. | 1607.06450#25 | 1607.06450#27 | 1607.06450 | [
"1605.02688"
] |
1607.06450#27 | Layer Normalization | 10 # References Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classiï¬ cation with deep convolutional neural networks. In NIPS, 2012. Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE, 2012. Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew Senior, Paul Tucker, Ke Yang, Quoc V Le, et al. Large scale distributed deep networks. In NIPS, 2012. Sergey Ioffe and Christian Szegedy. | 1607.06450#26 | 1607.06450#28 | 1607.06450 | [
"1605.02688"
] |
1607.06450#28 | Layer Normalization | Batch normalization: Accelerating deep network training by reducing internal covariate shift. ICML, 2015. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. Advances in neural information processing systems, pages 3104â 3112, 2014. In C´esar Laurent, Gabriel Pereyra, Phil´emon Brakel, Ying Zhang, and Yoshua Bengio. Batch normalized recurrent neural networks. arXiv preprint arXiv:1510.01378, 2015. Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, et al. Deep speech 2: End-to-end speech recognition in english and mandarin. arXiv preprint arXiv:1512.02595, 2015. | 1607.06450#27 | 1607.06450#29 | 1607.06450 | [
"1605.02688"
] |
1607.06450#29 | Layer Normalization | Tim Cooijmans, Nicolas Ballas, C´esar Laurent, and Aaron Courville. Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016. Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization to accelerate train- ing of deep neural networks. arXiv preprint arXiv:1602.07868, 2016. Behnam Neyshabur, Ruslan R Salakhutdinov, and Nati Srebro. Path-sgd: Path-normalized optimization in deep neural networks. In Advances in Neural Information Processing Systems, pages 2413â 2421, 2015. | 1607.06450#28 | 1607.06450#30 | 1607.06450 | [
"1605.02688"
] |
1607.06450#30 | Layer Normalization | Shun-Ichi Amari. Natural gradient works efï¬ ciently in learning. Neural computation, 1998. Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. Order-embeddings of images and language. ICLR, 2016. The Theano Development Team, Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Fr´ed´eric Bastien, Justin Bayer, Anatoly Belikov, et al. Theano: A python framework for fast computation of mathematical expressions. arXiv preprint arXiv:1605.02688, 2016. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. Microsoft coco: Common objects in context. ECCV, 2014. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. | 1607.06450#29 | 1607.06450#31 | 1607.06450 | [
"1605.02688"
] |
1607.06450#31 | Layer Normalization | Learning phrase representations using rnn encoder-decoder for statistical machine translation. EMNLP, 2014. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. ICLR, 2015. Ryan Kiros, Ruslan Salakhutdinov, and Richard S Zemel. Unifying visual-semantic embeddings with multi- modal neural language models. arXiv preprint arXiv:1411.2539, 2014. | 1607.06450#30 | 1607.06450#32 | 1607.06450 | [
"1605.02688"
] |
1607.06450#32 | Layer Normalization | D. Kingma and J. L. Ba. Adam: a method for stochastic optimization. ICLR, 2014. arXiv:1412.6980. Liwei Wang, Yin Li, and Svetlana Lazebnik. Learning deep structure-preserving image-text embeddings. CVPR, 2016. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In NIPS, 2015. Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Skip-thought vectors. In NIPS, 2015. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. | 1607.06450#31 | 1607.06450#33 | 1607.06450 | [
"1605.02688"
] |
1607.06450#33 | Layer Normalization | Efï¬ cient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013. Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In ICCV, 2015. Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. Semeval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. SemEval-2014, 2014. | 1607.06450#32 | 1607.06450#34 | 1607.06450 | [
"1605.02688"
] |
1607.06450#34 | Layer Normalization | 11 Bo Pang and Lillian Lee. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In ACL, pages 115â 124, 2005. Minqing Hu and Bing Liu. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, 2004. Bo Pang and Lillian Lee. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. | 1607.06450#33 | 1607.06450#35 | 1607.06450 | [
"1605.02688"
] |
1607.06450#35 | Layer Normalization | In ACL, 2004. Janyce Wiebe, Theresa Wilson, and Claire Cardie. Annotating expressions of opinions and emotions in lan- guage. Language resources and evaluation, 2005. K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. DRAW: a recurrent neural network for image generation. arXiv:1502.04623, 2015. Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In AISTATS, volume 6, page 622, 2011. | 1607.06450#34 | 1607.06450#36 | 1607.06450 | [
"1605.02688"
] |
1607.06450#36 | Layer Normalization | Marcus Liwicki and Horst Bunke. IAM-OnDB - an on-line English sentence database acquired from handwritten text on a whiteboard. In ICDAR, 2005. Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013. # Supplementary Material # Application of layer normalization to each experiment This section describes how layer normalization is applied to each of the paper's experiments. For notational convenience, we define layer normalization as a function mapping $\mathrm{LN}: \mathbb{R}^D \rightarrow \mathbb{R}^D$ with two sets of adaptive parameters, gains $\alpha$ and biases $\beta$: $\mathrm{LN}(\mathbf{z}; \alpha, \beta) = \frac{\mathbf{z} - \mu}{\sigma} \odot \alpha + \beta$, (15) $\mu = \frac{1}{D}\sum_{i=1}^{D} z_i$, $\sigma = \sqrt{\frac{1}{D}\sum_{i=1}^{D}(z_i - \mu)^2}$, (16) where $z_i$ is the $i$-th element of the vector $\mathbf{z}$. | 1607.06450#35 | 1607.06450#37 | 1607.06450 | [
"1605.02688"
] |
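One property worth spelling out from Equations (15)-(16) is the invariance discussed in the main text: rescaling or shifting every element of z leaves LN(z; α, β) unchanged. A quick NumPy check, purely as an illustration (the epsilon added for numerical stability is an assumption of the sketch):

```python
import numpy as np

def LN(z, alpha, beta, eps=1e-12):
    mu = z.mean()                                   # Eq. (16)
    sigma = np.sqrt(np.mean((z - mu) ** 2) + eps)
    return (z - mu) / sigma * alpha + beta          # Eq. (15)

rng = np.random.default_rng(0)
z = rng.normal(size=16)
alpha, beta = np.ones(16), np.zeros(16)

print(np.allclose(LN(z, alpha, beta), LN(3.7 * z, alpha, beta)))  # True: scale invariance
print(np.allclose(LN(z, alpha, beta), LN(z + 5.0, alpha, beta)))  # True: shift invariance
```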
1607.06450#37 | Layer Normalization | # Teaching machines to read and comprehend and handwriting sequence generation The basic LSTM equations used for these experiments are given by: $(\mathbf{f}_t; \mathbf{i}_t; \mathbf{o}_t; \mathbf{g}_t) = \mathbf{W}_h \mathbf{h}_{t-1} + \mathbf{W}_x \mathbf{x}_t + \mathbf{b}$, (17) $\mathbf{c}_t = \sigma(\mathbf{f}_t) \odot \mathbf{c}_{t-1} + \sigma(\mathbf{i}_t) \odot \tanh(\mathbf{g}_t)$, (18) $\mathbf{h}_t = \sigma(\mathbf{o}_t) \odot \tanh(\mathbf{c}_t)$. (19) The version that incorporates layer normalization is modified as follows: | 1607.06450#36 | 1607.06450#38 | 1607.06450 | [
"1605.02688"
] |
1607.06450#38 | Layer Normalization | $(\mathbf{f}_t; \mathbf{i}_t; \mathbf{o}_t; \mathbf{g}_t) = \mathrm{LN}(\mathbf{W}_h \mathbf{h}_{t-1}; \alpha_1, \beta_1) + \mathrm{LN}(\mathbf{W}_x \mathbf{x}_t; \alpha_2, \beta_2) + \mathbf{b}$, (20) $\mathbf{c}_t = \sigma(\mathbf{f}_t) \odot \mathbf{c}_{t-1} + \sigma(\mathbf{i}_t) \odot \tanh(\mathbf{g}_t)$, (21) $\mathbf{h}_t = \sigma(\mathbf{o}_t) \odot \tanh(\mathrm{LN}(\mathbf{c}_t; \alpha_3, \beta_3))$, (22) where $\alpha_i$, $\beta_i$ are the additive and multiplicative parameters, respectively. Each $\alpha_i$ is initialized to a vector of zeros and each $\beta_i$ is initialized to a vector of ones. # Order embeddings and skip-thoughts These experiments utilize a variant of the gated recurrent unit which is defined as follows: $(\mathbf{z}_t; \mathbf{r}_t) = \mathbf{W}_h \mathbf{h}_{t-1} + \mathbf{W}_x \mathbf{x}_t$, (23) $\tilde{\mathbf{h}}_t = \tanh(\mathbf{W}\mathbf{x}_t + \sigma(\mathbf{r}_t) \odot (\mathbf{U}\mathbf{h}_{t-1}))$, (24) $\mathbf{h}_t = (1 - \sigma(\mathbf{z}_t))\mathbf{h}_{t-1} + \sigma(\mathbf{z}_t)\tilde{\mathbf{h}}_t$. (25) | 1607.06450#37 | 1607.06450#39 | 1607.06450 | [
"1605.02688"
] |
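As an illustration only (not the authors' released code), the layer-normalized LSTM step of Equations (20)-(22) can be written in NumPy roughly as follows. The hidden size, the gate packing order, and the gain/bias initialization (gains to ones, biases to zeros, a common choice) are assumptions made for the sketch.

```python
import numpy as np

def layer_norm(z, gain, bias, eps=1e-5):
    mu = z.mean(-1, keepdims=True)
    sigma = np.sqrt(((z - mu) ** 2).mean(-1, keepdims=True) + eps)
    return (z - mu) / sigma * gain + bias

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ln_lstm_step(x_t, h_prev, c_prev, p):
    # Eq. (20): normalize the recurrent and input contributions separately.
    pre = (layer_norm(h_prev @ p["Wh"], p["g1"], p["b1"])
           + layer_norm(x_t @ p["Wx"], p["g2"], p["b2"])
           + p["b"])
    f, i, o, g = np.split(pre, 4, axis=-1)                          # packed gates
    c_t = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)             # Eq. (21)
    h_t = sigmoid(o) * np.tanh(layer_norm(c_t, p["g3"], p["b3"]))   # Eq. (22)
    return h_t, c_t

d_in, d_h = 8, 16
rng = np.random.default_rng(0)
p = {"Wh": rng.normal(0, 0.1, (d_h, 4 * d_h)),
     "Wx": rng.normal(0, 0.1, (d_in, 4 * d_h)),
     "b": np.zeros(4 * d_h),
     "g1": np.ones(4 * d_h), "b1": np.zeros(4 * d_h),
     "g2": np.ones(4 * d_h), "b2": np.zeros(4 * d_h),
     "g3": np.ones(d_h), "b3": np.zeros(d_h)}
h, c = np.zeros(d_h), np.zeros(d_h)
for t in range(5):
    h, c = ln_lstm_step(rng.normal(size=d_in), h, c, p)
```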
1607.06450#39 | Layer Normalization | Layer normalization is applied as follows: $(\mathbf{z}_t; \mathbf{r}_t) = \mathrm{LN}(\mathbf{W}_h \mathbf{h}_{t-1}; \alpha_1, \beta_1) + \mathrm{LN}(\mathbf{W}_x \mathbf{x}_t; \alpha_2, \beta_2)$, (26) $\tilde{\mathbf{h}}_t = \tanh(\mathrm{LN}(\mathbf{W}\mathbf{x}_t; \alpha_3, \beta_3) + \sigma(\mathbf{r}_t) \odot \mathrm{LN}(\mathbf{U}\mathbf{h}_{t-1}; \alpha_4, \beta_4))$, (27) $\mathbf{h}_t = (1 - \sigma(\mathbf{z}_t))\mathbf{h}_{t-1} + \sigma(\mathbf{z}_t)\tilde{\mathbf{h}}_t$. (28) Just as before, each $\alpha_i$ is initialized to a vector of zeros and each $\beta_i$ is initialized to a vector of ones. | 1607.06450#38 | 1607.06450#40 | 1607.06450 | [
"1605.02688"
] |
1607.06450#40 | Layer Normalization | # Modeling binarized MNIST using DRAW The layer norm is only applied to the output of the LSTM hidden states in this experiment. The version that incorporates layer normalization is modified as follows: $(\mathbf{f}_t; \mathbf{i}_t; \mathbf{o}_t; \mathbf{g}_t) = \mathbf{W}_h \mathbf{h}_{t-1} + \mathbf{W}_x \mathbf{x}_t + \mathbf{b}$, (29) $\mathbf{c}_t = \sigma(\mathbf{f}_t) \odot \mathbf{c}_{t-1} + \sigma(\mathbf{i}_t) \odot \tanh(\mathbf{g}_t)$, (30) $\mathbf{h}_t = \sigma(\mathbf{o}_t) \odot \tanh(\mathrm{LN}(\mathbf{c}_t; \alpha, \beta))$, (31) where $\alpha$, $\beta$ are the additive and multiplicative parameters, respectively. $\alpha$ is initialized to a vector of zeros and $\beta$ is initialized to a vector of ones. # Learning the magnitude of incoming weights We now compare how gradient descent updates change the magnitude of the equivalent weights between the normalized GLM and the original parameterization. The magnitude of the weights is explicitly parameterized using the gain parameter in the normalized model. Assume there is a gradient update that changes the norm of the weight vectors by $\delta_{\mathbf{g}}$. We can project the gradient updates to the weight vector for the normal GLM. The KL metric, i.e. how much the gradient update changes the model prediction, for the normalized model depends only on the magnitude of the prediction error. Specifically, under batch normalization: | 1607.06450#39 | 1607.06450#41 | 1607.06450 | [
"1605.02688"
] |
1607.06450#41 | Layer Normalization | $ds^2 = \tfrac{1}{2}\,\mathrm{vec}([0,0,\delta_{\mathbf{g}}]^\top)^\top \bar{F}(\mathrm{vec}([\mathbf{W},\mathbf{b},\mathbf{g}]^\top))\,\mathrm{vec}([0,0,\delta_{\mathbf{g}}]^\top) = \tfrac{1}{2}\,\delta_{\mathbf{g}}^\top\,\mathbb{E}_{\mathbf{x}\sim P(\mathbf{x})}\!\left[\frac{\mathrm{Cov}(\mathbf{y}\mid\mathbf{x})}{\sigma^2}\right]\delta_{\mathbf{g}}$ (32) Under layer normalization: $ds^2 = \tfrac{1}{2}\,\mathrm{vec}([0,0,\delta_{\mathbf{g}}]^\top)^\top \bar{F}(\mathrm{vec}([\mathbf{W},\mathbf{b},\mathbf{g}]^\top))\,\mathrm{vec}([0,0,\delta_{\mathbf{g}}]^\top) = \tfrac{1}{2}\,\delta_{\mathbf{g}}^\top\,\mathbb{E}_{\mathbf{x}\sim P(\mathbf{x})}\!\left[\frac{1}{\sigma^2}\begin{bmatrix}\mathrm{Cov}(y_1,y_1\mid\mathbf{x})(a_1-\mu)^2 & \cdots & \mathrm{Cov}(y_1,y_H\mid\mathbf{x})(a_1-\mu)(a_H-\mu)\\ \vdots & \ddots & \vdots\\ \mathrm{Cov}(y_H,y_1\mid\mathbf{x})(a_H-\mu)(a_1-\mu) & \cdots & \mathrm{Cov}(y_H,y_H\mid\mathbf{x})(a_H-\mu)^2\end{bmatrix}\right]\delta_{\mathbf{g}}$ (33) Under weight normalization: $ds^2 = \tfrac{1}{2}\,\mathrm{vec}([0,0,\delta_{\mathbf{g}}]^\top)^\top \bar{F}(\mathrm{vec}([\mathbf{W},\mathbf{b},\mathbf{g}]^\top))\,\mathrm{vec}([0,0,\delta_{\mathbf{g}}]^\top) = \tfrac{1}{2}\,\delta_{\mathbf{g}}^\top\,\mathbb{E}_{\mathbf{x}\sim P(\mathbf{x})}\!\left[\begin{bmatrix}\mathrm{Cov}(y_1,y_1\mid\mathbf{x})\frac{a_1^2}{\|\mathbf{w}_1\|_2^2} & \cdots & \mathrm{Cov}(y_1,y_H\mid\mathbf{x})\frac{a_1 a_H}{\|\mathbf{w}_1\|_2\|\mathbf{w}_H\|_2}\\ \vdots & \ddots & \vdots\\ \mathrm{Cov}(y_H,y_1\mid\mathbf{x})\frac{a_H a_1}{\|\mathbf{w}_H\|_2\|\mathbf{w}_1\|_2} & \cdots & \mathrm{Cov}(y_H,y_H\mid\mathbf{x})\frac{a_H^2}{\|\mathbf{w}_H\|_2^2}\end{bmatrix}\right]\delta_{\mathbf{g}}$ (34) | 1607.06450#40 | 1607.06450#42 | 1607.06450 | [
"1605.02688"
] |
1607.06450#42 | Layer Normalization | Whereas, the KL metric in the standard GLM is related to its activities $a_i = \mathbf{w}_i^\top \mathbf{x}$, which depend on both the current weights and the input data. We project the gradient update to the gain parameter $\delta_{g_i}$ of the $i$-th neuron onto its weight vector as $\delta_{g_i}\frac{\mathbf{w}_i}{\|\mathbf{w}_i\|_2}$ in the standard GLM model: $ds^2 = \tfrac{1}{2}\,\mathrm{vec}([\delta_{g_i}\tfrac{\mathbf{w}_i^\top}{\|\mathbf{w}_i\|_2},0,\delta_{g_j}\tfrac{\mathbf{w}_j^\top}{\|\mathbf{w}_j\|_2},0]^\top)^\top F([\mathbf{w}_i^\top,b_i,\mathbf{w}_j^\top,b_j]^\top)\,\mathrm{vec}([\delta_{g_i}\tfrac{\mathbf{w}_i^\top}{\|\mathbf{w}_i\|_2},0,\delta_{g_j}\tfrac{\mathbf{w}_j^\top}{\|\mathbf{w}_j\|_2},0]^\top) = \frac{\delta_{g_i}\delta_{g_j}}{2\|\mathbf{w}_i\|_2\|\mathbf{w}_j\|_2}\,\mathbb{E}_{\mathbf{x}\sim P(\mathbf{x})}\!\left[\mathrm{Cov}(y_i,y_j\mid\mathbf{x})\,a_i a_j\right]$ (35) | 1607.06450#41 | 1607.06450#43 | 1607.06450 | [
"1605.02688"
] |
1607.06450#43 | Layer Normalization | The batch normalized and layer normalized models are therefore more robust to the scaling of the input and its parameters than the standard model. | 1607.06450#42 | 1607.06450 | [
"1605.02688"
] |
|
1607.01759#0 | Bag of Tricks for Efficient Text Classification | arXiv:1607.01759v3 [cs.CL] 9 Aug 2016 # Bag of Tricks for Efficient Text Classification # Armand Joulin Edouard Grave Piotr Bojanowski Tomas Mikolov Facebook AI Research {ajoulin,egrave,bojanowski,tmikolov}@fb.com # Abstract This paper explores a simple and efficient baseline for text classification. Our experiments show that our fast text classifier fastText is often on par with deep learning classifiers in terms of accuracy, and many orders of magnitude faster for training and evaluation. We can train fastText on more than one billion words in less than ten minutes using a standard multicore CPU, and classify half a million sentences among 312K classes in less than a minute. In this work, we explore ways to scale these baselines to very large corpus with a large output space, in the context of text classifi | 1607.01759#1 | 1607.01759 | [
"1606.01781"
] |
|
1607.01759#1 | Bag of Tricks for Efficient Text Classification | cation. Inspired by the recent work in efficient word representation learning (Mikolov et al., 2013; Levy et al., 2015), we show that linear models with a rank constraint and a fast loss approximation can train on a billion words within ten minutes, while achieving performance on par with the state-of-the-art. We evaluate the quality of our approach fastText1 on two different tasks, namely tag prediction and sentiment analysis. # 1 Introduction Text classification is an important task in Natural Language Processing with many applications, such as web search, information retrieval, ranking and document classification (Deerwester et al., 1990; Pang and Lee, 2008). Recently, models based on neural networks have become increasingly popular (Kim, 2014; Zhang and LeCun, 2015; Conneau et al., 2016). While these models achieve very good performance in practice, they tend to be relatively slow both at train and test time, limiting their use on very large datasets. | 1607.01759#0 | 1607.01759#2 | 1607.01759 | [
"1606.01781"
] |
1607.01759#2 | Bag of Tricks for Efficient Text Classification | of- are text for ten considered as (Joachims, 1998; problems classiï¬ cation Fan et al., 2008). McCallum and Nigam, 1998; Despite their simplicity, they often obtain state- of-the-art performances if the right features are used (Wang and Manning, 2012). They also have the potential to scale to very large cor- pus (Agarwal et al., 2014). A simple and efï¬ cient baseline for sentence classiï¬ cation is to represent sentences as bag of words (BoW) and train a linear classiï¬ er, e.g., a logistic regression or an SVM (Joachims, 1998; Fan et al., 2008). However, linear classiï¬ ers do not share parameters among features and classes. This possibly limits their generalization in the context of large output space where some classes have very few examples. Common solutions to this problem are to factorize the linear clas- (Schutze, 1992; siï¬ er Mikolov et al., 2013) use multilayer neural (Collobert and Weston, 2008; networks Zhang et al., 2015). Figure 1 shows a simple linear model with rank constraint. | 1607.01759#1 | 1607.01759#3 | 1607.01759 | [
"1606.01781"
] |
1607.01759#3 | Bag of Tricks for Efficient Text Classification | The ï¬ rst weight matrix A is a look-up table over the words. The word representations are then averaged into a text representation, which is in turn fed to a linear classiï¬ er. The text representa- # 1https://github.com/facebookresearch/fastText output hidden x1 x2 . . . xN â 1 xN Figure 1: Model architecture of fastText for a sentence with N ngram features x1, . . . , xN . The features are embedded and averaged to form the hidden variable. tion is an hidden variable which can be potentially be reused. This architecture is similar to the cbow model of Mikolov et al. (2013), where the middle word is replaced by a label. We use the softmax function f to compute the probability distribution over the predeï¬ | 1607.01759#2 | 1607.01759#4 | 1607.01759 | [
"1606.01781"
] |
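To make the model described above concrete, here is a minimal NumPy sketch (not the released fastText implementation) of the rank-constrained linear classifier: n-gram features are looked up in an embedding matrix A, averaged into a hidden text representation, and fed to a linear softmax classifier B trained with the negative log-likelihood. The dimensions and the toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim, n_classes = 1000, 10, 5

A = rng.normal(0, 0.1, (vocab_size, dim))   # word/ngram embeddings (look-up table)
B = rng.normal(0, 0.1, (dim, n_classes))    # linear classifier on the averaged features

def predict_proba(feature_ids):
    # Average the embeddings of the n-gram features -> hidden text representation.
    hidden = A[feature_ids].mean(axis=0)
    logits = hidden @ B
    logits -= logits.max()
    p = np.exp(logits)
    return p / p.sum()

def nll(docs, labels):
    # Negative log-likelihood: -1/N * sum_n log f(B A x_n)[y_n]
    return -np.mean([np.log(predict_proba(x)[y]) for x, y in zip(docs, labels)])

docs = [rng.integers(0, vocab_size, size=20) for _ in range(8)]
labels = rng.integers(0, n_classes, size=8)
print(nll(docs, labels))
```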
1607.01759#4 | Bag of Tricks for Efficient Text Classification | ned classes. For a set of N doc- uments, this leads to minimizing the negative log- likelihood over the classes: â 1 N N X n=1 yn log(f (BAxn)), where xn is the normalized bag of features of the n- th document, yn the label, A and B the weight matri- ces. This model is trained asynchronously on mul- tiple CPUs using stochastic gradient descent and a linearly decaying learning rate. # 2.1 Hierarchical softmax When the number of classes is large, computing the linear classiï¬ er is computationally expensive. More precisely, the computational complexity is O(kh) where k is the number of classes and h the di- mension of the text representation. In order to im- prove our running time, we use a hierarchical soft- max (Goodman, 2001) based on the Huffman cod- ing tree (Mikolov et al., 2013). During training, the computational complexity drops to O(h log2(k)). The hierarchical softmax is also advantageous at test time when searching for the most likely class. Each node is associated with a probability that is the probability of the path from the root to that node. If the node is at depth l + 1 with parents n1, . . . , nl, its probability is l P (nl+1) = Y i=1 P (ni). This means that the probability of a node is always lower than the one of its parent. Exploring the tree with a depth ï¬ rst search and tracking the maximum probability among the leaves allows us to discard any branch associated with a small probability. In practice, we observe a reduction of the complexity to O(h log2(k)) at test time. This approach is fur- ther extended to compute the T -top targets at the cost of O(log(T )), using a binary heap. | 1607.01759#3 | 1607.01759#5 | 1607.01759 | [
"1606.01781"
] |
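The path-probability computation behind the hierarchical softmax can be sketched as follows. This is an illustration under an assumed toy tree (not the actual fastText code): each leaf's probability is the product of binary branch probabilities along its root-to-leaf path, so scoring a class costs a logarithmic number of sigmoid evaluations instead of one per class.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def leaf_log_prob(hidden, path, signs, node_vectors):
    # P(leaf) = prod_i P(n_i): one binary decision per internal node on the
    # root-to-leaf path; `signs` is +1/-1 for branching left/right.
    logp = 0.0
    for node, s in zip(path, signs):
        logp += np.log(sigmoid(s * hidden @ node_vectors[node]))
    return logp

# Toy example: 4 classes in a balanced binary tree with 3 internal nodes.
rng = np.random.default_rng(0)
dim = 10
node_vectors = rng.normal(0, 0.1, (3, dim))      # one vector per internal node
paths = {0: ([0, 1], [+1, +1]), 1: ([0, 1], [+1, -1]),
         2: ([0, 2], [-1, +1]), 3: ([0, 2], [-1, -1])}

hidden = rng.normal(size=dim)
logps = {c: leaf_log_prob(hidden, path, signs, node_vectors)
         for c, (path, signs) in paths.items()}
print(sum(np.exp(v) for v in logps.values()))    # sums to 1 over the 4 leaves
```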
1607.01759#5 | Bag of Tricks for Efficient Text Classification | # 2.2 N-gram features Bag of words is invariant to word order but taking explicitly this order into account is often computa- tionally very expensive. Instead, we use a bag of n-grams as additional features to capture some par- tial information about the local word order. This is very efï¬ cient in practice while achieving compa- rable results to methods that explicitly use the or- der (Wang and Manning, 2012). We maintain a fast and memory efï¬ cient mapping of the n-grams by using the hashing trick (Weinberger et al., 2009) with the same hash- ing function as in Mikolov et al. (2011) and 10M bins if we only used bigrams, and 100M otherwise. | 1607.01759#4 | 1607.01759#6 | 1607.01759 | [
"1606.01781"
] |
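A possible illustration of the hashing of bigram features described above (the hash function and bin count here are placeholders, not the exact ones used by the authors): each bigram is mapped into one of a fixed number of bins, so the feature space stays bounded no matter how many distinct bigrams occur in the corpus.

```python
def bigram_feature_ids(tokens, vocab, n_bins=10_000_000):
    # Unigrams keep their vocabulary index; bigrams are hashed into n_bins
    # buckets placed after the unigram range. A stable hash (e.g. from
    # hashlib) would be used in practice; Python's hash() is only for the demo.
    ids = [vocab[t] for t in tokens if t in vocab]
    for a, b in zip(tokens, tokens[1:]):
        ids.append(len(vocab) + hash((a, b)) % n_bins)
    return ids

vocab = {"the": 0, "cat": 1, "sat": 2}
print(bigram_feature_ids(["the", "cat", "sat"], vocab))
```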
1607.01759#6 | Bag of Tricks for Efficient Text Classification | # 3 Experiments We evaluate fastText on two different tasks. First, we compare it to existing text classifers on the problem of sentiment analysis. Then, we evaluate its capacity to scale to large output space on a tag prediction dataset. Note that our model could be im- plemented with the Vowpal Wabbit library,2 but we observe in practice, that our tailored implementation is at least 2-5Ã faster. # 3.1 Sentiment analysis the Datasets protocol same 8 the n-grams of Zhang et al. (2015). We report from Zhang et al. (2015), and TFIDF baselines level convolutional as well as model (char-CNN) of Zhang and LeCun (2015), the character based convolution recurrent net- work (char-CRNN) of (Xiao and Cho, 2016) and the very deep convolutional network (VDCNN) We also compare of Conneau et al. (2016). 2Using the options --nn, --ngrams and --log multi | 1607.01759#5 | 1607.01759#7 | 1607.01759 | [
"1606.01781"
] |
1607.01759#7 | Bag of Tricks for Efficient Text Classification | Model AG Sogou DBP Yelp P. Yelp F. Yah. A. Amz. F. Amz. P. BoW (Zhang et al., 2015) ngrams (Zhang et al., 2015) ngrams TFIDF (Zhang et al., 2015) char-CNN (Zhang and LeCun, 2015) char-CRNN (Xiao and Cho, 2016) VDCNN (Conneau et al., 2016) 88.8 92.0 92.4 87.2 91.4 91.3 92.9 97.1 97.2 95.1 95.2 96.8 96.6 98.6 98.7 98.3 98.6 98.7 92.2 95.6 95.4 94.7 94.5 95.7 58.0 56.3 54.8 62.0 61.8 64.7 68.9 68.5 68.5 71.2 71.7 73.4 54.6 54.3 52.4 59.5 59.2 63.0 90.4 92.0 91.5 94.5 94.1 95.7 fastText, h = 10 fastText, h = 10, bigram 91.5 92.5 93.9 96.8 98.1 98.6 93.8 95.7 60.4 63.9 72.0 72.3 55.8 60.2 91.2 94.6 Table 1: Test accuracy [%] on sentiment datasets. FastText has been run with the same parameters for all the datasets. It has 10 hidden units and we evaluate it with and without bigrams. For char-CNN, we show the best reported numbers without data augmentation. Zhang and LeCun (2015) Conneau et al. (2016) fastText small char-CNN big char-CNN depth=9 depth=17 depth=29 AG Sogou DBpedia Yelp P. | 1607.01759#6 | 1607.01759#8 | 1607.01759 | [
"1606.01781"
] |
1607.01759#8 | Bag of Tricks for Efficient Text Classification | Yelp F. Yah. A. Amz. F. Amz. P. 1h - 2h - - 8h 2d 2d 3h - 5h - - 1d 5d 5d 24m 25m 27m 28m 29m 1h 2h45 2h45 37m 41m 44m 43m 45m 1h33 4h20 4h25 51m 56m 1h 1h09 1h12 2h 7h 7h 1s 7s 2s 3s 4s 5s 9s 10s Table 2: Training time for a single epoch on sentiment analysis datasets compared to char-CNN and VDCNN. following their evaluation to Tang et al. (2015) protocol. We report their main baselines as well as their two approaches based on recurrent networks (Conv-GRNN and LSTM-GRNN). Results. We present the results in Figure 1. We use 10 hidden units and run fastText for 5 epochs with a learning rate selected on a valida- tion set from {0.05, 0.1, 0.25, 0.5}. On this task, adding bigram information improves the perfor- mance by 1-4%. Overall our accuracy is slightly better than char-CNN and char-CRNN and, a bit worse than VDCNN. Note that we can increase the accuracy slightly by using more n-grams, for example with trigrams, the performance on Sogou goes up to 97.1%. Finally, Figure 3 shows that our method is competitive with the methods pre- sented in Tang et al. (2015). We tune the hyper- parameters on the validation set and observe that using n-grams up to 5 leads to the best perfor- mance. Unlike Tang et al. (2015), fastText does not use pre-trained word embeddings, which can be explained the 1% difference in accuracy. | 1607.01759#7 | 1607.01759#9 | 1607.01759 | [
"1606.01781"
] |
1607.01759#9 | Bag of Tricks for Efficient Text Classification | Model Yelpâ 13 Yelpâ 14 Yelpâ 15 IMDB 59.8 SVM+TF 59.7 CNN Conv-GRNN 63.7 LSTM-GRNN 65.1 61.8 61.0 65.5 67.1 62.4 61.5 66.0 67.6 40.5 37.5 42.5 45.3 fastText 64.2 66.2 66.6 45.2 Table 3: Comparision with Tang et al. (2015). The hyper- parameters are chosen on the validation set. We report the test accuracy. Training time. Both char-CNN and VDCNN are trained on a NVIDIA Tesla K40 GPU, while our models are trained on a CPU using 20 threads. Ta- ble 2 shows that methods using convolutions are sev- eral orders of magnitude slower than fastText. | 1607.01759#8 | 1607.01759#10 | 1607.01759 | [
"1606.01781"
] |
1607.01759#10 | Bag of Tricks for Efficient Text Classification | While it is possible to have a 10Ã speed up for char-CNN by using more recent CUDA implemen- tations of convolutions, fastText takes less than a minute to train on these datasets. The GRNNs method of Tang et al. (2015) takes around 12 hours per epoch on CPU with a single thread. Our speed- Input Prediction Tags taiyoucon 2011 digitals: individuals digital pho- tos from the anime convention taiyoucon 2011 in mesa, arizona. if you know the model and/or the character, please comment. #cosplay #24mm #anime #animeconvention #arizona #canon #con #convention #cos #cosplay #costume #mesa #play #taiyou #taiyoucon 2012 twin cities pride 2012 twin cities pride pa- rade #minneapolis #2012twincitiesprideparade neapolis #mn #usa #min- beagle enjoys the snowfall #snow #2007 #beagle #hillsboro #january #maddison #maddy #oregon #snow christmas #christmas #cameraphone #mobile euclid avenue #newyorkcity #cleveland #euclidavenue Table 4: Examples from the validation set of YFCC100M dataset obtained with fastText with 200 hidden units and bigrams. We show a few correct and incorrect tag predictions. up compared to neural network based methods in- creases with the size of the dataset, going up to at least a 15,000Ã speed-up. # 3.2 Tag prediction Dataset and baselines. To test scalability of our approach, further evaluation is carried on (Thomee et al., 2016) the YFCC100M dataset which consists of almost 100M images with cap- tions, titles and tags. We focus on predicting the tags according to the title and caption (we do not use the images). We remove the words and tags occurring less than 100 times and split the data into a train, validation and test set. The train set contains 91,188,648 examples (1.5B tokens). The validation has 930,497 examples and the test set 543,424. The vocabulary size is 297,141 and there are 312,116 unique tags. | 1607.01759#9 | 1607.01759#11 | 1607.01759 | [
"1606.01781"
] |
1607.01759#11 | Bag of Tricks for Efficient Text Classification | We will release a script that recreates this dataset so that our numbers could be reproduced. We report precision at 1. We consider a frequency-based baseline which tag. We also com- predicts the most frequent pare with Tagspace (Weston et al., 2014), which is a tag prediction model similar to ours, but based on the Wsabie model of Weston et al. (2011). While the Tagspace model is described using convolutions, we consider the linear version, which achieves com- parable performance but is much faster. Model prec@1 Running time Train Test Freq. baseline Tagspace, h = 50 Tagspace, h = 200 2.2 30.1 35.6 - 3h8 5h32 - 6h 15h fastText, h = 50 31.2 fastText, h = 50, bigram 36.7 fastText, h = 200 41.1 fastText, h = 200, bigram 46.1 6m40 7m47 10m34 13m38 48s 50s 1m29 1m37 | 1607.01759#10 | 1607.01759#12 | 1607.01759 | [
"1606.01781"
] |
1607.01759#12 | Bag of Tricks for Efficient Text Classification | Table 5: Prec@1 on the test set for tag prediction on YFCC100M. We also report the training time and test time. Test time is reported for a single thread, while training uses 20 threads for both models. and 200. Both models achieve a similar perfor- mance with a small hidden layer, but adding bi- grams gives us a signiï¬ cant boost in accuracy. At test time, Tagspace needs to compute the scores for all the classes which makes it relatively slow, while our fast inference gives a signiï¬ cant speed-up when the number of classes is large (more than 300K here). Overall, we are more than an order of mag- nitude faster to obtain model with a better quality. The speedup of the test phase is even more signiï¬ - cant (a 600à speedup). | 1607.01759#11 | 1607.01759#13 | 1607.01759 | [
"1606.01781"
] |
1607.01759#13 | Bag of Tricks for Efficient Text Classification | Table 4 shows some quali- tative examples. Results and training time. Table 5 presents a comparison of fastText and the baselines. We run fastText for 5 epochs and compare it to Tagspace for two sizes of the hidden layer, i.e., 50 # 4 Discussion and conclusion In this work, we propose a simple baseline method for text classiï¬ cation. Unlike unsupervisedly trained word vectors from word2vec, our word features can be averaged together to form good sentence repre- sentations. In several tasks, fastText obtains per- formance on par with recently proposed methods in- spired by deep learning, while being much faster. Although deep neural networks have in theory much higher representational power than shallow models, it is not clear if simple text classiï¬ cation problems such as sentiment analysis are the right ones to eval- uate them. | 1607.01759#12 | 1607.01759#14 | 1607.01759 | [
"1606.01781"
] |
1607.01759#14 | Bag of Tricks for Efficient Text Classification | We will publish our code so that the research community can easily build on top of our work. Acknowledgement. We thank Gabriel Synnaeve, Herv´e G´egou, Jason Weston and L´eon Bottou for their help and comments. We also thank Alexis Con- neau, Duyu Tang and Zichao Zhang for providing us with information about their methods. # References [Agarwal et al.2014] Alekh Agarwal, Olivier Chapelle, Miroslav Dud´ık, and John Langford. 2014. A reliable effective terascale linear learning system. JMLR. [Collobert and Weston2008] Ronan Collobert and Jason Weston. 2008. A uniï¬ ed architecture for natural lan- guage processing: Deep neural networks with multi- task learning. In ICML. [Conneau et al.2016] Alexis Conneau, Holger Schwenk, Lo¨ıc Barrault, and Yann Lecun. 2016. Very deep con- volutional networks for natural language processing. arXiv preprint arXiv:1606.01781. [Deerwester et al.1990] Scott Deerwester, Susan T Du- mais, George W Furnas, Thomas K Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal of the American society for informa- tion science. [Fan et al.2008] Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. 2008. Li- blinear: A library for large linear classiï¬ cation. JMLR. [Goodman2001] Joshua Goodman. 2001. Classes for fast maximum entropy training. In ICASSP. [Joachims1998] Thorsten Joachims. 1998. Text catego- rization with support vector machines: Learning with many relevant features. | 1607.01759#13 | 1607.01759#15 | 1607.01759 | [
"1606.01781"
] |
1607.01759#15 | Bag of Tricks for Efficient Text Classification | Springer. [Kim2014] Yoon Kim. 2014. Convolutional neural net- works for sentence classiï¬ cation. In EMNLP. [Levy et al.2015] Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. TACL. [McCallum and Nigam1998] Andrew McCallum and Ka- mal Nigam. 1998. A comparison of event models for naive bayes text classiï¬ cation. In AAAI workshop on learning for text categorization. [Mikolov et al.2011] Tom´aË s Mikolov, Anoop Deoras, Daniel Povey, Luk´aË s Burget, and Jan Ë Cernock`y. 2011. | 1607.01759#14 | 1607.01759#16 | 1607.01759 | [
"1606.01781"
] |
1607.01759#16 | Bag of Tricks for Efficient Text Classification | Strategies for training large scale neural network lan- guage models. In Workshop on Automatic Speech Recognition and Understanding. IEEE. [Mikolov et al.2013] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efï¬ cient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. 2008. Opinion mining and sentiment analysis. Foundations and trends in information retrieval. | 1607.01759#15 | 1607.01759#17 | 1607.01759 | [
"1606.01781"
] |
1607.01759#17 | Bag of Tricks for Efficient Text Classification | [Schutze1992] Hinrich Schutze. 1992. Dimensions of meaning. In Supercomputing. [Tang et al.2015] Duyu Tang, Bing Qin, and Ting Liu. 2015. Document modeling with gated recurrent neural network for sentiment classiï¬ cation. In EMNLP. [Thomee et al.2016] Bart Thomee, David A Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Dou- 2016. glas Poland, Damian Borth, and Li-Jia Li. Yfcc100m: The new data in multimedia research. vol- ume 59, pages 64â 73. ACM. [Wang and Manning2012] Sida Wang and Christopher D Manning. 2012. Baselines and bigrams: | 1607.01759#16 | 1607.01759#18 | 1607.01759 | [
"1606.01781"
] |
1607.01759#18 | Bag of Tricks for Efficient Text Classification | Simple, good sentiment and topic classiï¬ cation. In ACL. [Weinberger et al.2009] Kilian Weinberger, Anirban Das- gupta, John Langford, Alex Smola, and Josh Atten- berg. 2009. Feature hashing for large scale multitask learning. In ICML. [Weston et al.2011] Jason Weston, Samy Bengio, and Nicolas Usunier. 2011. Wsabie: Scaling up to large vocabulary image annotation. In IJCAI. [Weston et al.2014] Jason Weston, Sumit Chopra, and Keith Adams. 2014. #tagspace: Semantic embed- dings from hashtags. In EMNLP. [Xiao and Cho2016] Yijun Xiao and Kyunghyun Cho. 2016. | 1607.01759#17 | 1607.01759#19 | 1607.01759 | [
"1606.01781"
] |
1607.01759#19 | Bag of Tricks for Efficient Text Classification | Efï¬ cient character-level document classiï¬ cation by combining convolution and recurrent layers. arXiv preprint arXiv:1602.00367. [Zhang and LeCun2015] Xiang Zhang and Yann LeCun. 2015. Text understanding from scratch. arXiv preprint arXiv:1502.01710. [Zhang et al.2015] Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classiï¬ cation. In NIPS. | 1607.01759#18 | 1607.01759 | [
"1606.01781"
] |
|
1607.00036#0 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | arXiv:1607.00036v2 [cs.LG] 17 Mar 2017 # Dynamic Neural Turing Machine with Continuous and Discrete Addressing Schemes Caglar Gulcehre1, Sarath Chandar1, Kyunghyun Cho2, Yoshua Bengio1 1University of Montreal, [email protected] 2New York University, [email protected] Keywords: neural networks, memory, neural Turing machines, natural language processing # Abstract We extend the neural Turing machine (NTM) model into a dynamic neural Turing machine (D-NTM) by introducing a trainable memory addressing scheme. This addressing scheme maintains for each memory cell two separate vectors, a content vector and an address vector. This allows the D-NTM to learn a wide variety of location-based addressing strategies, including both linear and nonlinear ones. We implement the D-NTM with both continuous, differentiable and discrete, non-differentiable read/write mechanisms. We investigate the mechanisms and effects of learning to read and write into a memory through experiments on Facebook bAbI tasks using both a feedforward and a GRU controller. The D-NTM is evaluated on a set of Facebook bAbI tasks and shown to outperform NTM and LSTM baselines. We have done extensive analysis of our model and different variations of NTM on the bAbI task. We also provide further experimental results on sequential pMNIST, Stanford Natural Language Inference, associative recall and copy tasks. # 1 Introduction | 1607.00036#1 | 1607.00036 | [
"1511.02301"
] |
|
1607.00036#1 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | Designing general-purpose learning algorithms is one of the long-standing goals of artificial intelligence. Despite the success of deep learning in this area (see, e.g., (Goodfellow et al., 2016)), there is still a set of complex tasks that are not well addressed by conventional neural network based models. Those tasks often require a neural network to be equipped with an explicit, external memory in which a larger, potentially unbounded, set of facts needs to be stored. They include, but are not limited to, episodic question-answering (Weston et al., 2015b; Hermann et al., 2015; Hill et al., 2015), compact algorithms (Zaremba et al., 2015), dialogue (Serban et al., 2016; Vinyals and Le, 2015) and video caption generation (Yao et al., 2015). | 1607.00036#0 | 1607.00036#2 | 1607.00036 | [
"1511.02301"
] |
1607.00036#2 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | 1 Recently two promising approaches that are based on neural networks for this type of tasks have been proposed. Memory networks (Weston et al., 2015b) explicitly store all the facts, or information, available for each episode in an external memory (as con- tinuous vectors) and use the attention-based mechanism to index them when returning an output. On the other hand, neural Turing machines (NTM, (Graves et al., 2014)) read each fact in an episode and decides whether to read, write the fact or do both to the external, differentiable memory. A crucial difference between these two models is that the memory network does not have a mechanism to modify the content of the external memory, while the NTM does. In practice, this leads to easier learning in the memory network, which in turn resulted in that it being used more in realistic tasks (Bordes et al., 2015; Dodge et al., 2015). On the contrary, the NTM has mainly been tested on a series of small-scale, carefully-crafted tasks such as copy and associative recall. However, NTM is more expressive, precisely because it can store and modify the internal state of the network as it processes an episode and we were able to use it without any modiï¬ cations on the model for different tasks. The original NTM supports two modes of addressing (which can be used simulta- neously.) They are content-based and location-based addressing. We notice that the location-based strategy is based on linear addressing. The distance between each pair of consecutive memory cells is ï¬ xed to a constant. We address this limitation, in this paper, by introducing a learnable address vector for each memory cell of the NTM with least recently used memory addressing mechanism, and we call this variant a dynamic neural Turing machine (D-NTM). We evaluate the proposed D-NTM on the full set of Facebook bAbI task (We- ston et al., 2015b) using either continuous, differentiable attention or discrete, non- differentiable attention (Zaremba and Sutskever, 2015) as an addressing strategy. Our experiments reveal that it is possible to use the discrete, non-differentiable attention mechanism, and in fact, the D-NTM with the discrete attention and GRU controller outperforms the one with the continuous attention. | 1607.00036#1 | 1607.00036#3 | 1607.00036 | [
"1511.02301"
] |
1607.00036#3 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | We also provide results on sequen- tial pMNIST, Stanford Natural Language Inference (SNLI) task and algorithmic tasks proposed by (Graves et al., 2014) in order to investigate the ability of our model when dealing with long-term dependencies. We summarize our contributions in this paper as below, â ¢ We propose a variation of neural Turing machine called a dynamic neural Turing machine (D-NTM) which employs a learnable and location-based addressing. â ¢ We demonstrate the application of neural Turing machines on more natural and less toyish tasks, episodic question-answering, natural language entailment, digit classiï¬ cation from the pixes besides the toy tasks. We provide a detailed analysis of our model on the bAbI task. | 1607.00036#2 | 1607.00036#4 | 1607.00036 | [
"1511.02301"
] |
1607.00036#4 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | â ¢ We propose to use the discrete attention mechanism and empirically show that, it can outperform the continuous attention based addressing for episodic QA task. â ¢ We propose a curriculum strategy for our model with the feedforward controller and discrete attention that improves our results signiï¬ cantly. 2 In this paper, we avoid doing architecture engineering for each task we work on and focus on pure modelâ s overall performance on each without task-speciï¬ c modiï¬ cations on the model. In that respect, we mainly compare our model against similar models such as NTM and LSTM without task-speciï¬ c modiï¬ cations. This helps us to better understand the modelâ s failures. The remainder of this article is organized as follows. In Section 2, we describe the architecture of Dynamic Neural Turing Machine (D-NTM). In Section 3, we describe the proposed addressing mechanism for D-NTM. Section 4 explains the training pro- cedure. In Section 5, we brieï¬ y discuss some related models. In Section 6, we report results on episodic question answering task. In Section 7, 8, and 9 we discuss the re- sults in sequential MNIST, SNLI, and algorithmic toy tasks respectively. Section 10 concludes the article. # 2 Dynamic Neural Turing Machine The proposed dynamic neural Turing machine (D-NTM) extends the neural Turing ma- chine (NTM, (Graves et al., 2014)) which has a modular design. The D-NTM consists of two main modules: a controller, and a memory. The controller, which is often imple- mented as a recurrent neural network, issues a command to the memory so as to read, write to and erase a subset of memory cells. # 2.1 Memory D-NTM consists of an external memory Mt, where each memory cell i in Mt[i] is partitioned into two parts: a trainable address vector At[i] â R1à da and a content vector Ct[i] â R1à dc. Mt[i] = [At[i]; Ct[i]] . Memory Mt consists of N such memory cells and hence represented by a rectangular matrix Mt â RN à (dc+da): Mt = [At; Ct] . The ï¬ rst part At â RN à | 1607.00036#3 | 1607.00036#5 | 1607.00036 | [
"1511.02301"
] |
1607.00036#5 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | da is a learnable address matrix, and the second Ct â RN Ã dc a content matrix. The address part At is considered a model parameter that is updated during training. During inference, the address part is not overwritten by the controller and remains constant. On the other hand, the content part Ct is both read and written by the controller both during training and inference. At the beginning of each episode, the content part of the memory is refreshed to be an all-zero matrix, C0 = 0. This introduction of the learnable address portion for each memory cell allows the model to learn sophisticated location-based addressing strategies. # 2.2 Controller At each timestep t, the controller (1) receives an input value xt, (2) addresses and reads the memory and creates the content vector rt, (3) erases/writes a portion of the memory, (4) updates its own hidden state ht, and (5) outputs a value yt (if needed.) In this 3 paper, we use both a gated recurrent unit (GRU, (Cho et al., 2014)) and a feedforward- controller to implement the controller such that for a GRU controller ht = GRU(xt, htâ 1, rt) (1) and for a feedforward-controller ht = Ï (xt, rt). (2) # 2.3 Model Operation At each timestep t, the controller receives an input value xt. Then it generates the read weights wr t , the content vector read from the memory rt â R(da+dc)Ã 1 is computed as ry = (M,) wy, (3) The hidden state of the controller (ht) is conditioned on the memory content vector rt and based on this current hidden state of the controller. The model predicts the output label yt for the input. The controller also updates the memory by erasing the old content and writing a new content into the memory. The controller computes three vectors: erase vector et â | 1607.00036#4 | 1607.00036#6 | 1607.00036 | [
"1511.02301"
] |
1607.00036#6 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | Rdcà 1, write weights ww t â RN à 1, and candidate memory content vector ¯ct â Rdcà 1. These vectors are used to modify the memory. Erase vector is computed by a simple MLP which is conditioned on the hidden state of the controller ht. The candidate memory content vector ¯ct is computed based on the current hidden state of the controller ht â Rdhà 1 and the input of the controller which is scaled by a scalar gate αt. The αt is a function of the hidden state and the input of the controller. a= f (i, Xz), (4) # αt = f (ht, xt), ¯ct = ReLU(Wmht + αtWxxt). C, = ReLU(W,,h; + a; W--Xt). (5) where Wm and Wx are trainable matrices and ReLU is the rectiï¬ ed linear activation function (Nair and Hinton, 2010). Given the erase, write and candidate memory content vectors (et, ww t , and ¯ct respectively), the memory matrix is updated by, C,[j] = (1 â erw;?[9]) © Craly] + wP Ue. (6) where the index j in Ct[j] denotes the j-th row of the content matrix Ct of the memory matrix Mt. No Operation (NOP) As found in (Joulin and Mikolov, 2015), an additional NOP operation can be useful for the controller not to access the memory only once in a while. We model this situation by designating one memory cell as a NOP cell to which the controller should access when it does not need to read or write into the memory. Because reading from or writing into this memory cell is completely ignored. We illustrate and elaborate more on the read and write operations of the D-NTM in Figure 1. t are the most crucial parts of the model since the controller decide where to read from and write into the memory by using those. We elaborate this in the next section. 4 (4) (5) | 1607.00036#5 | 1607.00036#7 | 1607.00036 | [
"1511.02301"
] |
1607.00036#7 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | Story Controller Memory : Address 1 Content - Address 2 Content Fact t-1 J++ }e( } Address 3 Content 4 Address 4, Content aaa = â â | [Ant Address 5 Content on re Address 6 Content Fact t O-O-O- Question O-O-O- â â _1__ address Z| Contd Reader | â â Content Figure 1: A graphical illustration of the proposed dynamic neural Turing machine with the recurrent-controller. The controller receives the fact as a continuous vector encoded by a recurrent neural network, computes the read and write weights for addressing the memory. If the D-NTM automatically detects that a query has been received, it returns an answer and terminates. # 3 Addressing Mechanism Each of the address vectors (both read and write) is computed in similar ways. First, the controller computes a key vector: k, = Wh, + by, Both for the read and the write operations, kt â R(da+dc)à 1. Wk â R(da+dc)à N and bk â R(da+dc)à 1 are the learnable weight matrix and bias respectively of kt. Also, the sharpening factor βt â R â ¥ 1 is computed as follows: B= softplus(uj hâ +bg) +1. (7) where uβ and bβ are the parameters of the sharpening factor βt and softplus is deï¬ ned as follows: softplus(x) = log(exp(x) + 1) (8) Given the key kt and sharpening factor βt, the logits for the address weights are then computed by, zt[i] = βtS (kt, Mt[i]) (9) where the similarity function is basically the cosine distance where it is deï¬ ned as S (x, y) â R and 1 â ¥ S (x, y) â ¥ â 1, x-y S09) ~ Teil â ¬ is a small positive value to avoid division by zero. We have used â ¬ = le â 7 in all our experiments. | 1607.00036#6 | 1607.00036#8 | 1607.00036 | [
"1511.02301"
] |
1607.00036#8 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | The address weight generation which we have described in this section is same with the content based addressing mechanism proposed in (Graves et al., 2014). 5 # 3.1 Dynamic Least Recently Used Addressing We introduce a memory addressing operation that can learn to put more emphasis on the least recently used (LRU) memory locations. As observed in (Santoro et al., 2016; Rae et al., 2016), we ï¬ nd it easier to learn the write operations with the use of LRU addressing. To learn a LRU based addressing, ï¬ rst we compute the exponentially moving av- erages of the logits (zt) as vt, where it can be computed as vt = 0.1vtâ 1 + 0.9zt. We rescale the accumulated vt with γt, such that the controller adjusts the inï¬ uence of how much previously written memory locations should effect the attention weights of a particular time-step. Next, we subtract vt from zt in order to reduce the weights of previously read or written memory locations. γt is a shallow MLP with a scalar output and it is conditioned on the hidden state of the controller. γt is parametrized with the parameters uγ and bγ, "= sigmoid(u] hy +b,), w; = softmax(z; â Â¥Vi-1)- (10) (11) | 1607.00036#7 | 1607.00036#9 | 1607.00036 | [
"1511.02301"
] |
1607.00036#9 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | This addressing method increases the weights of the least recently used rows of the memory. The magnitude of the inï¬ uence of the least-recently used memory locations is being learned and adjusted with γt. Our LRU addressing is dynamic due to the modelâ s ability to switch between pure content-based addressing and LRU. During the training, we do not backpropagate through vt. Due to the dynamic nature of this addressing mechanism, it can be used for both read and write operations. If needed, the model will automatically learn to disable LRU while reading from the memory. The address vector deï¬ ned in Equation (11) is a continuous vector. This makes the addressing operation differentiable and we refer to such a D-NTM as continuous D-NTM. # 3.2 Discrete Addressing By deï¬ nition in Eq. (11), every element in the address vector wt is positive and sums up to one. In other words, we can treat this vector as the probabilities of a categorical distribution C(wt) with dim(wt) choices: p[j] = wt[j], where wt[j] is the j-th element of wt. We can readily sample from this categorical distribution and form an one-hot vector Ë wt such that Ë wt[k] = I(k = j), where j â ¼ C(w), and I is an indicator function. If we use Ë wt instead of wt, then we will read and write from only one memory cell at a time. This makes the addressing operation non-differentiable and we refer to such a D-NTM as discrete D-NTM. In discrete D-NTM we sample the one-hot vector during training. Once training is over, we switch to a deterministic strategy. We simply choose an element of wt with the largest value to be the index of the target memory cell, such that | 1607.00036#8 | 1607.00036#10 | 1607.00036 | [
"1511.02301"
] |
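A rough NumPy sketch of the address-weight computation described in the addressing sections above (content-based key matching with a learned sharpening factor, followed by the dynamic least-recently-used correction). The shapes, initialization and controller features are placeholder assumptions, not the authors' implementation.

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def address_weights(h, M, v_prev, p, eps=1e-7):
    # Content-based logits: sharpened cosine similarity between key and memory rows.
    k = p["Wk"] @ h + p["bk"]
    beta = softplus(p["u_beta"] @ h + p["b_beta"]) + 1.0
    sim = M @ k / (np.linalg.norm(M, axis=1) * np.linalg.norm(k) + eps)
    z = beta * sim
    # Dynamic LRU: damp locations whose logits were recently high.
    gamma = 1.0 / (1.0 + np.exp(-(p["u_gamma"] @ h + p["b_gamma"])))
    w = softmax(z - gamma * v_prev)
    v = 0.1 * v_prev + 0.9 * z          # exponential moving average of the logits
    return w, v

rng = np.random.default_rng(0)
N, d_mem, d_h = 6, 12, 16               # memory rows, cell width, controller size
p = {"Wk": rng.normal(0, 0.1, (d_mem, d_h)), "bk": np.zeros(d_mem),
     "u_beta": rng.normal(0, 0.1, d_h), "b_beta": 0.0,
     "u_gamma": rng.normal(0, 0.1, d_h), "b_gamma": 0.0}
M = rng.normal(size=(N, d_mem))
w, v = address_weights(rng.normal(size=d_h), M, np.zeros(N), p)
```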
1607.00036#10 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | Ë wt[k] = I(k = argmax(wt)). 6 # 3.3 Multi-step Addressing At each time-step, controller may require more than one-step for accessing to the mem- ory. The original NTM addresses this by implementing multiple sets of read, erase and write heads. In this paper, we explore an option of allowing each head to operate more than once at each timestep, similar to the multi-hop mechanism from the end-to-end memory network (Sukhbaatar et al., 2015). # 4 Training D-NTM Once the proposed D-NTM is executed, it returns the output distribution p(y(n)|x(n) for the nth example that is parameterized with θ. We deï¬ ne our cost function as the neg- ative log-likelihood: N 1 n n n C00) = Hy Lowry iaâ , 8), (12) where θ is a set of all the parameters of the model. | 1607.00036#9 | 1607.00036#11 | 1607.00036 | [
"1511.02301"
] |
1607.00036#11 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | Continuous D-NTM, just like the original NTM, is fully end-to-end differentiable and hence we can compute the gradient of this cost function by using backpropagation and learn the parameters of the model with a gradient-based optimization algorithm, such as stochastic gradient descent, to train it end-to-end. However, in discrete D- NTM, we use sampling-based strategy for all the heads during training. This clearly makes the use of backpropagation infeasible to compute the gradient, as the sampling procedure is not differentiable. # 4.1 Training discrete D-NTM To train discrete D-NTM, we use REINFORCE (Williams, 1992) together with the three variance reduction techniquesâ global baseline, input-dependent baseline and variance normalizationâ suggested in (Mnih and Gregor, 2014). | 1607.00036#10 | 1607.00036#12 | 1607.00036 | [
"1511.02301"
] |
1607.00036#12 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | Let us deï¬ ne R(x) = log p(y|x1, . . . , xT ; θ) as a reward. We ï¬ rst center and re- scale the reward by, ~ R(x) -l ¢) = BO? Vor+eâ ¬ where b and Ï is running average and standard deviation of R. We can further center it for each input x separately, i.e., ¯R(x) = Ë R(x) â b(x), where b(x) is computed by a baseline network which takes as input x and predicts its estimated reward. The baseline network is trained to minimize the Huber loss (Huber, 1964) between the true reward Ë R(x) and the predicted reward b(x). This is also called as input based baseline (IBB) which is introduced in (Mnih and Gregor, 2014). | 1607.00036#11 | 1607.00036#13 | 1607.00036 | [
"1511.02301"
] |
1607.00036#13 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | 7 We use the Huber loss to learn the baseline b(x) which is deï¬ ned by, Hδ(z) = z2 for |z| â ¤ δ, δ(2|z| â δ), otherwise, due to its robustness where z would be ¯R(x) in this case. As a further measure to reduce the variance, we regularize the negative entropy of all those category distributions to facilitate a better exploration during training (Xu et al., 2015). Then, the cost function for each training example is approximated as in Equation (13). In this equation, we write the terms related to compute the REINFORCE gradients that includes terms for the entropy regularization on the action space, the likelihood- ratio term to compute the REINFORCE gradients both for the read and the write heads. C"(0) = â log p(y|xur, Wi.7, Wy) J -»y R(x â )(log p(w x17) + log p(w? |X1-r) j=l a w'|x7) +H(w 4 |[Xur))- (13) where J is the number of addressing steps, λH is the entropy regularization coefï¬ - cient, and H denotes the entropy. | 1607.00036#12 | 1607.00036#14 | 1607.00036 | [
"1511.02301"
] |
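A small sketch of the Huber loss exactly as defined in the text (note the paper's z² rather than the more common z²/2 in the quadratic branch); the function name and defaults are ours.

```python
import numpy as np

def huber(z, delta=1.0):
    """Huber loss: z^2 for |z| <= delta, delta * (2|z| - delta) otherwise.
    Used here to regress the baseline b(x) onto the centered reward."""
    a = np.abs(z)
    return np.where(a <= delta, z ** 2, delta * (2.0 * a - delta))

# The baseline network b(x) is fit by minimizing huber(R_tilde(x) - b(x)).
print(huber(np.array([-0.3, 0.9, 2.5])))     # [0.09 0.81 4.  ]
```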
1607.00036#14 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | # 4.2 Curriculum Learning for the Discrete Attention Training the discrete attention with a feedforward controller and REINFORCE is challenging. We propose a curriculum strategy for training with the discrete attention in order to tackle this problem. For each minibatch, the controller stochastically decides whether to use the discrete or the continuous weights based on a random variable π_n with probability p_n, where n indexes blocks of k minibatch updates, i.e., p_n is updated only once every k minibatch updates. π_n is a Bernoulli random variable sampled with probability p_n, π_n ∼ Bernoulli(p_n), and the model uses either the discrete or the continuous attention depending on π_n. We start the training procedure with p_0 = 1, and during training p_n is annealed toward 0 following a fixed decay schedule. We can rewrite the weights w_t as in Equation (14), where they are expressed as a combination of the continuous attention weights w̄_t and the discrete attention weights ŵ_t, with π_n being the binary variable that chooses between them: w_t = π_n w̄_t + (1 − π_n) ŵ_t. (14) | 1607.00036#13 | 1607.00036#15 | 1607.00036 | [
"1511.02301"
] |
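A hedged sketch of the curriculum gate of Eq. (14): one Bernoulli draw per block of minibatches decides whether the continuous or the discrete weights are used. The annealing schedule for p_n is left to the caller, since the exact schedule is not reproduced above; function names are ours.

```python
import numpy as np

def curriculum_attention(w_bar, w_hat, p_n, rng):
    """Eq. (14): sample pi_n ~ Bernoulli(p_n) and return
    w_t = pi_n * w_bar_t + (1 - pi_n) * w_hat_t."""
    pi_n = rng.binomial(1, p_n)          # 1 -> use continuous, 0 -> use discrete
    return pi_n * w_bar + (1 - pi_n) * w_hat

rng = np.random.RandomState(0)
w_bar = np.array([0.2, 0.5, 0.3])        # continuous attention weights
w_hat = np.array([0.0, 1.0, 0.0])        # discrete (one-hot) attention weights
print(curriculum_attention(w_bar, w_hat, p_n=0.8, rng=rng))
```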
1607.00036#15 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | With this curriculum learning strategy, at the beginning of training the model learns to use the memory mainly with the continuous attention; as we anneal p_n, the model relies more and more on the discrete attention. # 4.3 Regularizing D-NTM If the controller of the D-NTM is a recurrent neural network, we find it important to regularize the training of the D-NTM so as to avoid suboptimal solutions in which the D-NTM ignores the memory and works as a simple recurrent neural network. Read-Write Consistency Regularizer One such suboptimal solution we have observed in our preliminary experiments with the proposed D-NTM is that the D-NTM uses the address part A of the memory matrix simply as an additional weight matrix, rather than as a means of accessing the content part C. We found that this pathological case can be effectively avoided by encouraging the read head to point to a memory cell that has also been pointed to by the write head. This can be implemented as the following regularization term: R_rw(w^r, w^w) = λ Σ_{t'=1}^{T} ||1 − (1/t' Σ_{t=1}^{t'} w^w_t)^⊤ w^r_{t'}||²_2, (15) where w^w_t are the write weights and w^r_t are the read weights. | 1607.00036#14 | 1607.00036#16 | 1607.00036 | [
"1511.02301"
] |
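The read-write consistency regularizer can be sketched as below. This follows our reconstruction of Eq. (15), including the 1/t' normalization of the cumulative write weights, so treat the exact form as an assumption rather than the paper's definitive implementation.

```python
import numpy as np

def read_write_consistency(w_write, w_read, lam=1.0):
    """Penalize read weights that do not point at previously written cells.
    w_write, w_read: (T, n_cells) arrays of write / read attention weights."""
    T = w_write.shape[0]
    # normalized cumulative write weights up to each step t'
    cum_write = np.cumsum(w_write, axis=0) / np.arange(1, T + 1)[:, None]
    # per-step mismatch between "what was written so far" and "what is read now"
    mismatch = 1.0 - np.sum(cum_write * w_read, axis=1)
    return lam * np.sum(mismatch ** 2)

T, n_cells = 5, 8
rng = np.random.RandomState(1)
w_w = rng.dirichlet(np.ones(n_cells), size=T)    # each row sums to 1
w_r = rng.dirichlet(np.ones(n_cells), size=T)
print(read_write_consistency(w_w, w_r))
```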
1607.00036#16 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | Next Input Prediction as Regularization Temporal structure is a strong signal that should be exploited by a controller based on a recurrent neural network. We exploit this structure by letting the controller predict the input in the future: we maximize the predictability of the next input by the controller during training, which is equivalent to minimizing the following regularizer: R_pred(W) = − Σ_{t=0}^{T} log p(x_{t+1} | x_t, w^r_t, w^w_t, e_t, M_t; θ), where x_t is the current input and x_{t+1} is the input at the next timestep. We find this regularizer to be effective in our preliminary experiments and use it for the bAbI tasks. | 1607.00036#15 | 1607.00036#17 | 1607.00036 | [
"1511.02301"
] |
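A minimal sketch of the next-input-prediction regularizer: given logits produced by some predictor driven by the controller state (the predictor itself is not shown and is an assumption), the regularizer is the summed negative log-probability of the actual next inputs.

```python
import numpy as np

def next_input_prediction_loss(pred_logits, next_token_ids):
    """R_pred: summed negative log-probability of x_{t+1}.
    pred_logits: (T, V) unnormalized scores; next_token_ids: (T,) integer targets."""
    z = pred_logits - pred_logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))  # log-softmax
    return -np.sum(log_probs[np.arange(len(next_token_ids)), next_token_ids])

logits = np.random.randn(4, 10)        # 4 timesteps, vocabulary of 10 symbols
targets = np.array([3, 1, 7, 0])
print(next_input_prediction_loss(logits, targets))
```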
1607.00036#17 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | # 5 Related Work A recurrent neural network (RNN), which is used as the controller in the proposed D-NTM, has an implicit memory in the form of recurring hidden states. Even with this implicit memory, a vanilla RNN is known to have difficulties in storing information over long time-spans (Bengio et al., 1994; Hochreiter, 1991). Long short-term memory (LSTM, (Hochreiter and Schmidhuber, 1997)) and gated recurrent units (GRU, (Cho et al., 2014)) have been found to address this issue. However, all of these models based solely on RNNs have been found to be limited when they are used to solve, e.g., algorithmic tasks and episodic question-answering. | 1607.00036#16 | 1607.00036#18 | 1607.00036 | [
"1511.02301"
] |
1607.00036#18 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | In addition to the finite random-access memory of the neural Turing machine, on which the D-NTM is based, other data structures have been proposed as external memory for neural networks. In (Sun et al., 1997; Grefenstette et al., 2015; Joulin and Mikolov, 2015), a continuous, differentiable stack was proposed. In (Zaremba et al., 2015; Zaremba and Sutskever, 2015), grid and tape storage are used. These approaches differ from the NTM in that their memory is unbounded and can grow indefinitely. On the other hand, they are often not randomly accessible. | 1607.00036#17 | 1607.00036#19 | 1607.00036 | [
"1511.02301"
] |
1607.00036#19 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | Zhang et al. (2015) proposed a variation of the NTM with a structured memory, and reported experiments on copy and associative recall tasks with this model. In parallel to our work, (Yang, 2016) and (Graves et al., 2016) proposed new memory access mechanisms to improve NTM-type models; (Graves et al., 2016) reported superior results on a diverse set of algorithmic learning tasks. Memory networks (Weston et al., 2015b) form another family of neural networks with external memory. In this class of neural networks, information is stored explicitly as it is (in the form of its continuous representation) in the memory, without being erased or modified | 1607.00036#18 | 1607.00036#20 | 1607.00036 | [
"1511.02301"
] |
1607.00036#20 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | during an episode. Memory networks and their variants have been applied to various tasks successfully (Sukhbaatar et al., 2015; Bordes et al., 2015; Dodge et al., 2015; Xiong et al., 2016; Chandar et al., 2016). Miller et al. (2016) have also independently proposed the idea of having separate key and value vectors for memory networks. A similar addressing mechanism is also explored in (Reed and de Freitas, 2016) in the context of learning program traces. Another related family of models is attention-based neural networks. Neural networks with continuous or discrete attention over an input have shown promising results on a variety of challenging tasks, including machine translation (Bahdanau et al., 2015; Luong et al., 2015), speech recognition (Chorowski et al., 2015), machine reading comprehension (Hermann et al., 2015) and image caption generation (Xu et al., 2015). These two families, memory networks and attention-based networks, are however clearly distinguishable from the D-NTM by the fact that they do not modify the content of the memory. # 6 Experiments on Episodic Question-Answering In this section, we evaluate the proposed D-NTM on the synthetic episodic question-answering task called Facebook bAbI (Weston et al., 2015a). We use the version of the dataset that contains 10k training examples per sub-task, provided by Facebook.1 For each episode, the D-NTM reads a sequence of factual sentences followed by a question, all of which are given as natural language sentences. The D-NTM is expected to store and retrieve relevant information in the memory in order to answer the question based on the presented facts. # 6.1 Model and Training Details We use the same hyperparameters for all the tasks for a given model. We use a recurrent neural network with GRU units to encode a variable-length fact into a fixed-size vector representation. This allows the D-NTM to exploit the word ordering in each fact, unlike when facts are encoded as bag-of-words vectors. We experiment with both a recurrent and a feedforward neural network as the controller that generates the read and | 1607.00036#19 | 1607.00036#21 | 1607.00036 | [
"1511.02301"
] |
1607.00036#21 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | write weights. The controller has 180 units. We train our feedforward controller using the noisy-tanh activation function (Gulcehre et al., 2016), since we were experiencing training difficulties with the sigmoid and tanh activation functions. We use both single-step and three-step addressing with our GRU controller. The memory contains 120 memory cells. Each memory cell consists of a 16-dimensional address part and a 28-dimensional content part. 1 https://research.facebook.com/researchers/1543934539189348 | 1607.00036#20 | 1607.00036#22 | 1607.00036 | [
"1511.02301"
] |
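For concreteness, the memory layout quoted above (120 cells, a 16-dimensional address part and a 28-dimensional content part, with a 180-unit controller) can be sketched as follows; the initialization scheme below is our own choice, not the paper's.

```python
import numpy as np

# Sizes quoted in the text; initialization is an illustrative assumption.
N_CELLS, ADDR_DIM, CONTENT_DIM, CONTROLLER_UNITS = 120, 16, 28, 180

address = 0.1 * np.random.randn(N_CELLS, ADDR_DIM)   # learnable address part A
content = np.zeros((N_CELLS, CONTENT_DIM))           # content part C, rewritten during an episode
memory = np.concatenate([address, content], axis=1)  # one row per memory cell
print(memory.shape)                                   # (120, 44)
```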
1607.00036#22 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | We set aside a random 10% of the training examples as a validation set for each sub-task and use it for early stopping and hyperparameter search. We train one D-NTM for each sub-task, using Adam (Kingma and Ba, 2014) with its learning rate set to 0.003 and 0.007 for the GRU and feedforward controllers, respectively. The size of each minibatch is 160, and each minibatch is constructed uniformly at random from the training set. | 1607.00036#21 | 1607.00036#23 | 1607.00036 | [
"1511.02301"
] |
1607.00036#23 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | # 6.2 Goals The goal of this experiment is three-fold. First, we present for the first time the performance of a memory-based network that can both read and write dynamically on the Facebook bAbI tasks2. We aim to understand whether a model that has to learn to write an incoming fact to the memory, rather than storing it as it is, is able to work well, and to do so, we compare both the original NTM and the proposed D-NTM against an LSTM-RNN. Second, we investigate the effect of having to learn how to write. The fact that the NTM needs to learn to write likely has an adverse effect on the overall performance when compared to, for instance, end-to-end memory networks (MemN2N, (Sukhbaatar et al., 2015)) and the dynamic memory network (DMN+, (Xiong et al., 2016)), both of which simply store the incoming facts as they are. We quantify this effect in this experiment. Lastly, we show the effect of the proposed learnable addressing scheme. We further explore the effect of using a feedforward controller instead of the GRU controller. In addition to the explicit memory, the GRU controller can use its own internal hidden state as memory, whereas the feedforward controller must rely solely on the explicit memory, as it is the only memory available. | 1607.00036#22 | 1607.00036#24 | 1607.00036 | [
"1511.02301"
] |
1607.00036#24 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | # 6.3 Results and Analysis In Table 1, we first observe that the NTMs are indeed capable of solving this type of episodic question-answering better than the vanilla LSTM-RNN. Although the availability of explicit memory in the NTM already suggested this result, we note that this is the first time neural Turing machines have been used on this specific task. All the variants of the NTM with the GRU controller outperform the vanilla LSTM-RNN. However, not all of them perform equally well. First, it is clear that the proposed dynamic NTM (D-NTM) using the GRU controller outperforms the original NTM with the GRU controller (NTM, CBA-only NTM vs. continuous D-NTM, Discrete D-NTM). As discussed earlier, the learnable addressing scheme of the D-NTM allows the controller to access the memory slots by location in a potentially nonlinear way. | 1607.00036#23 | 1607.00036#25 | 1607.00036 | [
"1511.02301"
] |
1607.00036#25 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | We expect 2Similar experiments were done in the recently published (Graves et al., 2016), but D-NTM results for bAbI tasks were already available in arxiv by that time. 11 Task 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 Avg.Err. LSTM 0.00 81.90 83.10 0.20 1.20 51.80 24.90 34.10 20.20 30.10 10.30 23.40 6.10 81.00 78.70 51.90 50.10 6.80 90.30 2.10 36.41 MemN2N 0.00 0.30 2.10 0.00 0.80 0.10 2.00 0.90 0.30 0.00 0.10 0.00 0.00 0.10 0.00 51.80 18.60 5.30 2.30 0.00 4.24 DMN+ 0.00 0.30 1.10 0.00 0.50 0.00 2.40 0.00 0.00 0.00 0.00 0.00 0.00 0.20 0.00 45.30 4.20 2.10 0.00 0.00 2.81 1-step LBAâ | 1607.00036#24 | 1607.00036#26 | 1607.00036 | [
"1511.02301"
] |
1607.00036#26 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | NTM 16.30 57.08 74.16 0.00 1.46 23.33 21.67 25.76 24.79 41.46 18.96 25.83 6.67 58.54 36.46 71.15 43.75 3.96 75.89 1.25 31.42 1-step CBA NTM 16.88 55.70 55.00 0.00 20.41 21.04 21.67 21.05 24.17 33.13 31.88 30.00 5.63 59.17 42.30 71.15 43.75 47.50 71.51 0.00 33.60 1-step Soft D-NTM 5.41 58.54 74.58 0.00 1.66 40.20 19.16 12.58 36.66 52.29 31.45 7.70 5.62 60.00 36.87 49.16 17.91 3.95 73.74 2.70 29.51 1-step Discrete D-NTM 6.66 56.04 72.08 0.00 1.04 44.79 19.58 18.46 34.37 50.83 4.16 6.66 2.29 63.75 39.27 51.35 16.04 3.54 64.63 3.12 27.93 3-steps LBAâ | 1607.00036#25 | 1607.00036#27 | 1607.00036 | [
"1511.02301"
] |
1607.00036#27 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | NTM 0.00 61.67 83.54 0.00 0.83 48.13 7.92 25.38 37.80 56.25 3.96 28.75 5.83 61.88 35.62 46.15 43.75 47.50 61.56 0.40 32.85 3-steps CBA NTM 0.00 59.38 65.21 0.00 1.46 54.80 37.70 8.82 0.00 23.75 0.28 23.75 83.13 57.71 21.88 50.00 56.25 47.50 63.65 0.00 32.76 3-steps Soft D-NTM 0.00 46.66 47.08 0.00 1.25 20.62 7.29 11.02 39.37 20.00 30.62 5.41 7.91 58.12 36.04 46.04 21.25 6.87 75.88 3.33 24.24 3-steps Discrete D-NTM 0.00 62.29 41.45 0.00 1.45 11.04 5.62 0.74 32.50 20.83 16.87 4.58 5.00 60.20 40.26 45.41 9.16 1.66 76.66 0.00 21.79 Table 1: Test error rates (%) on the 20 bAbI QA tasks for models using 10k training examples with the GRU and feedforward controller. FF stands for the experiments that are conducted with feedforward controller. | 1607.00036#26 | 1607.00036#28 | 1607.00036 | [
"1511.02301"
] |
1607.00036#28 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | Note that LBA* refers to the NTM that uses both LBA and CBA. In this table, we compare multi-step vs. single-step addressing, the original NTM with location-based plus content-based addressing vs. content-based addressing only, and discrete vs. continuous addressing on bAbI. We expect the learnable addressing to help with tasks that have non-trivial access patterns, and, as anticipated, we see a large gain with the D-NTM over the original NTM in the tasks of, for instance, 12 - Conjunction and 17 - Positional Reasoning. Among the recurrent variants of the proposed D-NTM, we notice significant improvements from using discrete addressing over continuous addressing. We conjecture that this is due to certain types of tasks that require precise/sharp retrieval of a stored fact, in which case continuous addressing is at a disadvantage relative to discrete addressing. This is evident from the observation that the D-NTM with discrete addressing significantly outperforms the one with continuous addressing in the tasks of 8 - Lists/Sets and 11 - Basic Coreference. Furthermore, this is in line with an earlier observation in (Xu et al., 2015), where discrete addressing was found to generalize better in the task of image caption generation. In Table 2, we also observe that the D-NTM with the feedforward controller and discrete attention performs worse than the LSTM and the D-NTM with continuous attention. However, when the proposed curriculum strategy from Sec. 4.2 is used, the average test error drops from 68.30 to 37.79. We empirically found training the feedforward controller more difficult than training the recurrent controller. We train our feedforward-controller-based models four times longer (in terms of the number of updates) than the recurrent-controller-based ones in order to ensure that they have converged for most of the tasks. On the other hand, the models trained with the GRU controller overfit on the bAbI tasks very quickly; for example, on tasks 3 and 16 the feedforward-controller-based model underfits (i.e., has high training loss) at the end of training, whereas with the same number of units the model with the GRU controller can overfit on those tasks after only 3,000 updates. We notice a | 1607.00036#27 | 1607.00036#29 | 1607.00036 | [
"1511.02301"
] |
1607.00036#29 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | significant performance gap when our results are compared to the variants of the memory network (Weston et al., 2015b) (MemN2N and DMN+). We | 1607.00036#28 | 1607.00036#30 | 1607.00036 | [
"1511.02301"
] |
1607.00036#30 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | attribute this gap to the difficulty of learning to manipulate and store a complex input. Graves et al. (2016) have also reported results with the differentiable neural computer (DNC) and the NTM on the bAbI dataset. However, their experimental setup is different from the setup we use in this paper, which makes direct comparisons more difficult. The main differences, broadly, are the following: as input representations to the controller, they used the embedding of each word, whereas we used the representation obtained with a GRU for each fact; and they report only joint-training results, whereas we trained our models on the individual tasks separately. Nevertheless, despite the architectural differences (see Table 1 of the DNC paper), the mean of their NTM results, 28.5% error with a standard deviation of +/- 2.9, is very close to the 31.4% error we obtain. Task 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 Avg.Err. FF Soft D-NTM 4.38 27.5 71.25 0.00 1.67 1.46 6.04 1.70 0.63 19.80 0.00 6.25 7.5 17.5 0.0 49.65 1.25 0.24 39.47 0.0 12.81 FF Discrete D-NTM 81.67 76.67 79.38 78.65 83.13 48.76 54.79 69.75 39.17 56.25 78.96 82.5 75.0 78.75 71.42 71.46 43.75 48.13 71.46 76.56 68.30 FF Discrete* D-NTM 14.79 76.67 70.83 44.06 17.71 48.13 23.54 35.62 14.38 56.25 39.58 32.08 18.54 24.79 39.73 71.15 43.75 2.92 71.56 9.79 37.79 Table 2: Test error rates (%) on the 20 bAbI QA tasks for models using 10k training examples with the feedforward controller. # 6.4 Visualization of Discrete Attention We visualize the attention of the D-NTM with the GRU controller and discrete attention in Figure 2. | 1607.00036#29 | 1607.00036#31 | 1607.00036 | [
"1511.02301"
] |
1607.00036#31 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | From this example, we can see that the D-NTM has learned to find the correct supporting fact even without any supervision for the particular story in the visualization. # 6.5 Learning Curves for the Recurrent Controller In Figure 3, we compare the learning curves of the continuous- and discrete-attention D-NTM models with the recurrent controller on Task 1. Surprisingly, the discrete-attention D-NTM converges faster than the continuous-attention model. The main difficulty of learning with continuous attention is due to the fact that learning to write with continuous attention can be challenging. | 1607.00036#30 | 1607.00036#32 | 1607.00036 | [
"1511.02301"
] |
1607.00036#32 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | (Figure 2 panels: Write and Read; the story facts shown along the axis read: Antoine is bored / Jason is hungry / Jason travelled to the kitchen / Antoine travelled to the garden / Jason got the apple there / Yann is tired / Yann journeyed to the bedroom; question: Why did Yann go to the bedroom?) Figure 2: An example view of the discrete attention over the memory slots for both the read (left) and write (right) heads. The x-axis denotes the memory locations being accessed and the y-axis corresponds to the content of the particular memory location. In this figure, we visualize the discrete-attention model with 3 reading steps on task 20. It is easy to see that the NTM with discrete attention accesses the relevant part of the memory. We only visualize the last of the three steps for writing, because with discrete attention the model usually just reads the empty slots of the memory. | 1607.00036#31 | 1607.00036#33 | 1607.00036 | [
"1511.02301"
] |
1607.00036#33 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | (Figure 3 legend: training NLL of the hard-attention model vs. training NLL of the soft-attention model.) Figure 3: A visualization of the learning curves of the continuous and discrete D-NTM models trained on Task 1 using 3 steps. In most tasks, we observe that the discrete-attention model with the GRU controller converges faster than the continuous-attention model. # 6.6 Training with Continuous Attention and Testing with Discrete Attention In Table 3, we provide results to investigate the effect of using the discrete attention model at test time for a model trained with the feedforward controller and continuous attention. The Discrete* D-NTM model bootstraps the discrete attention with the continuous attention, using the curriculum method that we have introduced in Section 4.2. The Discrete† D-NTM model is the continuous-attention model which uses discrete attention at test time. We observe that the Discrete† D-NTM model, which is trained with continuous attention, outperforms the Discrete D-NTM model. continuous D-NTM Discrete D-NTM Discrete* D-NTM Discrete† D-NTM 14.79 4.38 76.67 27.5 70.83 71.25 44.06 0.00 17.71 1.67 48.13 1.46 23.54 6.04 35.62 1.70 14.38 0.63 56.25 19.80 39.58 0.00 32.08 6.25 18.54 7.5 24.79 17.5 39.73 0.0 71.15 49.65 43.75 1.25 2.92 0.24 71.56 39.47 9.79 0.0 12.81 37.79 Table 3: Test error rates (%) on the 20 bAbI QA tasks for models using 10k training examples with the feedforward controller. The Discrete* D-NTM model bootstraps the discrete attention with the continuous attention, using the curriculum method that we have introduced in Section 4.2. The Discrete† D-NTM model is the continuous-attention model which uses discrete attention at test time. | 1607.00036#32 | 1607.00036#34 | 1607.00036 | [
"1511.02301"
] |
1607.00036#34 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | # 6.7 D-NTM with BoW Fact Representation In Table 4, we provide results for the D-NTM using BoW with positional encoding (PE) (Sukhbaatar et al., 2015) as the representation of the input facts. The fact representations are provided as input to the GRU controller. In agreement with our results using the GRU fact representation, with the BoW fact representation we observe improvements from multi-step addressing over single-step addressing and from discrete addressing over continuous addressing. | 1607.00036#33 | 1607.00036#35 | 1607.00036 | [
"1511.02301"
] |
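The BoW-with-positional-encoding fact representation referenced above can be sketched as follows, following our reading of the positional-encoding scheme of Sukhbaatar et al. (2015); the exact indexing convention and function names are assumptions.

```python
import numpy as np

def positional_encoding(J, d):
    """l[j, k] = (1 - j/J) - (k/d) * (1 - 2j/J), with 1-based word index j and
    embedding index k, for a J-word fact and d-dimensional embeddings."""
    j = np.arange(1, J + 1)[:, None] / J
    k = np.arange(1, d + 1)[None, :] / d
    return (1.0 - j) - k * (1.0 - 2.0 * j)

def encode_fact(word_embeddings):
    """Fact vector = sum_j l_j * e_j (elementwise), i.e. an order-aware bag of words."""
    J, d = word_embeddings.shape
    return (positional_encoding(J, d) * word_embeddings).sum(axis=0)

fact = np.random.randn(6, 28)          # a 6-word fact with 28-dim word embeddings
print(encode_fact(fact).shape)         # (28,)
```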
1607.00036#35 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | Task D-NTM(1-step) D-NTM(1-step) D-NTM(3-steps) D-NTM(3-steps) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 Avg Table 4: Test error rates (%) on the 20 bAbI QA tasks for models using 10k training examples with the GRU controller, with fact representations obtained using BoW with positional encoding. # 7 Experiments on Sequential pMNIST In the sequential MNIST task, the pixels of the MNIST digits are provided to the model in scan-line order, left to right and top to bottom (Le et al., 2015). At the end of the sequence of pixels, the model predicts the label of the digit shown in the sequence. We experiment with the D-NTM on a variation of sequential MNIST in which the order of the pixels is randomly shuffled; we call this task permuted MNIST (pMNIST). An important contribution of this task to our paper, in particular, is to measure the model's ability to perform well when dealing with long-term dependencies. We report our results in Table 5 and observe improvements over the other models that we compare against. In Table 5, "discrete addressing with MAB" refers to the D-NTM model using REINFORCE with a baseline computed from moving averages of the reward, and "discrete addressing with IB" refers to the D-NTM using REINFORCE with the input-based baseline. In Figure 4, we show the learning curves of the input-based baseline (ibb) and regular REINFORCE with the moving-averages baseline (mab) on the pMNIST task. We observe that the input-based baseline is in general much easier to optimize and converges faster, but it can also quickly overfit to the task. Note that recurrent batch normalization with LSTM (Cooijmans et al., 2017) reaches 95.6% accuracy and performs much better than the other algorithms; it is possible to use recurrent batch normalization in our model as well and potentially improve our results on this task. In all our experiments on the sequential MNIST task, we try to keep the capacity of our model close to that of our baselines. We use 100 GRU units in the controller and each | 1607.00036#34 | 1607.00036#36 | 1607.00036 | [
"1511.02301"
] |
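The pMNIST setup described above amounts to flattening each digit to scan-line order and applying one fixed random pixel permutation shared across all examples; a minimal sketch (the seed and function name are our own):

```python
import numpy as np

def make_pmnist(images, seed=0):
    """Flatten 28x28 digits to scan-line order, then apply one fixed random pixel
    permutation shared by every example (the pMNIST setting)."""
    rng = np.random.RandomState(seed)
    perm = rng.permutation(28 * 28)            # same permutation for all examples
    flat = images.reshape(len(images), -1)     # (N, 784), left-to-right, top-to-bottom
    return flat[:, perm]                       # (N, 784) permuted pixel sequences

digits = np.random.rand(2, 28, 28)             # stand-in for real MNIST images
print(make_pmnist(digits).shape)               # (2, 784)
```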
1607.00036#36 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | D-NTM discrete MAB 89.6, D-NTM discrete IB 92.3, Soft D-NTM 93.4, NTM 90.9, I-RNN (Le et al., 2015) 82.0, Zoneout (Krueger et al., 2016) 93.1, LSTM (Krueger et al., 2016) 89.8, Unitary-RNN (Arjovsky et al., 2016) 91.4, Recurrent Dropout (Krueger et al., 2016) 92.5, Recurrent Batch Normalization (Cooijmans et al., 2017) 95.6. Table 5: Sequential pMNIST (test accuracy, %). | 1607.00036#35 | 1607.00036#37 | 1607.00036 | [
"1511.02301"
] |
1607.00036#37 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | (Figure 4 legend: validation and training learning curves of ibb and mab.) Figure 4: We compare the learning curves of our D-NTM model using discrete attention on the pMNIST task with the input-based baseline and the regular REINFORCE baseline. The x-axis is the loss and the y-axis is the number of epochs. content vector has size 8, with address vectors of size 8. We use a learning rate of 1e-3 and train the model with the Adam optimizer. We did not use the read-write consistency regularization in any of our models. # 8 Stanford Natural Language Inference (SNLI) Task The SNLI task (Bowman et al., 2015) is designed to test the ability of different machine learning algorithms to infer the entailment between two different statements. The two statements can either entail, contradict or be neutral to each other. In this paper, we feed the premise followed by an end-of-premise (EOP) token and then the hypothesis as a single input sequence to the model. Rocktäschel et al. (2015) similarly trained their model by providing the premise and the hypothesis in the same way. This ensures that the performance of our model does not rely on particular preprocessing or architectural engineering; rather, we rely mainly on the model's ability to represent the sequence and the dependencies in the input sequence efficiently. | 1607.00036#36 | 1607.00036#38 | 1607.00036 | [
"1511.02301"
] |
1607.00036#38 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | The model proposed by Rocktäschel et al. (2015) applies attention over its previous hidden states over the premise when it reads the hypothesis. In Table 6, we report results for different models with or without recurrent dropout (Semeniuta et al., 2016) and layer normalization (Ba et al., 2016). The input vocabulary size we use in this paper is 41200, and we use GloVe (Pennington et al., 2014) embeddings to initialize the input embeddings. We use a GRU controller with 300 units, and the size of the embeddings is also 300. We optimize our models with Adam. We performed a hyperparameter search to find the optimal learning rate via random search, sampling the learning rate in log-space between 1e-2 and 1e- | 1607.00036#37 | 1607.00036#39 | 1607.00036 | [
"1511.02301"
] |
1607.00036#39 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | 4 for each model. We use layer normalization in our controller (Ba et al., 2016). We have observed significant improvements from using layer normalization and dropout on this task, mainly because overfitting is a severe problem on SNLI. The D-NTM achieves better performance than both the LSTM and the NTM. Test Acc (%): Word by Word Attention (Rocktäschel et al., 2015) 83.5, Word by Word Attention two-way (Rocktäschel et al., 2015) 83.2, LSTM + LayerNorm + Dropout 81.7, NTM + LayerNorm + Dropout 81.8, D-NTM + LayerNorm + Dropout 82.3, LSTM (Bowman et al., 2015) 77.6, D-NTM 80.9, NTM 80.2. Table 6: Stanford Natural Language Inference task. | 1607.00036#38 | 1607.00036#40 | 1607.00036 | [
"1511.02301"
] |
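The SNLI input construction described above (premise, end-of-premise token, hypothesis fed as one sequence) is simple to sketch; the token ids below are purely illustrative.

```python
def build_snli_input(premise_ids, hypothesis_ids, eop_id):
    """Single input sequence: premise, end-of-premise (EOP) token, hypothesis.
    The real vocabulary has 41200 entries; these ids are placeholders."""
    return premise_ids + [eop_id] + hypothesis_ids

print(build_snli_input([12, 7, 301], [55, 9], eop_id=1))   # [12, 7, 301, 1, 55, 9]
```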
1607.00036#40 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | # 9 NTM Toy Tasks We explore the possibility of using the D-NTM to solve algorithmic tasks such as the copy and associative recall tasks. We train our model on the same sequence lengths as in the experiments of (Graves et al., 2014) and report our results in Table 7. We find that the D-NTM using continuous attention can successfully learn both the "Copy" and "Associative Recall" tasks. In Table 7, we train our model on sequences of the same length as in the experiments of (Graves et al., 2014) and test it on sequences of the maximum length seen during training. We consider a model successful on copy or associative recall if its validation cost (binary cross-entropy) is lower than 0.02 on sequences of the maximum length seen during training. We set the threshold to 0.02 because, empirically, we observe that models with higher validation costs generalize poorly to longer sequences. The "D-NTM discrete" model in this table is trained with REINFORCE using moving averages to estimate the baseline. Copy: Soft D-NTM Success, D-NTM discrete Success, NTM Success; Associative Recall: Soft D-NTM Success, D-NTM discrete Failure, NTM Success. Table 7: NTM toy tasks. | 1607.00036#39 | 1607.00036#41 | 1607.00036 | [
"1511.02301"
] |
1607.00036#41 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | On both the copy and associative recall tasks, we try to keep the capacity of our model close to that of our baselines. We use 100 GRU units in the controller, and each content vector has a size of 8 with address vectors of size 8. We use a learning rate of 1e-3 and train the model with the Adam optimizer. We did not use the read-write consistency regularization in any of our models. For the model with discrete attention, we use REINFORCE with the baseline computed using moving averages. # 10 Conclusion and Future Work In this paper we extend neural Turing machines (NTM) by introducing a learnable addressing scheme which allows the NTM to perform highly nonlinear location-based addressing. This extension, which we refer to as the dynamic NTM (D-NTM), is extensively tested with various configurations, including different addressing mechanisms (continuous vs. discrete) and different numbers of addressing steps, on the Facebook bAbI tasks. This is the first time an NTM-type model has been tested on this task, and we observe that the NTM, especially the proposed D-NTM, performs better than the vanilla LSTM-RNN. Furthermore, the experiments revealed that discrete addressing works better than continuous addressing with the GRU controller, and our analysis reveals that this is the case when the task requires precise retrieval of memory content. Our experiments show that the NTM-based models can be weaker than other variants of memory networks which do not learn but have an explicit mechanism of storing | 1607.00036#40 | 1607.00036#42 | 1607.00036 | [
"1511.02301"
] |
1607.00036#42 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | incoming facts as they are. We conjecture that this is due to the difficulty of learning how to write, manipulate and delete the content of memory. Despite this difficulty, we find the NTM-based approach, such as the proposed D-NTM, to be a better, future-proof approach, because it can scale to a much longer horizon (where it becomes impossible to explicitly store all the experiences). On the pMNIST task, we show that our model can outperform other similar approaches proposed to deal with long-term dependencies. On the copy and associative recall tasks, we show that our model can solve the algorithmic problems that NTM-type models are designed to solve. Finally, we showed results on the SNLI task, where our model performed better than the NTM and the LSTM. Our results do not involve any task-specific modifications, and they could be improved further by structuring the architecture of our model according to the SNLI task. The success of both the learnable addressing and the discrete addressing scheme suggests two future research directions. First, both of these schemes should be tried in a wider array of memory-based models, as they are not specific to neural Turing machines. Second, the proposed D-NTM needs to be evaluated on a diverse set of applications, such as text summarization (Rush et al., 2015), visual question-answering (Antol et al., 2015) and machine translation, in order to draw a more concrete conclusion. | 1607.00036#41 | 1607.00036#43 | 1607.00036 | [
"1511.02301"
] |
1607.00036#43 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | # References Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. VQA: visual question answering. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 2425â 2433, 2015. Martin Arjovsky, Amar Shah, and Yoshua Bengio. Unitary evolution recurrent neural networks. ICML 2016, 2016. | 1607.00036#42 | 1607.00036#44 | 1607.00036 | [
"1511.02301"
] |
1607.00036#44 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In Proceedings Of The International Con- ference on Representation Learning (ICLR 2015), 2015. Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difï¬ | 1607.00036#43 | 1607.00036#45 | 1607.00036 | [
"1511.02301"
] |
1607.00036#45 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | cult. Neural Networks, IEEE Transactions on, 5(2): 157â 166, 1994. Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075, 2015. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326, 2015. | 1607.00036#44 | 1607.00036#46 | 1607.00036 | [
"1511.02301"
] |
1607.00036#46 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | 20 Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, and Yoshua Bengio. Hierarchical memory networks. arXiv preprint arXiv:1605.07427, 2016. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder- decoder for statistical machine translation. In EMNLP, 2014. Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua arXiv preprint Bengio. arXiv:1506.07503, 2015. Attention-based models for speech recognition. Tim Cooijmans, Nicolas Ballas, C´esar Laurent, and Aaron Courville. Recurrent batch normalization. ICLR 2017, Toullone France, 2017. Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexan- der Miller, Arthur Szlam, and Jason Weston. | 1607.00036#45 | 1607.00036#47 | 1607.00036 | [
"1511.02301"
] |
1607.00036#47 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | Evaluating prerequisite qualities for learning end-to-end dialog systems. CoRR, abs/1511.06931, 2015. Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. Book in prepa- ration for MIT Press, 2016. URL http://www.deeplearningbook.org. Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014. Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwi´nska, Sergio G´omez Colmenarejo, Edward Grefenstette, Tiago Ra- malho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471â 476, 2016. | 1607.00036#46 | 1607.00036#48 | 1607.00036 | [
"1511.02301"
] |
1607.00036#48 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems, pages 1819â 1827, 2015. Caglar Gulcehre, Marcin Moczulski, Misha Denil, and Yoshua Bengio. Noisy activation functions. ICML 2016, New York, 2016. Karl Moritz Hermann, Tom´aË s KoË cisk`y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. | 1607.00036#47 | 1607.00036#49 | 1607.00036 | [
"1511.02301"
] |
1607.00036#49 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | Teaching machines to read and comprehend. arXiv preprint arXiv:1506.03340, 2015. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The goldilocks princi- ple: Reading childrenâ s books with explicit memory representations. arXiv preprint arXiv:1511.02301, 2015. Sepp Hochreiter. Untersuchungen zu dynamischen neuronalen netzen. Diploma, Tech- nische Universit¨at M¨unchen, page 91, 1991. Sepp Hochreiter and J¨urgen Schmidhuber. Long short-term memory. Neural Computa- tion, 9(8):1735â 1780, 1997. 21 Peter J. Huber. | 1607.00036#48 | 1607.00036#50 | 1607.00036 | [
"1511.02301"
] |
1607.00036#50 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | Robust estimation of a location parameter. Ann. Math. Statist., 35(1): 73â 101, 03 1964. Inferring algorithmic patterns with stack- augmented recurrent nets. In Advances in Neural Information Processing Systems, pages 190â 198, 2015. Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. David Krueger, Tegan Maharaj, J´anos Kram´ar, Mohammad Pezeshki, Nicolas Bal- las, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron Courville, et al. Zoneout: Regularizing rnns by randomly preserving hidden activa- tions. arXiv preprint arXiv:1606.01305, 2016. Quoc V Le, Navdeep Jaitly, and Geoffrey E Hinton. A simple way to initialize recurrent networks of rectiï¬ ed linear units. arXiv preprint arXiv:1504.00941, 2015. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-based neural machine translation. In Proceedings Of The Conference on Empirical Methods for Natural Language Processing (EMNLP 2015), 2015. | 1607.00036#49 | 1607.00036#51 | 1607.00036 | [
"1511.02301"
] |
1607.00036#51 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. Key-value memory networks for directly reading documents. CoRR, abs/1606.03126, 2016. URL http://arxiv.org/abs/1606.03126. Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. International Conference on Machine Learning, ICML, 2014. Vinod Nair and Geoffrey E Hinton. | 1607.00036#50 | 1607.00036#52 | 1607.00036 | [
"1511.02301"
] |
1607.00036#52 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | Rectiï¬ ed linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), pages 807â 814, 2010. Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vec- tors for word representation. In EMNLP, volume 14, pages 1532â 1543, 2014. Jack W Rae, Jonathan J Hunt, Tim Harley, Ivo Danihelka, Andrew Senior, Greg Wayne, Alex Graves, and Timothy P Lillicrap. | 1607.00036#51 | 1607.00036#53 | 1607.00036 | [
"1511.02301"
] |
1607.00036#53 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | Scaling memory-augmented neural networks with sparse reads and writes. In Advances in NIPS. 2016. Scott Reed and Nando de Freitas. Neural programmer-interpreters. ICLR 2016, 2016. Tim Rockt¨aschel, Edward Grefenstette, Karl Moritz Hermann, Tom´aË s KoË cisk`y, and Phil Blunsom. Reasoning about entailment with neural attention. arXiv preprint arXiv:1509.06664, 2015. Alexander M. Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Em- pirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 379â 389, 2015. 22 Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lil- ICML 2016, licrap. One-shot learning with memory-augmented neural networks. 2016. Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. Recurrent dropout without memory loss. arXiv preprint arXiv:1603.05118, 2016. Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the 30th AAAI Conference on Artiï¬ cial Intelli- gence (AAAI-16), 2016. | 1607.00036#52 | 1607.00036#54 | 1607.00036 | [
"1511.02301"
] |
1607.00036#54 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end mem- ory networks. arXiv preprint arXiv:1503.08895, 2015. Guo-Zheng Sun, C. Lee Giles, and Hsing-Hen Chen. The neural network pushdown au- tomaton: Architecture, dynamics and training. In Adaptive Processing of Sequences and Data Structures, International Summer School on Neural Networks, pages 296â 345, 1997. Oriol Vinyals and Quoc Le. A neural conversational model. arXiv preprint arXiv:1506.05869, 2015. Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. Towards ai- arXiv preprint complete question answering: a set of prerequisite toy tasks. arXiv:1502.05698, 2015a. Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In Proceedings Of The International Conference on Representation Learning (ICLR 2015), 2015b. In Press. Ronald J. Williams. | 1607.00036#53 | 1607.00036#55 | 1607.00036 | [
"1511.02301"
] |
1607.00036#55 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229â 256, 1992. Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. CoRR, abs/1603.01417, 2016. Kelvin Xu, Jimmy Ba, Ryan Kiros, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. | 1607.00036#54 | 1607.00036#56 | 1607.00036 | [
"1511.02301"
] |
1607.00036#56 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | Show, attend and tell: Neural image caption generation with visual attention. In Proceedings Of The International Conference on Represen- tation Learning (ICLR 2015), 2015. Greg Yang. Lie access neural turing machine. arXiv preprint arXiv:1602.08671, 2016. Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, and Aaron Courville. Describing videos by exploiting temporal struc- ture. In Computer Vision (ICCV), 2015 IEEE International Conference on. IEEE, 2015. | 1607.00036#55 | 1607.00036#57 | 1607.00036 | [
"1511.02301"
] |
1607.00036#57 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | 23 Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. CoRR, abs/1505.00521, 2015. Wojciech Zaremba, Tomas Mikolov, Armand Joulin, and Rob Fergus. Learning simple algorithms from examples. arXiv preprint arXiv:1511.07275, 2015. Wei Zhang, Yang Yu, and Bowen Zhou. Structured memory for neural turing machines. arXiv preprint arXiv:1510.03931, 2015. | 1607.00036#56 | 1607.00036#58 | 1607.00036 | [
"1511.02301"
] |