id | title | content | prechunk_id | postchunk_id | arxiv_id | references
---|---|---|---|---|---|---
1511.08630#18 | A C-LSTM Neural Network for Text Classification | In our final settings, we only use one convolutional layer and one LSTM layer for both tasks. For the filter size, we investigated filter lengths of 2, 3 and 4 in two cases: a) a single convolutional layer with the same filter length, and b) multiple convolutional layers with different lengths of filters in parallel. Here we denote the number of filters of length i by n_i for ease of clarifi | 1511.08630#17 | 1511.08630#19 | 1511.08630 | [
"1511.08630"
] |
1511.08630#19 | A C-LSTM Neural Network for Text Classification | cation. For the first case, each n-gram window is transformed into n_i convoluted

Model | Fine-grained (%) | Binary (%) | Reported in
---|---|---|---
SVM | 40.7 | 79.4 | (Socher et al., 2013b)
NBoW | 42.4 | 80.5 | (Kalchbrenner et al., 2014)
Paragraph Vector | 48.7 | 87.8 | (Le and Mikolov, 2014)
RAE | 43.2 | 82.4 | (Socher, Pennington, et al., 2011)
MV-RNN | 44.4 | 82.9 | (Socher et al., 2012)
RNTN | 45.7 | 85.4 | (Socher et al., 2013b)
DRNN | 49.8 | 86.6 | (Irsoy and Cardie, 2014)
CNN-non-static | 48.0 | 87.2 | (Kim, 2014)
CNN-multichannel | 47.4 | 88.1 | (Kim, 2014)
DCNN | 48.5 | 86.8 | (Kalchbrenner et al., 2014)
Molding-CNN | 51.2 | 88.6 | (Lei et al., 2015)
Dependency Tree-LSTM | 48.4 | 85.7 | (Tai et al., 2015)
Constituency Tree-LSTM | 51.0 | 88.0 | (Tai et al., 2015)
LSTM | 46.6 | 86.6 | our implementation
Bi-LSTM | 47.8 | 87.9 | our implementation
C-LSTM | 49.2 | 87.8 | our implementation

Table 1: Comparisons with baseline models on the Stanford Sentiment Treebank. Fine-grained is a 5-class classification task. Binary is a 2-class classification task. The second block contains the recursive models. The third block contains methods related to convolutional neural networks. The fourth block contains methods using LSTM (the first two methods in this block also use syntactic parse trees). The first block contains other baseline methods. The last block is our model.
| 1511.08630#18 | 1511.08630#20 | 1511.08630 | [
"1511.08630"
] |
1511.08630#20 | A C-LSTM Neural Network for Text Classification | features after convolution and the sequence of window representations is fed into LSTM. For the latter case, since the number of windows generated from each convolution layer varies when the filter length varies (see L - k + 1 below equation (3)), we cut the window sequence at the end based on the maximum filter length that gives the shortest number of windows. Each window is represented as the concatenation of outputs from different convolutional layers. We also exploit different combinations of different filter lengths. We further present experimental analysis of the exploration on filter size later. According to the experiments, we choose a single convolutional layer with filter length 3. For SST, the number of filters of length 3 is set to be 150 and the memory dimension of LSTM is set to be 150, too. The word vector layer and the LSTM layer are dropped out with a probability of 0.5. | 1511.08630#19 | 1511.08630#21 | 1511.08630 | [
"1511.08630"
] |
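To make the configuration above concrete, here is a minimal PyTorch sketch of a C-LSTM with the reported SST settings (a single convolutional layer of filter length 3 with 150 feature maps, a 150-dimensional LSTM, and dropout 0.5 on the word-vector and LSTM layers). PyTorch itself, the class and variable names, the ReLU activation and the vocabulary/embedding sizes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CLSTM(nn.Module):
    """Sketch: convolution extracts n-gram features, LSTM reads the window sequence."""
    def __init__(self, vocab_size, emb_dim=300, n_filters=150,
                 filter_len=3, lstm_dim=150, n_classes=5, dropout=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=filter_len)
        self.lstm = nn.LSTM(n_filters, lstm_dim, batch_first=True)
        self.drop = nn.Dropout(dropout)        # applied to word vectors and LSTM output
        self.fc = nn.Linear(lstm_dim, n_classes)

    def forward(self, tokens):                 # tokens: (batch, seq_len) word indices
        x = self.drop(self.embed(tokens))      # (batch, L, emb_dim)
        x = x.transpose(1, 2)                  # Conv1d expects (batch, emb_dim, L)
        x = torch.relu(self.conv(x))           # (batch, n_filters, L - k + 1) windows
        x = x.transpose(1, 2)                  # back to (batch, windows, n_filters)
        _, (h_n, _) = self.lstm(x)             # last hidden state summarizes the sentence
        return self.fc(self.drop(h_n[-1]))     # class scores

model = CLSTM(vocab_size=20000)
logits = model(torch.randint(0, 20000, (8, 40)))   # toy batch of 8 sentences of length 40
```

For TREC, the same structure would be used with 300 filters and a 300-dimensional LSTM, as stated in the next chunk.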
1511.08630#21 | A C-LSTM Neural Network for Text Classification | For TREC, the number of filters is set to be 300 and the memory dimension is set to be 300. The word vector layer and the LSTM layer are dropped out with a probability of 0.5. We also add L2 regularization with a factor of 0.001 to the weights in the softmax layer for both tasks. # 6 Results and Model Analysis In this section, we show our evaluation results on sentiment classification and question type classification tasks. Moreover, we give some model analysis on the filter size configuration. # 6.1 Sentiment Classification | 1511.08630#20 | 1511.08630#22 | 1511.08630 | [
"1511.08630"
] |
1511.08630#22 | A C-LSTM Neural Network for Text Classification | The results are shown in Table 1. We compare our model with a large set of well-performing models on the Stanford Sentiment Treebank. Generally, the baseline models consist of recursive models, convolutional neural network models, LSTM-related models and others. The recursive models employ a syntactic parse tree as the sentence structure and the sentence representation is computed recursively in a bottom-up manner along the parse tree. Under this category, we choose the recursive autoencoder (RAE), matrix-vector (MV-RNN), tensor-based composition (RNTN) and multi-layer stacked (DRNN) recursive neural networks as baselines. Among CNNs, we compare with Kim's (2014) CNN model with fine-tuned word vectors (CNN-non-static) and multi-channels (CNN-multichannel), DCNN with dynamic k-max pool-

Model | Acc | Reported in
---|---|---
SVM | 95.0 | Silva et al. (2011)
Paragraph Vector | 91.8 | Zhao et al. (2015)
Ada-CNN | 92.4 | Zhao et al. (2015)
CNN-non-static | 93.6 | Kim (2014)
CNN-multichannel | 92.2 | Kim (2014)
DCNN | 93.0 | Kalchbrenner et al. (2014)
LSTM | 93.2 | our implementation
Bi-LSTM | 93.0 | our implementation
C-LSTM | 94.6 | our implementation
| 1511.08630#21 | 1511.08630#23 | 1511.08630 | [
"1511.08630"
] |
1511.08630#23 | A C-LSTM Neural Network for Text Classification | Table 2: The 6-way question type classification accuracy on TREC. ing, and Tao's CNN (Molding-CNN) with low-rank tensor based non-linear and non-consecutive convolutions. Among LSTM-related models, we first compare with two tree-structured LSTM models (Dependency Tree-LSTM and Constituency Tree-LSTM) that adjust LSTM to tree-structured network topologies. Then we implement one-layer LSTM and Bi-LSTM by ourselves. Since we could not tune the result of Bi-LSTM to be as good as what has been reported in (Tai et al., 2015) even when following their untied weight configuration, we report our own results. For other baseline methods, we compare against SVM with unigram and bigram features, NBoW with average word vector features and paragraph vector that infers the new paragraph vector for unseen documents. To the best of our knowledge, we achieve the fourth best published result for the 5-class classification task on this dataset. For the binary classification task, we achieve comparable results with respect to the state-of-the-art ones. From Table 1, we have the following observations: (1) Although we did not beat the state-of-the-art ones, as an end-to-end model, the result is still promising and comparable with those models that heavily rely on linguistic annotations and knowledge, especially syntactic parse trees. This indicates C-LSTM will be more feasible for various scenarios. (2) Comparing our results against single CNN and LSTM models shows that LSTM does learn long-term dependencies across sequences of higher-level representations better. We could explore in the future how to learn more compact higher-level representations by replacing standard convolution with other non-linear feature mapping functions or appealing to tree-structured topologies before the convolutional layer. | 1511.08630#22 | 1511.08630#24 | 1511.08630 | [
"1511.08630"
] |
1511.08630#24 | A C-LSTM Neural Network for Text Classification | # 6.2 Question Type Classification The prediction accuracy on TREC question classification is reported in Table 2. We compare our model with a variety of models. The SVM classifier uses unigrams, bigrams, wh-word, head word, POS tags, parser, hypernyms and WordNet synsets as engineered features, plus 60 hand-coded rules. Ada-CNN is a self-adaptive hierarchical sentence model with gating networks. Other baseline models have been introduced in the last task. From Table 2, we have the following observations: (1) Our result consistently outperforms all published neural baseline models, which means that C-LSTM captures intentions of TREC questions well. (2) Our result is close to that of the state-of-the-art SVM that depends on highly engineered features. Such engineered features not only demand human labor but also lead to error propagation from the existing NLP tools, and thus cannot generalize well to other datasets and tasks. With the ability of automatically learning semantic sentence representations, C-LSTM does not require any human-designed features and has better scalability. | 1511.08630#23 | 1511.08630#25 | 1511.08630 | [
"1511.08630"
] |
1511.08630#25 | A C-LSTM Neural Network for Text Classification | # 6.3 Model Analysis Here we investigate the impact of different filter configurations in the convolutional layer on the model performance. In the convolutional layer of our model, filters are used to capture local n-gram features. Intuitively, multiple convolutional layers in parallel with differ- [Figure 2 plot: accuracy (y-axis, roughly 0.920 to 0.950) against filter configuration (x-axis: S:2, S:3, S:4, M:2,3, M:2,4, M:3,4, M:2,3,4)] Figure 2: | 1511.08630#24 | 1511.08630#26 | 1511.08630 | [
"1511.08630"
] |
1511.08630#26 | A C-LSTM Neural Network for Text Classification | Prediction accuracies on TREC questions with different filter size strategies. For the horizontal axis, S means a single convolutional layer with the same filter length, and M means multiple convolutional layers in parallel with different filter lengths. ent filter sizes should perform better than single convolutional layers with the same length filters, in that different filter sizes could exploit features of different n-grams. However, we found in our experiments that a single convolutional layer with filter length 3 always outperforms the other cases. We show in Figure 2 the prediction accuracies on the 6-way question classification task using different filter configurations. Note that we also observe a similar phenomenon in the sentiment classification task. For each filter confi | 1511.08630#25 | 1511.08630#27 | 1511.08630 | [
"1511.08630"
] |
1511.08630#27 | A C-LSTM Neural Network for Text Classification | guration, we report in Figure 2 the best result under extensive grid search on hyperparameters. It is shown that a single convolutional layer with filter length 3 performs best among all filter configurations. For the case of multiple convolutional layers in parallel, it is shown that filter configurations with filter length 3 perform better than those without tri-gram filters, which further confirms that tri-gram features do play a significant role in capturing local features in our tasks. We conjecture that LSTM could learn better semantic sentence representations from sequences of tri-gram features. | 1511.08630#26 | 1511.08630#28 | 1511.08630 | [
"1511.08630"
] |
1511.08630#28 | A C-LSTM Neural Network for Text Classification | # 7 Conclusion and Future Work We have described a novel, unified model called C-LSTM that combines a convolutional neural network with a long short-term memory network (LSTM). C-LSTM is able to learn phrase-level features through a convolutional layer; sequences of such higher-level representations are then fed into the LSTM to learn long-term dependencies. We evaluated the learned semantic sentence representations on sentiment classification and question type classification tasks with very satisfactory results. | 1511.08630#27 | 1511.08630#29 | 1511.08630 | [
"1511.08630"
] |
1511.08630#29 | A C-LSTM Neural Network for Text Classification | We could explore in the future ways to replace the standard convolution with tensor-based operations or tree-structured convolutions. We believe LSTM will benefit from more structured higher-level representations. # References [Bastien et al.2012] Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. 2012. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop. [Cho et al.2014] Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. [Collobert et al.2011] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493-2537. [Denil et al.2014] Misha Denil, Alban Demiraj, Nal Kalchbrenner, Phil Blunsom, and Nando de Freitas. 2014. Modelling, visualising and summarising documents with a single convolutional neural network. arXiv preprint arXiv:1406.3830. [Devlin et al.2014] Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard Schwartz, and John Makhoul. 2014. Fast and robust neural network joint models for statistical machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 1, pages 1370- | 1511.08630#28 | 1511.08630#30 | 1511.08630 | [
"1511.08630"
] |
1511.08630#30 | A C-LSTM Neural Network for Text Classification | 1380. [Hinton et al.2012] Geoffrey E Hinton, Nitish Srivas- tava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. The Computing Research Repository (CoRR). [Hochreiter and Schmidhuber1997] Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735â | 1511.08630#29 | 1511.08630#31 | 1511.08630 | [
"1511.08630"
] |
1511.08630#31 | A C-LSTM Neural Network for Text Classification | 1780. [Irsoy and Cardie2014] Ozan Irsoy and Claire Cardie. 2014. Deep recursive neural networks for composi- tionality in language. In Advances in Neural Informa- tion Processing Systems, pages 2096â 2104. [Johnson and Zhang2015] Rie Johnson and Tong Zhang. 2015. Effective use of word order for text categoriza- tion with convolutional neural networks. Human Lan- guage Technologies: The 2015 Annual Conference of the North American Chapter of the ACL, pages 103â 112. [Kalchbrenner et al.2014] Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convo- lutional neural network for modelling sentences. Association for Computational Linguistics (ACL). [Kim2014] Yoon Kim. 2014. Convolutional neural net- works for sentence classiï¬ | 1511.08630#30 | 1511.08630#32 | 1511.08630 | [
"1511.08630"
] |
1511.08630#32 | A C-LSTM Neural Network for Text Classification | cation. In Proceedings of Empirical Methods on Natural Language Processing. [Le and Mikolov2014] Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1188â 1196. [Lei et al.2015] Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2015. Molding cnns for text: non-linear, non-consecutive convolutions. In Proceedings of Em- pirical Methods on Natural Language Processing. [Li and Roth2002] Xin Li and Dan Roth. 2002. Learn- ing question classiï¬ ers. | 1511.08630#31 | 1511.08630#33 | 1511.08630 | [
"1511.08630"
] |
1511.08630#33 | A C-LSTM Neural Network for Text Classification | In Proceedings of the 19th international conference on Computational linguistics - Volume 1, pages 1-7. Association for Computational Linguistics. [Li et al.2015] Jiwei Li, Dan Jurafsky, and Eduard Hovy. 2015. When are tree structures necessary for deep learning of representations? In Proceedings of Empirical Methods on Natural Language Processing. [Mikolov et al.2013b] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111- | 1511.08630#32 | 1511.08630#34 | 1511.08630 | [
"1511.08630"
] |
1511.08630#34 | A C-LSTM Neural Network for Text Classification | 3119. [Mou et al.2015] Lili Mou, Hao Peng, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. 2015. Discriminative neural sentence modeling by tree-based convolution. Unpublished manuscript: http://arxiv. org/abs/1504. 01106v5. Version, 5. [Nair and Hinton2010] Vinod Nair and Geoffrey E Hin- ton. 2010. Rectiï¬ | 1511.08630#33 | 1511.08630#35 | 1511.08630 | [
"1511.08630"
] |
1511.08630#35 | A C-LSTM Neural Network for Text Classification | ed linear units improve restricted boltzmann machines. In Proceedings of the 27th In- ternational Conference on Machine Learning (ICML- 10), pages 807â 814. [Pascanu et al.2014] Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2014. How to construct deep recurrent neural networks. In Proceed- ings of the conference on International Conference on Learning Representations (ICLR). [Sainath et al.2015] Tara N Sainath, Oriol Vinyals, An- drew Senior, and Hasim Sak. 2015. Convolutional, long short-term memory, fully connected deep neural networks. IEEE International Conference on Acous- tics, Speech and Signal Processing. [Silva et al.2011] Joao Silva, Lu´ısa Coheur, Ana Cristina Mendes, and Andreas Wichert. 2011. From symbolic to sub-symbolic information in question classiï¬ | 1511.08630#34 | 1511.08630#36 | 1511.08630 | [
"1511.08630"
] |
1511.08630#36 | A C-LSTM Neural Network for Text Classification | cation. Artificial Intelligence Review, 35(2):137-154. [Socher et al.2012] Richard Socher, Brody Huval, Christopher D Manning, and Andrew Y Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of Empirical Methods on Natural Language Processing, pages 1201-1211. [Socher et al.2013a] Richard Socher, John Bauer, Christopher D Manning, and Andrew Y Ng. 2013a. Parsing with compositional vector grammars. In Proceedings of the ACL conference. Citeseer. [Socher et al.2013b] Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013b. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of Empirical Methods on Natural Language Processing, volume 1631, page 1642. Citeseer. [Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104- | 1511.08630#35 | 1511.08630#37 | 1511.08630 | [
"1511.08630"
] |
1511.08630#37 | A C-LSTM Neural Network for Text Classification | 3112. [Tai et al.2015] Kai Sheng Tai, Richard Socher, and Improved semantic Christopher D Manning. 2015. representations from tree-structured long short-term memory networks. Association for Computational Linguistics (ACL). [Tang et al.2015] Duyu Tang, Bing Qin, and Ting Liu. 2015. Document modeling with gated recurrent neural network for sentiment classiï¬ cation. In Proceedings of Empirical Methods on Natural Language Process- ing. [Tieleman and Hinton2012] T. Tieleman and G Hinton. 2012. | 1511.08630#36 | 1511.08630#38 | 1511.08630 | [
"1511.08630"
] |
1511.08630#38 | A C-LSTM Neural Network for Text Classification | Lecture 6.5 - rmsprop, coursera: Neural net- works for machine learning. [Xu et al.2015] Kelvin Xu, Jimmy Ba, Ryan Kiros, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Pro- ceedings of 2015th International Conference on Ma- chine Learning. [Zhao et al.2015] Han Zhao, Zhengdong Lu, and Pascal Poupart. 2015. Self-adaptive hierarchical sentence model. | 1511.08630#37 | 1511.08630#39 | 1511.08630 | [
"1511.08630"
] |
1511.08630#39 | A C-LSTM Neural Network for Text Classification | In Proceedings of International Joint Confer- ences on Artiï¬ cial Intelligence. | 1511.08630#38 | 1511.08630 | [
"1511.08630"
] |
|
1511.06939#0 | Session-based Recommendations with Recurrent Neural Networks | arXiv:1511.06939v4 [cs.LG] 29 Mar 2016 Published as a conference paper at ICLR 2016 # SESSION-BASED RECOMMENDATIONS WITH RECURRENT NEURAL NETWORKS Balázs Hidasi* Gravity R&D Inc. Budapest, Hungary [email protected] # Alexandros Karatzoglou Telefonica Research Barcelona, Spain [email protected] Linas Baltrunas† Netflix Los Gatos, CA, USA [email protected] Domonkos Tikk Gravity R&D Inc. Budapest, Hungary [email protected] # ABSTRACT | 1511.06939#1 | 1511.06939 | [
"1502.04390"
] |
|
1511.06939#1 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender systems. Real-life recommender systems often face the problem of having to base recommendations only on short session-based data (e.g. a small sportsware website) instead of long user histories (as in the case of Netflix). In this situation the frequently praised matrix factorization approaches are not accurate. This problem is usually overcome in practice by resorting to item-to-item recommendations, i.e. recommending similar items. We argue that by modeling the whole session, more accurate recommendations can be provided. We therefore propose an RNN-based approach for session-based recommendations. Our approach also considers practical aspects of the task and introduces several modifications to classic RNNs such as a ranking loss function that make it more viable for this specific problem. Experimental results on two data-sets show marked improvements over widely used approaches. # INTRODUCTION Session-based recommendation is a relatively unappreciated problem in the machine learning and recommender systems community. Many e-commerce recommender systems (particularly those of small retailers) and most news and media sites do not typically track the user IDs of the users that visit their sites over a long period of time. While cookies and browser fingerprinting can provide some level of user recognizability, those technologies are often not reliable enough and moreover raise privacy concerns. Even if tracking is possible, lots of users have only one or two sessions on a smaller e-commerce site, and in certain domains (e.g. classified sites) the behavior of users often shows session-based traits. Thus subsequent sessions of the same user should be handled independently. Consequently, most session-based recommendation systems deployed for e-commerce are based on relatively simple methods that do not make use of a user profile, e.g. item-to-item similarity, co-occurrence, or transition probabilities. While effective, those methods often take only the last click or selection of the user into account, ignoring the information of past clicks. The most common methods used in recommender systems are factor models (Koren et al., 2009; Weimer et al., 2007; Hidasi & Tikk, 2012) and neighborhood methods (Sarwar et al., 2001; Koren, 2008). | 1511.06939#0 | 1511.06939#2 | 1511.06939 | [
"1502.04390"
] |
1511.06939#2 | Session-based Recommendations with Recurrent Neural Networks | Factor models work by decomposing the sparse user-item interactions matrix to a set of d dimensional vectors, one for each item and user in the dataset. The recommendation problem is then treated as a matrix completion/reconstruction problem whereby the latent factor vectors are then used to fill the missing entries by e.g. taking the dot product of the corresponding user-item latent factors. Factor models are hard to apply in session-based recommendation due to the absence *The author spent 3 months at Telefonica Research during the research of this topic. †This work was done while the author was a member of the Telefonica Research group in Barcelona, Spain | 1511.06939#1 | 1511.06939#3 | 1511.06939 | [
"1502.04390"
] |
1511.06939#3 | Session-based Recommendations with Recurrent Neural Networks | of a user profile. On the other hand, neighborhood methods, which rely on computing similarities between items (or users), are based on co-occurrences of items in sessions (or user profiles). Neighborhood methods have been used extensively in session-based recommendations. The past few years have seen the tremendous success of deep neural networks in a number of tasks such as image and speech recognition (Russakovsky et al., 2014; Hinton et al., 2012) where unstructured data is processed through several convolutional and standard layers of (usually rectifi | 1511.06939#2 | 1511.06939#4 | 1511.06939 | [
"1502.04390"
] |
1511.06939#4 | Session-based Recommendations with Recurrent Neural Networks | ed linear) units. Sequential data modeling has recently also attracted a lot of attention with various flavors of RNNs being the model of choice for this type of data. Applications of sequence modeling range from text translation to conversation modeling to image captioning. While RNNs have been applied to the aforementioned domains with remarkable success, little attention has been paid to the area of recommender systems. In this work we argue that RNNs can be applied to session-based recommendation with remarkable results; we deal with the issues that arise when modeling such sparse sequential data and also adapt the RNN models to the recommender setting by introducing a new ranking loss function suited to the task of training these models. The session-based recommendation problem shares some similarities with some NLP-related problems in terms of modeling, as both deal with sequences. In session-based recommendation we can consider the first item a user clicks when entering a web-site as the initial input of the RNN; we then would like to query the model based on this initial input for a recommendation. Each consecutive click of the user will then produce an output (a recommendation) that depends on all the previous clicks. Typically the item set to choose from in recommender systems can be in the tens of thousands or even hundreds of thousands. Apart from the large size of the item set, another challenge is that click-stream datasets are typically quite large, thus training time and scalability are really important. As in most information retrieval and recommendation settings, we are interested in focusing the modeling power on the top items that the user might be interested in; to this end we use a ranking loss function to train the RNNs. 2 RELATED WORK 2.1 SESSION-BASED RECOMMENDATION Much of the work in the area of recommender systems has focused on models that work when a user identifier is available and a clear user profile can be built. In this setting, matrix factorization methods and neighborhood models have dominated the literature and are also employed on-line. One of the main approaches that is employed in session-based recommendation, and a natural solution to the problem of a missing user profile, is the item-to-item recommendation approach (Sarwar et al., 2001; Linden et al., 2003): in this setting an item-to-item similarity matrix is precomputed from the available session data, that is, items that are often clicked together in sessions are deemed to be similar. This similarity matrix is then simply used during the session to recommend the most similar items to the one the user has currently clicked. While simple, this method has been proven to be effective and is widely employed. While effective, these methods are only taking into account the last click of the user, in effect ignoring the information of the past clicks. | 1511.06939#3 | 1511.06939#5 | 1511.06939 | [
"1502.04390"
] |
1511.06939#5 | Session-based Recommendations with Recurrent Neural Networks | A somewhat different approach to session-based recommendation are Markov Decision Processes (MDPs) (Shani et al., 2002). MDPs are models of sequential stochastic decision problems. An MDP is defined as a four-tuple (S, A, Rwd, tr) where S is the set of states, A is a set of actions, Rwd is a reward function and tr is the state-transition function. In recommender systems actions can be equated with recommendations and the simplest MDPs are essentially first order Markov chains where the next recommendation can be simply computed on the basis of the transition probability between items. The main issue with applying Markov chains in session-based recommendation is that the state space quickly becomes unmanageable when trying to include all possible sequences of user selections. The extended version of the General Factorization Framework (GFF) (Hidasi & Tikk, 2015) is capable of using session data for recommendations. It models a session by the sum of its events. It uses two kinds of latent representations for items: one represents the item itself, the other is for representing the item as part of a session. The session is then represented as the average of the feature vectors of the part-of-a-session item representation. However, this approach does not consider any ordering within the session. | 1511.06939#4 | 1511.06939#6 | 1511.06939 | [
"1502.04390"
] |
1511.06939#6 | Session-based Recommendations with Recurrent Neural Networks | 2.2 DEEP LEARNING IN RECOMMENDERS One of the first related methods in the neural networks literature was the use of Restricted Boltzmann Machines (RBM) for Collaborative Filtering (Salakhutdinov et al., 2007). In this work an RBM is used to model user-item interaction and perform recommendations. This model has been shown to be one of the best performing Collaborative Filtering models. Deep models have been used to extract features from unstructured content such as music or images that are then used together with more conventional collaborative filtering models. In Van den Oord et al. (2013) a convolutional deep network is used to extract features from music files that are then used in a factor model. More recently Wang et al. (2015) introduced a more generic approach whereby a deep network is used to extract generic content features from any type of item; these features are then incorporated in a standard collaborative filtering model to enhance the recommendation performance. This approach seems to be particularly useful in settings where there is not sufficient user-item interaction information. # 3 RECOMMENDATIONS WITH RNNS Recurrent Neural Networks have been devised to model variable-length sequence data. The main difference between RNNs and conventional feedforward deep models is the existence of an internal hidden state in the units that compose the network. Standard RNNs update their hidden state h using the following update function: | 1511.06939#5 | 1511.06939#7 | 1511.06939 | [
"1502.04390"
] |
1511.06939#7 | Session-based Recommendations with Recurrent Neural Networks | $h_t = g(W x_t + U h_{t-1})$ (1) where g is a smooth and bounded function such as the logistic sigmoid function and $x_t$ is the input of the unit at time t. An RNN outputs a probability distribution over the next element of the sequence, given its current state $h_t$. A Gated Recurrent Unit (GRU) (Cho et al., 2014) is a more elaborate model of an RNN unit that aims at dealing with the vanishing gradient problem. GRU gates essentially learn when and by how much to update the hidden state of the unit. The activation of the GRU is a linear interpolation between the previous activation and the candidate activation $\hat{h}_t$: | 1511.06939#6 | 1511.06939#8 | 1511.06939 | [
"1502.04390"
] |
1511.06939#8 | Session-based Recommendations with Recurrent Neural Networks | $h_t = (1 - z_t) h_{t-1} + z_t \hat{h}_t$ (2) where the update gate is given by: $z_t = \sigma(W_z x_t + U_z h_{t-1})$ (3) while the candidate activation function $\hat{h}_t$ is computed in a similar manner: $\hat{h}_t = \tanh(W x_t + U(r_t \odot h_{t-1}))$ (4) and finally the reset gate $r_t$ is given by: $r_t = \sigma(W_r x_t + U_r h_{t-1})$ (5) 3.1 CUSTOMIZING THE GRU MODEL We used the GRU-based RNN in our models for session-based recommendations. The input of the network is the actual state of the session while the output is the item of the next event in the session. The state of the session can either be the item of the actual event or the events in the session so far. In the former case 1-of-N encoding is used, i.e. the input vector's length equals the number of items and only the coordinate corresponding to the active item is one, the others are zeros. The latter setting uses a weighted sum of these representations, in which events are discounted if they have occurred earlier. For the sake of stability, the input vector is then normalized. We expect this to help because it reinforces the memory effect: the reinforcement of very local ordering constraints which are not well captured by the longer memory of the RNN. We also experimented with adding an additional embedding layer, but the 1-of-N encoding always performed better. The core of the network is the GRU layer(s) and additional feedforward layers can be added between the last layer and the output. The output is the predicted preference of the items, i.e. the likelihood of being the next in the session for each item. When multiple GRU layers are used, the hidden state of the previous layer is the input of the next one. The input can also be optionally connected | 1511.06939#7 | 1511.06939#9 | 1511.06939 | [
"1502.04390"
] |
1511.06939#9 | Session-based Recommendations with Recurrent Neural Networks | [Figure 1 diagram: input (actual item, 1-of-N coding), optional embedding layer, stacked GRU layers, optional feedforward layers, output (scores on items)] Figure 1: General architecture of the network. Processing of one event of the event stream at once. to GRU layers deeper in the network, as we found that this improves performance. See the whole architecture on Figure 1, which depicts the representation of a single event within a time series of events. Since recommender systems are not the primary application area of recurrent neural networks, we modified the base network to better suit the task. We also considered practical points so that our solution could be possibly applied in a live environment. 3.1.1 SESSION-PARALLEL MINI-BATCHES RNNs for natural language processing tasks usually use in-sequence mini-batches. For example it is common to use a sliding window over the words of sentences and put these windowed fragments next to each other to form mini-batches. | 1511.06939#8 | 1511.06939#10 | 1511.06939 | [
"1502.04390"
] |
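As a concrete reading of equations (1)-(5) and of the 1-of-N input described above, the following NumPy sketch implements one GRU step and scores all items from the resulting hidden state. With a one-hot input, W x_t simply selects a column of W, which is why no separate embedding layer is required. The matrix shapes, the random initialization and the variable names are assumptions made for illustration; this is not the paper's Theano code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, p):
    """One GRU step over equations (2)-(5); p is a dict of weight matrices."""
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h_prev)            # update gate, eq. (3)
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h_prev)            # reset gate,  eq. (5)
    h_cand = np.tanh(p["W"] @ x + p["U"] @ (r * h_prev))   # candidate,   eq. (4)
    return (1.0 - z) * h_prev + z * h_cand                 # interpolation, eq. (2)

n_items, hidden = 1000, 100
rng = np.random.default_rng(0)
p = {}
for name in ("W", "Wz", "Wr"):
    p[name] = rng.normal(scale=0.1, size=(hidden, n_items))   # input weights
for name in ("U", "Uz", "Ur"):
    p[name] = rng.normal(scale=0.1, size=(hidden, hidden))    # recurrent weights
Wy = rng.normal(scale=0.1, size=(n_items, hidden))            # output layer

h = np.zeros(hidden)                 # hidden state, reset at the start of a session
x = np.zeros(n_items)
x[42] = 1.0                          # 1-of-N encoding of the clicked item
h = gru_step(x, h, p)
scores = Wy @ h                      # predicted preference for every item
```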
1511.06939#10 | Session-based Recommendations with Recurrent Neural Networks | This does not fit our task, because (1) the length of sessions can be very different, even more so than that of sentences: some sessions consist of only 2 events, while others may range over a few hundreds; (2) our goal is to capture how a session evolves over time, so breaking it down into fragments would make no sense. Therefore we use session-parallel mini-batches. First, we create an order for the sessions. Then, we use the first event of the first X sessions to form the input of the first mini-batch (the desired output is the second events of our active sessions). The second mini-batch is formed from the second events and so on. If any of the sessions end, the next available session is put in its place. Sessions are assumed to be independent, thus we reset the appropriate hidden state when this switch occurs. See Figure 2 for more details; a short code sketch of this batching scheme is also given after this row. [Figure 2 diagram: sessions laid out in parallel lanes, with consecutive events forming the inputs and outputs of mini-batch 1, 2, 3] Figure 2: Session-parallel mini-batch creation 3.1.2 SAMPLING ON THE OUTPUT | 1511.06939#9 | 1511.06939#11 | 1511.06939 | [
"1502.04390"
] |
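The sketch referenced above illustrates the session-parallel mini-batching of Section 3.1.1: a fixed number of parallel lanes each walk through one session, and whenever a session runs out, the next available session takes over that lane and the lane's hidden state is flagged for reset. The data layout, function name and early termination are assumptions made for a compact illustration.

```python
import numpy as np

def session_parallel_batches(sessions, batch_size):
    """sessions: list of item-id lists. Yields (input_items, target_items, reset_mask)."""
    lanes = [list(s) for s in sessions[:batch_size]]
    pos = [0] * batch_size
    next_session = batch_size
    reset = np.zeros(batch_size, dtype=bool)
    while True:
        inp = np.array([lanes[i][pos[i]] for i in range(batch_size)])
        tgt = np.array([lanes[i][pos[i] + 1] for i in range(batch_size)])
        yield inp, tgt, reset.copy()
        reset[:] = False
        for i in range(batch_size):
            pos[i] += 1
            if pos[i] + 1 >= len(lanes[i]):          # session in lane i is exhausted
                if next_session >= len(sessions):    # no more sessions: stop (sketch only)
                    return
                lanes[i] = list(sessions[next_session])
                next_session += 1
                pos[i] = 0
                reset[i] = True                      # hidden state of lane i must be reset

batches = session_parallel_batches([[1, 2, 3], [4, 5], [6, 7, 8, 9], [10, 11]], batch_size=2)
for inp, tgt, reset in batches:
    pass  # feed inp to the GRU, score against tgt, zero the hidden rows where reset is True
```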
1511.06939#11 | Session-based Recommendations with Recurrent Neural Networks | Recommender systems are especially useful when the number of items is large. Even for a medium-sized webshop this is in the range of tens of thousands, but on larger sites it is not rare to have | 1511.06939#10 | 1511.06939#12 | 1511.06939 | [
"1502.04390"
] |
1511.06939#12 | Session-based Recommendations with Recurrent Neural Networks | The natural interpretation of an arbitrary missing event is that the user did not know about the existence of the item and thus there was no interaction. However there is a low probability that the user did know about the item and chose not to interact, because she disliked the item. The more popular the item, the more probable it is that the user knows about it, thus it is more likely that a missing event expresses dislike. Therefore we should sample items in proportion of their popularity. Instead of generating separate samples for each training example, we use the items from the other training examples of the mini-batch as negative examples. | 1511.06939#11 | 1511.06939#13 | 1511.06939 | [
"1502.04390"
] |
1511.06939#13 | Session-based Recommendations with Recurrent Neural Networks | The natural interpretation of an arbitrary missing event is that the user did not know about the existence of the item and thus there was no interaction. However there is a low probability that the user did know about the item and chose not to interact, because she disliked the item. The more popular the item, the more probable it is that the user knows about it, thus it is more likely that a missing event expresses dislike. Therefore we should sample items in proportion to their popularity. Instead of generating separate samples for each training example, we use the items from the other training examples of the mini-batch as negative examples. | 1511.06939#12 | 1511.06939#14 | 1511.06939 | [
"1502.04390"
] |
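A small sketch of the mini-batch based output sampling described above: scores are computed only for the target items of the current mini-batch, so the diagonal of the resulting batch-by-batch score matrix holds the positive items and the off-diagonal entries act as (implicitly popularity-weighted) negative samples. Shapes and names are illustrative assumptions.

```python
import numpy as np

batch, hidden_dim, n_items = 4, 8, 50
rng = np.random.default_rng(1)

H = rng.normal(size=(batch, hidden_dim))      # GRU hidden states for the mini-batch
Wy = rng.normal(size=(n_items, hidden_dim))   # output weights, one row per item
targets = np.array([3, 17, 17, 42])           # desired next item of each example

# Column j of `scores` holds every example's score for the target of example j,
# so only len(targets) rows of Wy are touched instead of all n_items rows.
scores = H @ Wy[targets].T                    # shape (batch, batch)
positive = np.diag(scores)                    # scores of the desired items
# the remaining entries of each row serve as the sampled negative examples
```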
1511.06939#14 | Session-based Recommendations with Recurrent Neural Networks | The benefit of this approach is that we can further reduce computational times by skipping the sampling. Additionally, there are also benefits on the implementation side, from making the code less complex to faster matrix operations. Meanwhile, this approach is also a popularity-based sampling, because the likelihood of an item being in the other training examples of the mini-batch is proportional to its popularity. # 3.1.3 RANKING LOSS The core of recommender systems is the relevance-based ranking of items. | 1511.06939#13 | 1511.06939#15 | 1511.06939 | [
"1502.04390"
] |
1511.06939#15 | Session-based Recommendations with Recurrent Neural Networks | Although the task can also be interpreted as a classification task, learning-to-rank approaches (Rendle et al., 2009; Shi et al., 2012; Steck, 2015) generally outperform other approaches. Ranking can be pointwise, pairwise or listwise. Pointwise ranking estimates the score or the rank of items independently of each other and the loss is defined in a way so that the rank of relevant items should be low. Pairwise ranking compares the score or the rank of pairs of a positive and a negative item and the loss enforces that the rank of the positive item should be lower than that of the negative one. Listwise ranking uses the scores and ranks of all items and compares them to the perfect ordering. As it includes sorting, it is usually computationally more expensive and thus not used often. Also, if there is only one relevant item, as in our case, listwise ranking can be solved via pairwise ranking. We included several pointwise and pairwise ranking losses into our solution. We found that pointwise ranking was unstable with this network (see Section 4 for more comments). Pairwise ranking losses on the other hand performed well. We use the following two. | 1511.06939#14 | 1511.06939#16 | 1511.06939 | [
"1502.04390"
] |
1511.06939#16 | Session-based Recommendations with Recurrent Neural Networks | • BPR: Bayesian Personalized Ranking (Rendle et al., 2009) is a matrix factorization method that uses a pairwise ranking loss. It compares the score of a positive and a sampled negative item. Here we compare the score of the positive item with several sampled items and use their average as the loss. The loss at a given point in one session is defined as: $L_s = -\frac{1}{N_S} \sum_{j=1}^{N_S} \log(\sigma(\hat{r}_{s,i} - \hat{r}_{s,j}))$, where $N_S$ is the sample size, $\hat{r}_{s,k}$ is the score on item k at the given point of the session, i is the desired item (next item in the session) and j are the negative samples. • TOP1: | 1511.06939#15 | 1511.06939#17 | 1511.06939 | [
"1502.04390"
] |
1511.06939#17 | Session-based Recommendations with Recurrent Neural Networks | This ranking loss was devised by us for this task. It is the regularized approximation of the relative rank of the relevant item. The relative rank of the relevant item is given by $\frac{1}{N_S} \sum_{j=1}^{N_S} I\{\hat{r}_{s,j} > \hat{r}_{s,i}\}$. We approximate $I\{\cdot\}$ with a sigmoid. Optimizing for this would modify parameters so that the score for i would be high. However this is unstable as certain positive items also act as negative examples and thus scores tend to become increasingly higher. To avoid this, we want to force the scores of the negative examples to be around zero. This is a natural expectation towards the scores of negative items. Thus we added a regularization term to the loss. It is important that this term is in the same range as the relative rank and acts similarly to it. The final loss function is as follows: $L_s = \frac{1}{N_S} \sum_{j=1}^{N_S} \sigma(\hat{r}_{s,j} - \hat{r}_{s,i}) + \sigma(\hat{r}_{s,j}^2)$ # 4 EXPERIMENTS We evaluate the proposed recurrent neural network against popular baselines on two datasets. | 1511.06939#16 | 1511.06939#18 | 1511.06939 | [
"1502.04390"
] |
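Before the experiments, here is a NumPy sketch of the two pairwise losses of Section 3.1.3, written over the batch-by-batch score matrix from the sampling sketch above (diagonal = positive item of each example, rest of the row = negative samples). For simplicity the j = i term is not excluded from the sums; this is an illustrative reading of the formulas, not the authors' code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpr_loss(scores):
    """BPR: -(1/N_S) * sum_j log(sigmoid(r_i - r_j)), averaged over the batch."""
    pos = np.diag(scores)[:, None]                      # r_hat_{s,i} per example
    return np.mean(-np.log(sigmoid(pos - scores) + 1e-24))

def top1_loss(scores):
    """TOP1: (1/N_S) * sum_j [sigmoid(r_j - r_i) + sigmoid(r_j^2)], averaged over the batch."""
    pos = np.diag(scores)[:, None]
    return np.mean(sigmoid(scores - pos) + sigmoid(scores ** 2))

rng = np.random.default_rng(2)
scores = rng.normal(size=(4, 4))                        # toy (batch x batch) score matrix
print(bpr_loss(scores), top1_loss(scores))
```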
1511.06939#18 | Session-based Recommendations with Recurrent Neural Networks | The first dataset is that of RecSys Challenge 2015¹. This dataset contains click-streams of an e-commerce site that sometimes end in purchase events. We work with the training set of the challenge and keep only the click events. We filter out sessions of length 1. The network is trained on ~6 months of data, containing 7,966,257 sessions of 31,637,239 clicks on 37,483 items. We use the sessions of the subsequent day for testing. Each session is assigned to either the training or the test set; we do not split the data mid-session. Because of the nature of collaborative filtering methods, we filter out clicks from the test set where the item clicked is not in the train set. Sessions of length one are also removed from the test set. After the preprocessing we are left with 15,324 sessions of 71,222 events for the test set. This dataset will be referred to as RSC15. The second dataset is collected from a Youtube-like OTT video service platform. Events of watching a video for at least a certain amount of time were collected. Only certain regions were subject to this collection that lasted for somewhat shorter than 2 months. During this time item-to-item recommendations were provided after each video at the left side of the screen. These were provided by a selection of different algorithms and influenced the behavior of the users. Preprocessing steps are similar to that of the other dataset with the addition of filtering out very long sessions as they were probably generated by bots. The training data consists of all but the last day of the aforementioned period and has ~3 million sessions of ~13 million watch events on 330 thousand videos. The test set contains the sessions of the last day of the collection period and has ~37 thousand sessions with ~180 thousand watch events. This dataset will be referred to as VIDEO. The evaluation is done by providing the events of a session one-by-one and checking the rank of the item of the next event. The hidden state of the GRU is reset to zero after a session fi | 1511.06939#17 | 1511.06939#19 | 1511.06939 | [
"1502.04390"
] |
1511.06939#19 | Session-based Recommendations with Recurrent Neural Networks | nishes. Items are ordered in descending order by their score and their position in this list is their rank. With RSC15, all of the 37,483 items of the train set were ranked. However, this would have been impractical with VIDEO, due to the large number of items. There we ranked the desired item against the most popular 30,000 items. This has negligible effect on the evaluations as rarely visited items often get low scores. Also, popularity-based pre-filtering is common in practical recommender systems. As recommender systems can only recommend a few items at once, the actual item a user might pick should be amongst the first few items of the list. Therefore, our primary evaluation metric is recall@20, that is, the proportion of cases having the desired item amongst the top-20 items in all test cases. Recall does not consider the actual rank of the item as long as it is amongst the top-N. This models certain practical scenarios well where there is no highlighting of recommendations and the absolute order does not matter. Recall also usually correlates well with important online KPIs, such as click-through rate (CTR) (Liu et al., 2012; Hidasi & Tikk, 2012). The second metric used in the experiments is MRR@20 (Mean Reciprocal Rank). That is the average of reciprocal ranks of the desired items. The reciprocal rank is set to zero if the rank is above 20. MRR takes into account the rank of the item, which is important in cases where the order of recommendations matters (e.g. the lower ranked items are only visible after scrolling). 4.1 BASELINES We compare the proposed network to a set of commonly used baselines. | 1511.06939#18 | 1511.06939#20 | 1511.06939 | [
"1502.04390"
] |
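The evaluation protocol described above can be sketched as follows: for every test event, the desired next item is ranked by score against the candidate items, and recall@20 and MRR@20 are accumulated. The variable names and the toy data are assumptions for illustration.

```python
import numpy as np

def recall_and_mrr_at_k(score_rows, target_items, k=20):
    """score_rows: (n_events, n_items) scores; target_items: desired item index per event."""
    recall_hits, mrr_sum = 0, 0.0
    for scores, target in zip(score_rows, target_items):
        # rank = 1 + number of items scored strictly higher than the target item
        rank = 1 + int(np.sum(scores > scores[target]))
        if rank <= k:
            recall_hits += 1
            mrr_sum += 1.0 / rank        # reciprocal rank counts as 0 if rank > k
    n = len(target_items)
    return recall_hits / n, mrr_sum / n

rng = np.random.default_rng(3)
scores = rng.normal(size=(100, 500))     # one score row per test event
targets = rng.integers(0, 500, size=100)
print(recall_and_mrr_at_k(scores, targets, k=20))
```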
1511.06939#20 | Session-based Recommendations with Recurrent Neural Networks | • POP: Popularity predictor that always recommends the most popular items of the training set. Despite its simplicity it is often a strong baseline in certain domains. • S-POP: This baseline recommends the most popular items of the current session. The recommendation list changes during the session as items gain more events. Ties are broken up using global popularity values. This baseline is strong in domains with high repetitiveness. • Item-KNN: Items similar to the actual item are recommended by this baseline and similarity is defined as the cosine similarity between the vector of their sessions, i.e. it is the number of co-occurrences of two items in sessions divided by the square root of the product of the numbers of sessions in which the individual items occurred. Regularization is also included to avoid coincidental high similarities of rarely visited items. This baseline is one of the most common item-to-item solutions in practical systems, that provides recommendations in the "others who viewed this item also viewed these ones" setting. Despite its simplicity it is usually a strong baseline (Linden et al., 2003; Davidson et al., 2010). # 1 http://2015.recsyschallenge.com/

Table 1: Recall@20 and MRR@20 using the baseline methods
Baseline | RSC15 Recall@20 | RSC15 MRR@20 | VIDEO Recall@20 | VIDEO MRR@20
---|---|---|---|---
POP | 0.0050 | 0.0012 | 0.0499 | 0.0117
S-POP | 0.2672 | 0.1775 | 0.1301 | 0.0863
Item-KNN | 0.5065 | 0.2048 | 0.5508 | 0.3381
BPR-MF | 0.2574 | 0.0618 | 0.0692 | 0.0374

# Table 2: Best parametrizations for datasets/loss functions
| 1511.06939#19 | 1511.06939#21 | 1511.06939 | [
"1502.04390"
] |
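As an illustration of the item-KNN baseline described above, the sketch below computes regularized cosine similarities between binary item-session co-occurrence vectors. The regularization constant and the data layout are assumptions, not values from the paper.

```python
import numpy as np

def item_knn_scores(session_item_matrix, current_item, reg=20.0):
    """session_item_matrix: (n_sessions, n_items) binary ndarray; returns similarity scores."""
    counts = session_item_matrix.sum(axis=0)                            # support of each item
    co = session_item_matrix.T @ session_item_matrix[:, current_item]   # co-occurrence counts
    # cosine similarity with a regularizing constant in the denominator
    return co / (np.sqrt(counts * counts[current_item]) + reg)

rng = np.random.default_rng(4)
S = (rng.random((1000, 50)) < 0.05).astype(float)    # toy binary session-item matrix
sims = item_knn_scores(S, current_item=7)
sims[7] = -np.inf                                    # exclude the current item itself
top20 = np.argsort(-sims)[:20]                       # recommended items
```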
1511.06939#21 | Session-based Recommendations with Recurrent Neural Networks | Dataset | Loss | Mini-batch | Dropout | Learning rate | Momentum
---|---|---|---|---|---
RSC15 | TOP1 | 50 | 0.5 | 0.01 | 0
RSC15 | BPR | 50 | 0.2 | 0.05 | 0.2
RSC15 | Cross-entropy | 500 | 0 | 0.01 | 0
VIDEO | TOP1 | 50 | 0.4 | 0.05 | 0
VIDEO | BPR | 50 | 0.3 | 0.1 | 0
VIDEO | Cross-entropy | 200 | 0.1 | 0.05 | 0.3

• BPR-MF: BPR-MF (Rendle et al., 2009) is one of the commonly used matrix factorization methods. It optimizes for a pairwise ranking objective function (see Section 3) via SGD. Matrix factorization cannot be applied directly to session-based recommendations, because the new sessions do not have feature vectors precomputed. However we can overcome this by using the average of the item feature vectors of the items that had occurred in the session so far as the user feature vector. In other words we average the similarities of the feature vectors between a recommendable item and the items of the session so far. Table 1 shows the results for the baselines. The item-KNN approach clearly dominates the other methods. 4.2 PARAMETER & STRUCTURE OPTIMIZATION We optimized the hyperparameters by running 100 experiments at randomly selected points of the parameter space for each dataset and loss function. The best parametrization was further tuned by individually optimizing each parameter. The number of hidden units was set to 100 in all cases. The best performing parameters were then used with hidden layers of different sizes. The optimization was done on a separate validation set. Then the networks were retrained on the training plus the validation set and evaluated on the final test set. The best performing parametrizations are summarized in Table 2. Weight matrices were initialized by random numbers drawn uniformly from [-x, x] where x depends on the number of rows and columns of the matrix. We experimented with both rmsprop (Dauphin et al., 2015) and adagrad (Duchi et al., 2011). We found adagrad to give better results. | 1511.06939#20 | 1511.06939#22 | 1511.06939 | [
"1502.04390"
] |
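The adaptation of BPR-MF to sessions described in the bullet above can be sketched as follows: the missing user factor is replaced by the average of the latent factors of the items seen so far in the session. The factor values here are random placeholders standing in for vectors that BPR-MF would learn.

```python
import numpy as np

rng = np.random.default_rng(5)
n_items, d = 50, 16
item_factors = rng.normal(size=(n_items, d))       # in reality learned by BPR-MF

session_so_far = [3, 9, 27]                         # items clicked in the current session
pseudo_user = item_factors[session_so_far].mean(axis=0)   # average item factor as user factor
scores = item_factors @ pseudo_user                 # score every recommendable item
recommendations = np.argsort(-scores)[:20]
```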
1511.06939#22 | Session-based Recommendations with Recurrent Neural Networks | We briefly experimented with other units than GRU. We found both the classic RNN unit and LSTM to perform worse. We tried out several loss functions. Pointwise ranking based losses, such as cross-entropy and MRR optimization (as in Steck (2015)), were usually unstable, even with regularization. For example cross-entropy yielded only 10 and 6 numerically stable networks of the 100 random runs for RSC15 and VIDEO respectively. We assume that this is due to independently trying to achieve high scores for the desired items while the negative push is small for the negative samples. On the other hand pairwise ranking-based losses performed well. We found the ones introduced in Section 3 (BPR and TOP1) to perform the best. Several architectures were examined and a single layer of GRU units was found to be the best performer. Adding additional layers always resulted in worse performance w.r.t. both training loss and recall and MRR measured on the test set. We assume that this is due to the generally short | 1511.06939#21 | 1511.06939#23 | 1511.06939 | [
"1502.04390"
] |
1511.06939#23 | Session-based Recommendations with Recurrent Neural Networks | Table 3: Recall@20 and MRR@20 for different types of a single layer of GRU, compared to the best baseline (item-KNN). Best results per dataset are highlighted.

Loss / #Units | RSC15 Recall@20 | RSC15 MRR@20 | VIDEO Recall@20 | VIDEO MRR@20
---|---|---|---|---
TOP1 100 | 0.5853 (+15.55%) | 0.2305 (+12.58%) | 0.6141 (+11.50%) | 0.3511 (+3.84%)
BPR 100 | 0.6069 (+19.82%) | 0.2407 (+17.54%) | 0.5999 (+8.92%) | 0.3260 (-3.56%)
Cross-entropy 100 | 0.6074 (+19.91%) | 0.2430 (+18.65%) | 0.6372 (+15.69%) | 0.3720 (+10.04%)
TOP1 1000 | 0.6206 (+22.53%) | 0.2693 (+31.49%) | 0.6624 (+20.27%) | 0.3891 (+15.08%)
BPR 1000 | 0.6322 (+24.82%) | 0.2467 (+20.47%) | 0.6311 (+14.58%) | 0.3136 (-7.23%)
Cross-entropy 1000 | 0.5777 (+14.06%) | 0.2153 (+5.16%) | - | -
| 1511.06939#22 | 1511.06939#24 | 1511.06939 | [
"1502.04390"
] |
1511.06939#24 | Session-based Recommendations with Recurrent Neural Networks | lifespan of the sessions not requiring multiple time scales of different resolutions to be properly represented. However the exact reason for this is unknown as of yet and requires further research. Using embedding of the items gave slightly worse results, therefore we kept the 1-of-N encoding. Also, putting all previous events of the session on the input instead of the preceding one did not result in additional accuracy gain, which is not surprising as GRU, like LSTM, has both long and short term memory. Adding additional feed-forward layers after the GRU layer did not help either. However increasing the size of the GRU layer improved the performance. | 1511.06939#23 | 1511.06939#25 | 1511.06939 | [
"1502.04390"
] |
1511.06939#25 | Session-based Recommendations with Recurrent Neural Networks | We also found that it is beneficial to use tanh as the activation function of the output layer. 4.3 RESULTS Table 3 shows the results of the best performing networks. Cross-entropy for the VIDEO data with 1000 hidden units was numerically unstable and thus we present no results for that scenario. The results are compared to the best baseline (item-KNN). We show results with 100 and 1000 hidden units. The running time depends on the parameters and the dataset. Generally speaking the difference in runtime between the smaller and the larger variant is not too high on a GeForce GTX Titan X GPU and the training of the network can be done in a few hours^2. On CPU, the smaller network can be trained in a practically acceptable timeframe. Frequent retraining is often desirable for recommender systems, because new users and items are introduced frequently. The GRU-based approach has substantial gain over the item-KNN in both evaluation metrics on both datasets, even if the number of units is 100^3. Increasing the number of units further improves the results for pairwise losses, but the accuracy decreases for cross-entropy. Even though cross-entropy gives better results with 100 hidden units, the pairwise loss variants surpass these results as the number of units increases. Although increasing the number of units increases the training times, we found that it was not too expensive to move from 100 units to 1000 on GPU. Also, the cross-entropy based loss was found to be numerically unstable as the result of the network individually trying to increase the score for the target items, while the negative push is relatively small for the other items. Therefore we suggest using any of the two pairwise losses. The TOP1 loss performs slightly better on these two datasets, resulting in ~ | 1511.06939#24 | 1511.06939#26 | 1511.06939 | [
"1502.04390"
] |
1511.06939#26 | Session-based Recommendations with Recurrent Neural Networks | 20-30% accuracy gain over the best performing baseline. # 5 CONCLUSION & FUTURE WORK In this paper we applied a kind of modern recurrent neural network (GRU) to a new application domain: recommender systems. We chose the task of session-based recommendations, because it is a practically important area, but not well researched. We modified the basic GRU in order to fit the task better by introducing session-parallel mini-batches, mini-batch based output sampling and a ranking loss function. We showed that our method can significantly outperform popular baselines that are used for this task. We think that our work can be the basis of both deep learning applications in recommender systems and session-based recommendations in general. ^2 Using Theano with fixes for the subtensor operators on GPU. ^3 Except for using the BPR loss on the VIDEO data and evaluating for MRR. | 1511.06939#25 | 1511.06939#27 | 1511.06939 | [
"1502.04390"
] |
1511.06939#27 | Session-based Recommendations with Recurrent Neural Networks | Our immediate future work will focus on the more thorough examination of the proposed network. We also plan to train the network on automatically extracted item representations that are built on the content of the item itself (e.g. thumbnail, video, text) instead of the current input. # ACKNOWLEDGMENTS The work leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under CrowdRec Grant Agreement n° 610594. # REFERENCES | 1511.06939#26 | 1511.06939#28 | 1511.06939 | [
"1502.04390"
] |
1511.06939#28 | Session-based Recommendations with Recurrent Neural Networks | Cho, Kyunghyun, van Merri¨enboer, Bart, Bahdanau, Dzmitry, and Bengio, Yoshua. On the proper- ties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259, 2014. Dauphin, Yann N, de Vries, Harm, Chung, Junyoung, and Bengio, Yoshua. Rmsprop and equi- librated adaptive learning rates for non-convex optimization. arXiv preprint arXiv:1502.04390, 2015. Davidson, James, Liebald, Benjamin, Liu, Junning, et al. The YouTube video recommendation system. | 1511.06939#27 | 1511.06939#29 | 1511.06939 | [
"1502.04390"
] |
1511.06939#29 | Session-based Recommendations with Recurrent Neural Networks | In Recsysâ 10: ACM Conf. on Recommender Systems, pp. 293â 296, 2010. ISBN 978-1- 60558-906-0. Duchi, John, Hazan, Elad, and Singer, Yoram. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121â 2159, 2011. Hidasi, B. and Tikk, D. Fast ALS-based tensor factorization for context-aware recommendation from implicit feedback. In ECML-PKDDâ 12, Part II, number 7524 in LNCS, pp. 67â 82. Springer, 2012. | 1511.06939#28 | 1511.06939#30 | 1511.06939 | [
"1502.04390"
] |
1511.06939#30 | Session-based Recommendations with Recurrent Neural Networks | Hidasi, Bal´azs and Tikk, Domonkos. General factorization framework for context-aware recommen- dations. Data Mining and Knowledge Discovery, pp. 1â 30, 2015. ISSN 1384-5810. doi: 10.1007/ s10618-015-0417-y. URL http://dx.doi.org/10.1007/s10618-015-0417-y. Hinton, Geoffrey, Deng, Li, Yu, Dong, Dahl, George E, Mohamed, Abdel-rahman, Jaitly, Navdeep, Senior, Andrew, Vanhoucke, Vincent, Nguyen, Patrick, Sainath, Tara N, et al. Deep neural net- works for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6):82â 97, 2012. Koren, Y. Factorization meets the neighborhood: a multifaceted collaborative ï¬ ltering model. In SIGKDDâ 08: ACM Int. Conf. on Knowledge Discovery and Data Mining, pp. 426â 434, 2008. Koren, Yehuda, Bell, Robert, and Volinsky, Chris. | 1511.06939#29 | 1511.06939#31 | 1511.06939 | [
"1502.04390"
] |
1511.06939#31 | Session-based Recommendations with Recurrent Neural Networks | Matrix factorization techniques for recommender systems. Computer, 42(8):30–37, 2009. Linden, G., Smith, B., and York, J. Amazon.com recommendations: Item-to-item collaborative filtering. Internet Computing, IEEE, 7(1):76–80, 2003. Liu, Qiwen, Chen, Tianjian, Cai, Jing, and Yu, Dianhai. Enlister: Baidu's recommender system for the biggest Chinese Q&A website. In RecSys-12: Proc. of the 6th ACM Conf. on Recommender Systems, pp. 285–288, 2012. Rendle, S., Freudenthaler, C., Gantner, Z., and Schmidt-Thieme, L. BPR: Bayesian personalized ranking from implicit feedback. In UAI'09: 25th Conf. on Uncertainty in Artificial Intelligence, pp. 452–461, 2009. ISBN 978-0-9749039-5-8. Russakovsky, Olga, Deng, Jia, Su, Hao, Krause, Jonathan, Satheesh, Sanjeev, Ma, Sean, Huang, Zhiheng, Karpathy, Andrej, Khosla, Aditya, Bernstein, Michael S., Berg, Alexander C., and Li, Fei-Fei. Imagenet large scale visual recognition challenge. CoRR, abs/1409.0575, 2014. URL http://arxiv.org/abs/1409.0575. | 1511.06939#30 | 1511.06939#32 | 1511.06939 | [
"1502.04390"
] |
1511.06939#32 | Session-based Recommendations with Recurrent Neural Networks | Salakhutdinov, Ruslan, Mnih, Andriy, and Hinton, Geoffrey. Restricted Boltzmann machines for collaborative filtering. In Proceedings of the 24th international conference on Machine learning, pp. 791–798. ACM, 2007. Sarwar, Badrul, Karypis, George, Konstan, Joseph, and Riedl, John. Item-based collaborative filtering recommendation algorithms. In Proceedings of the 10th international conference on World Wide Web, pp. 285–295. ACM, 2001. Shani, Guy, Brafman, Ronen I, and Heckerman, David. An MDP-based recommender system. In Proceedings of the Eighteenth conference on Uncertainty in artificial intelligence, pp. 453– | 1511.06939#31 | 1511.06939#33 | 1511.06939 | [
"1502.04390"
] |
1511.06939#33 | Session-based Recommendations with Recurrent Neural Networks | 460. Morgan Kaufmann Publishers Inc., 2002. Shi, Yue, Karatzoglou, Alexandros, Baltrunas, Linas, Larson, Martha, Oliver, Nuria, and Hanjalic, Alan. CLiMF: Learning to maximize reciprocal rank with collaborative less-is-more filtering. In Proceedings of the Sixth ACM Conference on Recommender Systems, RecSys '12, pp. 139–146, New York, NY, USA, 2012. ACM. ISBN 978-1-4503-1270-7. doi: 10.1145/2365952.2365981. URL http://doi.acm.org/10.1145/2365952.2365981. | 1511.06939#32 | 1511.06939#34 | 1511.06939 | [
"1502.04390"
] |
1511.06939#34 | Session-based Recommendations with Recurrent Neural Networks | Steck, Harald. Gaussian ranking by matrix factorization. In Proceedings of the 9th ACM Conference on Recommender Systems, RecSys '15, pp. 115–122, New York, NY, USA, 2015. ACM. ISBN 978-1-4503-3692-5. doi: 10.1145/2792838.2800185. URL http://doi.acm.org/10.1145/2792838.2800185. Van den Oord, Aaron, Dieleman, Sander, and Schrauwen, Benjamin. Deep content-based music recommendation. In Advances in Neural Information Processing Systems, pp. 2643–2651, 2013. Wang, Hao, Wang, Naiyan, and Yeung, Dit-Yan. Collaborative deep learning for recommender systems. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ' | 1511.06939#33 | 1511.06939#35 | 1511.06939 | [
"1502.04390"
] |
1511.06939#35 | Session-based Recommendations with Recurrent Neural Networks | 15, pp. 1235–1244, New York, NY, USA, 2015. ACM. Weimer, Markus, Karatzoglou, Alexandros, Le, Quoc Viet, and Smola, Alex. Maximum margin matrix factorization for collaborative ranking. Advances in neural information processing systems, 2007. | 1511.06939#34 | 1511.06939 | [
"1502.04390"
] |
|
1511.06807#0 | Adding Gradient Noise Improves Learning for Very Deep Networks | arXiv:1511.06807v1 [stat.ML] 21 Nov 2015 # Under review as a conference paper at ICLR 2016 # ADDING GRADIENT NOISE IMPROVES LEARNING FOR VERY DEEP NETWORKS Arvind Neelakantan*, Luke Vilnis* College of Information and Computer Sciences University of Massachusetts Amherst {arvind,luke}@cs.umass.edu # Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach Google Brain {qvl,ilyasu,lukaszkaiser,kkurach}@google.com # James Martens University of Toronto [email protected] # ABSTRACT | 1511.06807#1 | 1511.06807 | [
"1508.05508"
] |
|
1511.06807#1 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in many perception and language processing applications. This success is partially attributed to architectural innovations such as convolutional and long short-term memory networks. The main motivation for these architectural innovations is that they capture better domain knowledge, and importantly are easier to optimize than more basic architectures. Recently, more complex architectures such as Neural Turing Machines and Memory Networks have been proposed for tasks including question answering and general computation, creating a new set of optimization challenges. In this paper, we discuss a low-overhead and easy-to-implement technique of adding gradient noise which we find to be surprisingly effective when training these very deep architectures. The technique not only helps to avoid overfitting, but also can result in lower training loss. This method alone allows a fully-connected 20-layer deep network to be trained with standard gradient descent, even starting from a poor initialization. We see consistent improvements for many complex models, including a 72% relative reduction in error rate over a carefully-tuned baseline on a challenging question-answering task, and a doubling of the number of accurate binary multiplication models learned across 7,000 random restarts. We encourage further application of this technique to additional complex modern architectures. | 1511.06807#0 | 1511.06807#2 | 1511.06807 | [
"1508.05508"
] |
1511.06807#2 | Adding Gradient Noise Improves Learning for Very Deep Networks | # 1 INTRODUCTION Deep neural networks have shown remarkable success in diverse domains including image recognition (Krizhevsky et al., 2012), speech recognition (Hinton et al., 2012) and language processing applications (Sutskever et al., 2014; Bahdanau et al., 2014). This broad success comes from a confluence of several factors. First, the creation of massive labeled datasets has allowed deep networks to demonstrate their advantages in expressiveness and scalability. The increase in computing power has also enabled training of far larger networks with more forgiving optimization dynamics (Choromanska et al., 2015). Additionally, architectures such as convolutional networks (LeCun et al., 1998) and long short-term memory networks (Hochreiter & Schmidhuber, 1997) have proven to be easier to optimize than classical feedforward and recurrent models. Finally, the success of deep networks is also a result of the development of simple and broadly applicable learning techniques such as dropout (Srivastava et al., 2014), ReLUs (Nair & Hinton, 2010), gradient clipping (Pascanu | 1511.06807#1 | 1511.06807#3 | 1511.06807 | [
"1508.05508"
] |
1511.06807#3 | Adding Gradient Noise Improves Learning for Very Deep Networks | â First two authors contributed equally. Work was done when all authors were at Google, Inc. 1 # Under review as a conference paper at ICLR 2016 et al., 2013; Graves, 2013), optimization and weight initialization strategies (Glorot & Bengio, 2010; Sutskever et al., 2013; He et al., 2015). Recent work has aimed to push neural network learning into more challenging domains, such as question answering or program induction. These more complicated problems demand more com- plicated architectures (e.g., Graves et al. (2014); Sukhbaatar et al. (2015)) thereby posing new opti- mization challenges. In order to achieve good performance, researchers have reported the necessity of additional techniques such as supervision in intermediate steps (Weston et al., 2014), warm- starts (Peng et al., 2015), random restarts, and the removal of certain activation functions in early stages of training (Sukhbaatar et al., 2015). A recurring theme in recent works is that commonly-used optimization techniques are not always sufï¬ cient to robustly optimize the models. In this work, we explore a simple technique of adding annealed Gaussian noise to the gradient, which we ï¬ nd to be surprisingly effective in training deep neural networks with stochastic gradient descent. While there is a long tradition of adding random weight noise in classical neural networks, it has been under-explored in the optimization of modern deep architectures. In contrast to theoretical and empirical results on the regularizing effects of conventional stochastic gradient descent, we ï¬ nd that in practice the added noise can actually help us achieve lower training loss by encouraging active exploration of parameter space. This exploration proves especially necessary and fruitful when optimizing neural network models containing many layers or complex latent structures. The main contribution of this work is to demonstrate the broad applicability of this simple method to the training of many complex modern neural architectures. Furthermore, to the best of our knowl- edge, our added noise schedule has not been used before in the training of deep networks. We consistently see improvement from injected gradient noise when optimizing a wide variety of mod- els, including very deep fully-connected networks, and special-purpose architectures for question answering and algorithm learning. For example, this method allows us to escape a poor initializa- tion and successfully train a 20-layer rectiï¬ | 1511.06807#2 | 1511.06807#4 | 1511.06807 | [
"1508.05508"
] |
1511.06807#4 | Adding Gradient Noise Improves Learning for Very Deep Networks | er network on MNIST with standard gradient descent. It also enables a 72% relative reduction in error in question-answering, and doubles the number of ac- curate binary multiplication models learned across 7,000 random restarts. We hope that practitioners will see similar improvements in their own research by adding this simple technique, implementable in a single line of code, to their repertoire. # 2 RELATED WORK Adding random noise to the weights, gradient, or the hidden units has been a known technique amongst neural network practitioners for many years (e.g., An (1996)). However, the use of gradient noise has been rare and its beneï¬ ts have not been fully documented with modern deep networks. Weight noise (Steijvers, 1996) and adaptive weight noise (Graves, 2011; Blundell et al., 2015), which usually maintains a Gaussian variational posterior over network weights, similarly aim to improve learning by added noise during training. They normally differ slightly from our proposed method in that the noise is not annealed and at convergence will be non-zero. Additionally, in adaptive weight noise, an extra set of parameters for the variance must be maintained. Similarly, the technique of dropout (Srivastava et al., 2014) randomly sets groups of hidden units to zero at train time to improve generalization in a manner similar to ensembling. An annealed Gaussian gradient noise schedule was used to train the highly non-convex Stochastic Neighbor Embedding model in Hinton & Roweis (2002). The gradient noise schedule that we found to be most effective is very similar to the Stochastic Gradient Langevin Dynamics algorithm of Welling & Teh (2011), who use gradients with added noise to accelerate MCMC inference for logistic regression and independent component analysis models. This use of gradient information in MCMC sampling for machine learning to allow faster exploration of state space was previously proposed by Neal (2011). Various optimization techniques have been proposed to improve the training of neural networks. Most notable is the use of Momentum (Polyak, 1964; Sutskever et al., 2013; Kingma & Ba, 2014) or adaptive learning rates (Duchi et al., 2011; Dean et al., 2012; Zeiler, 2012). These methods are normally developed to provide good convergence rates for the convex setting, and then heuristically | 1511.06807#3 | 1511.06807#5 | 1511.06807 | [
"1508.05508"
] |
1511.06807#5 | Adding Gradient Noise Improves Learning for Very Deep Networks | 2 # Under review as a conference paper at ICLR 2016 applied to nonconvex problems. On the other hand, injecting noise in the gradient is more suitable for nonconvex problems. By adding even more stochasticity, this technique gives the model more chances to escape local minima (see a similar argument in Bottou (1992)), or to traverse quickly through the â transientâ plateau phase of early learning (see a similar analysis for momentum in Sutskever et al. (2013)). This is born out empirically in our observation that adding gradient noise can actually result in lower training loss. In this sense, we suspect adding gradient noise is similar to simulated annealing (Kirkpatrick et al., 1983) which exploits random noise to explore complex optimization landscapes. | 1511.06807#4 | 1511.06807#6 | 1511.06807 | [
"1508.05508"
] |
1511.06807#6 | Adding Gradient Noise Improves Learning for Very Deep Networks | This can be contrasted with well-known benefits of stochastic gradient descent as a learning algorithm (Robbins & Monro, 1951; Bousquet & Bottou, 2008), where both theory and practice have shown that the noise induced by the stochastic process aids generalization by reducing overfitting. # 3 METHOD We consider a simple technique of adding time-dependent Gaussian noise to the gradient g at every training step t: g_t <- g_t + N(0, sigma_t^2) Our experiments indicate that adding annealed Gaussian noise by decaying the variance works better than using fixed Gaussian noise. We use a schedule inspired from Welling & Teh (2011) for most of our experiments and take: | 1511.06807#5 | 1511.06807#7 | 1511.06807 | [
"1508.05508"
] |
1511.06807#7 | Adding Gradient Noise Improves Learning for Very Deep Networks | Higher gradient noise at the beginning of training forces the gradient away from 0 in the early stages. # 4 EXPERIMENTS In the following experiments, we consider a variety of complex neural network architectures: Deep networks for MNIST digit classiï¬ cation, End-To-End Memory Networks (Sukhbaatar et al., 2015) and Neural Programmer (Neelakantan et al., 2015) for question answering, Neural Random Access Machines (Kurach et al., 2015) and Neural GPUs (Kaiser & Sutskever, 2015) for algorithm learning. The models and results are described as follows. 4.1 DEEP FULLY-CONNECTED NETWORKS For our ï¬ rst set of experiments, we examine the impact of adding gradient noise when training a very deep fully-connected network on the MNIST handwritten digit classiï¬ cation dataset (LeCun et al., 1998). Our network is deep: it has 20 hidden layers, with each layer containing 50 hidden units. We use the ReLU activation function (Nair & Hinton, 2010). In this experiment, we add gradient noise sampled from a Gaussian distribution with mean 0, and decaying variance according to the schedule in Equation (1) with η = 0.01. We train with SGD without momentum, using the ï¬ xed learning rates of 0.1 and 0.01. Unless otherwise speciï¬ ed, the weights of the network are initialized from a Gaussian with mean zero, and standard deviation of 0.1, which we call Simple Init. The results of our experiment are in Table 1. When trained from Simple Init we can see that adding noise to the gradient helps in achieving higher average and best accuracy over 20 runs using each learning rate for a total of 40 runs (Table 1, Experiment 1). We note that the average is closer to 50% because the small learning rate of 0.01 usually gives very slow convergence. We also try our approach on a more shallow network of 5 layers, but adding noise does not improve the training in that case. Next, we experiment with clipping the gradients with two threshold values: 100 and 10 (Table 1, Experiment 2, and 3). | 1511.06807#6 | 1511.06807#8 | 1511.06807 | [
"1508.05508"
] |
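For concreteness, the update and schedule described above are g_t <- g_t + N(0, sigma_t^2) with the annealed variance of Equation (1), sigma_t^2 = eta / (1 + t)^gamma, eta in {0.01, 0.3, 1.0} and gamma = 0.55. Below is a minimal NumPy sketch of one noisy step; the function names and the use of plain SGD are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def noise_std(t, eta=0.01, gamma=0.55):
    # Standard deviation from the schedule in Eq. (1): sigma_t^2 = eta / (1 + t)**gamma
    return np.sqrt(eta / (1.0 + t) ** gamma)

def noisy_sgd_step(params, grads, t, lr=0.1, eta=0.01, gamma=0.55, rng=None):
    # Add zero-mean Gaussian noise to every gradient, then take a plain SGD step.
    rng = np.random.default_rng() if rng is None else rng
    sigma = noise_std(t, eta, gamma)
    return [p - lr * (g + rng.normal(0.0, sigma, size=g.shape))
            for p, g in zip(params, grads)]
```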
1511.06807#8 | Adding Gradient Noise Improves Learning for Very Deep Networks | Here, we ï¬ nd training with gradient noise is insensitive to the gradient clipping values. By tuning the clipping threshold, it is possible to get comparable accuracy without noise for this problem. 3 # Under review as a conference paper at ICLR 2016 In our fourth and ï¬ fth experiment (Table 1, Experiment 4, and 5), we use two analytically-derived ReLU initialization techniques (which we term Good Init) recently-proposed by Sussillo & Abbott (2014) and He et al. (2015), and ï¬ nd that adding gradient noise does not help. Previous work has found that stochastic gradient descent with carefully tuned initialization, momentum, learning rate, and learning rate decay can optimize such extremely deep fully-connected ReLU networks (Srivastava et al., 2015). | 1511.06807#7 | 1511.06807#9 | 1511.06807 | [
"1508.05508"
] |
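As a point of reference for the MNIST setup just described (20 hidden ReLU layers of 50 units, "Simple Init" weights drawn from a zero-mean Gaussian with standard deviation 0.1), here is a minimal sketch under assumed input/output sizes; the forward pass is illustrative, not the authors' code.

```python
import numpy as np

def simple_init(rng, n_in=784, n_hidden=50, n_layers=20, n_out=10, std=0.1):
    # "Simple Init": every weight drawn from N(0, 0.1^2); biases start at zero.
    sizes = [n_in] + [n_hidden] * n_layers + [n_out]
    return [(rng.normal(0.0, std, size=(a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    # ReLU hidden layers followed by a linear output layer
    # (softmax / cross-entropy would be applied outside this function).
    h = x
    for W, b in params[:-1]:
        h = np.maximum(h @ W + b, 0.0)
    W, b = params[-1]
    return h @ W + b
```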
1511.06807#9 | Adding Gradient Noise Improves Learning for Very Deep Networks | It would be harder to ï¬ nd such a robust initialization technique for the more complex heterogeneous architectures considered in later sections. Accordingly, we ï¬ nd in later experiments (e.g., Section 4.3) that random restarts and the use of a momentum-based optimizer like Adam are not sufï¬ cient to achieve the best results in the absence of added gradient noise. To understand how sensitive the methods are to poor initialization, in addition to the sub-optimal Simple Init, we run an experiment where all the weights in the neural network are initialized at zero. The results (Table 1, Experiment 5) show that if we do not add noise to the gradient, the networks fail to learn. If we add some noise, the networks can learn and reach 94.5% accuracy. Experiment 1: Simple Init, No Gradient Clipping Best Test Accuracy Average Test Accuracy 89.9% 96.7% 11.3% Setting No Noise With Noise No Noise + Dropout 43.1% 52.7% 10.8% No Noise With Noise 90.0% 96.7% 46.3% 52.3% No Noise With Noise 95.7% 97.0% 51.6% 53.6% Experiment 4: Good Init (Sussillo & Abbott, 2014) + Gradient Clipping Threshold = 10 No Noise With Noise Experiment 5: Good Init (He et al., 2015) + Gradient Clipping Threshold = 10 No Noise With Noise 97.4% 97.2% 91.7% 91.7% No Noise With Noise 11.4% 94.5% 10.1% 49.7% Table 1: Average and best test accuracy percentages on MNIST over 40 runs. Higher values are better. In summary, these experiments show that if we are careful with initialization and gradient clipping values, it is possible to train a very deep fully-connected network without adding gradient noise. However, if the initialization is poor, optimization can be difï¬ cult, and adding noise to the gradient is a good mechanism to overcome the optimization difï¬ culty. The implication of this set of results is that added gradient noise can be an effective mechanism for training very complex networks. This is because it is more difï¬ | 1511.06807#8 | 1511.06807#10 | 1511.06807 | [
"1508.05508"
] |
1511.06807#10 | Adding Gradient Noise Improves Learning for Very Deep Networks | cult to initialize the weights properly for complex networks. In the following, we explore the training of more complex networks such as End-To-End Memory Networks and Neural Programmer, whose initialization is less well studied. 4 # Under review as a conference paper at ICLR 2016 4.2 END-TO-END MEMORY NETWORKS We test added gradient noise for training End-To-End Memory Networks (Sukhbaatar et al., 2015), a new approach for Q&A using deep networks.1 Memory Networks have been demonstrated to perform well on a relatively challenging toy Q&A problem (Weston et al., 2015). In Memory Networks, the model has access to a context, a question, and is asked to predict an answer. Internally, the model has an attention mechanism which focuses on the right clue to answer the question. In the original formulation (Weston et al., 2015), Memory Networks were provided with additional supervision as to what pieces of context were necessary to answer the question. This was replaced in the End-To-End formulation by a latent attention mechanism implemented by a softmax over contexts. As this greatly complicates the learning problem, the authors implement a two-stage training procedure: First train the networks with a linear attention, then use those weights to warmstart the model with softmax attention. In our experiments with Memory Networks, we use our standard noise schedule, using noise sam- pled from a Gaussian distribution with mean 0, and decaying variance according to Equation (1) with η = 1.0. This noise is added to the gradient after clipping. | 1511.06807#9 | 1511.06807#11 | 1511.06807 | [
"1508.05508"
] |
1511.06807#11 | Adding Gradient Noise Improves Learning for Very Deep Networks | We also ï¬ nd for these experiments that a ï¬ xed standard deviation also works, but its value has to be tuned, and works best at 0.001. We set the number of training epochs to 200 because we would like to understand the behaviors of Memory Networks near convergence. The rest of the training is identical to the experimental setup proposed by the original authors. We test this approach with the published two-stage training approach, and additionally with a one-stage training approach where we train the networks with softmax attention and without warmstarting. Results are reported in Table 2. | 1511.06807#10 | 1511.06807#12 | 1511.06807 | [
"1508.05508"
] |
1511.06807#12 | Adding Gradient Noise Improves Learning for Very Deep Networks | We ï¬ nd some ï¬ uctuations during each run of the training, but the reported results reï¬ ect the typical gains obtained by adding random noise. Setting One-stage training No Noise 9.6% Training error: Validation error: 19.5% Validation error: 16.6% 5.9% Validation error: 10.9% Validation error: 10.8% With Noise 10.5% Training error: Two-stage training Training error: 6.2% Training error: Table 2: The effects of adding gradient noise to End-To-End Memory Networks. Lower values are better. | 1511.06807#11 | 1511.06807#13 | 1511.06807 | [
"1508.05508"
] |
1511.06807#13 | Adding Gradient Noise Improves Learning for Very Deep Networks | We ï¬ nd that warmstarting does indeed help the networks. In both cases, adding random noise to the gradient also helps the network both in terms of training errors and validation errors. Added noise, however, is especially helpful for the training of End-To-End Memory Networks without the warmstarting stage. 4.3 NEURAL PROGRAMMER Neural Programmer is a neural network architecture augmented with a small set of built-in arithmetic and logic operations that learns to induce latent programs. It is proposed for the task of question answering from tables (Neelakantan et al., 2015). Examples of operations on a table include the sum of a set of numbers, or the list of numbers greater than a particular value. Key to Neural Programmer is the use of â soft selectionâ to assign a probability distribution over the list of operations. This probability distribution weighs the result of each operation, and the cost function compares this weighted result to the ground truth. This soft selection, inspired by the soft attention mechanism of Bahdanau et al. (2014), allows for full differentiability of the model. Running the model for several steps of selection allows the model to induce a complex program by chaining the operations, one after the other. Figure 1 shows the architecture of Neural Programmer at a high level. In a synthetic table comprehension task, Neural Programmer takes a question and a table (or database) as input and the goal is to predict the correct answer. To solve this task, the model has to induce a program and execute it on the table. A major challenge is that the supervision signal is | 1511.06807#12 | 1511.06807#14 | 1511.06807 | [
"1508.05508"
] |
1511.06807#14 | Adding Gradient Noise Improves Learning for Very Deep Networks | # 1Code available at: https://github.com/facebook/MemNN [Figure 1 diagram: at each timestep t = 1, 2, ..., T the controller selects among the built-in arithmetic and logic operations and the data segments, applying them with access to memory, between input and output.] Figure 1: Neural Programmer, a neural network with built-in arithmetic and logic operations. At every time step, the controller selects an operation and a data segment. Figure reproduced with permission from Neelakantan et al. (2015). in the form of the correct answer and not the program itself. The model runs for a fixed number of steps, and at each step selects a data segment and an operation to apply to the selected data segment. Soft selection is performed at training time so that the model is differentiable, while at test time hard selection is employed. Table 3 shows examples of programs induced by the model. | 1511.06807#13 | 1511.06807#15 | 1511.06807 | [
"1508.05508"
] |
1511.06807#15 | Adding Gradient Noise Improves Learning for Very Deep Networks | Question: "greater 17.27 A and lesser -19.21 D count": What are the number of elements whose field in column A is greater than 17.27 and field in Column D is lesser than -19.21. Induced program: t=1: Greater (Column A); t=2: Lesser (Column D); t=3: And; t=4: Count. Table 3: Example program induced by the model using T = 4 time steps. We show the selected columns in cases in which the selected operation acts on a particular column. Similar to the above experiments with Memory Networks, in our experiments with Neural Programmer, we add noise sampled from a Gaussian distribution with mean 0, and decaying variance according to Equation (1) with η = 1.0 to the gradient after clipping. The model is optimized with Adam (Kingma & Ba, 2014), which combines momentum and adaptive learning rates. | 1511.06807#14 | 1511.06807#16 | 1511.06807 | [
"1508.05508"
] |
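To illustrate the soft-selection idea used by Neural Programmer (a differentiable, softmax-weighted mixture over the built-in operations, replaced by hard argmax selection at test time), here is a hedged NumPy sketch; the variable names and shapes are assumptions, not the released model.

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def soft_select(op_scores, op_outputs):
    # op_scores: one controller logit per built-in operation.
    # op_outputs: the result each operation would produce on the selected data.
    # Training uses this differentiable mixture; test time uses argmax instead.
    weights = softmax(np.asarray(op_scores, dtype=float))
    return sum(w * np.asarray(out, dtype=float)
               for w, out in zip(weights, op_outputs))
```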
1511.06807#16 | Adding Gradient Noise Improves Learning for Very Deep Networks | For our ï¬ rst experiment, we train Neural Programmer to answer questions involving a single column of numbers. We use 72 different hyper-parameter conï¬ gurations with and without adding annealed random noise to the gradients. We also run each of these experiments for 3 different random ini- tializations of the model parameters and we ï¬ nd that only 1/216 runs achieve 100% test accuracy without adding noise while 9/216 runs achieve 100% accuracy when random noise is added. The 9 successful runs consisted of models initialized with all the three different random seeds, demon- strating robustness to initialization. | 1511.06807#15 | 1511.06807#17 | 1511.06807 | [
"1508.05508"
] |
1511.06807#17 | Adding Gradient Noise Improves Learning for Very Deep Networks | We ï¬ nd that when using dropout (Srivastava et al., 2014) none of the 216 runs give 100% accuracy. We consider a more difï¬ cult question answering task where tables have up to ï¬ ve columns contain- ing numbers. We also experiment on a task containing one column of numbers and another column of text entries. Table 4 shows the performance of adding noise vs. no noise on Neural Programmer. Figure 2 shows an example of the effect of adding random noise to the gradients in our experiment with 5 columns. The differences between the two models are much more pronounced than in Table 4 because Table 4 shows the results after careful hyperparameter selection. In all cases, we see that added gradient noise improves performance of Neural Programmer. Its performance when combined with or used instead of dropout is mixed depending on the problem, but the positive results indicate that it is worth attempting on a case-by-case basis. | 1511.06807#16 | 1511.06807#18 | 1511.06807 | [
"1508.05508"
] |
1511.06807#18 | Adding Gradient Noise Improves Learning for Very Deep Networks | [Figure 2 plots: training loss and test accuracy versus number of epochs, noise vs. no noise.] Figure 2: Noise vs. No Noise in our experiment with tables containing 5 columns. The models trained with noise generalize almost always better. 4.4 NEURAL RANDOM ACCESS MACHINES We now conduct experiments with Neural Random-Access Machines (NRAM) (Kurach et al., 2015). NRAM is a model for algorithm learning that can store data, and explicitly manipulate and dereference pointers. NRAM consists of a neural network controller, memory, registers and a set of built-in operations. This is similar to the Neural Programmer in that it uses a controller network to compose built-in operations, but both reads and writes to an external memory. An operation can either read (a subset of) contents from the memory, write content to the memory or perform an arithmetic operation on either input registers or outputs from other operations. The controller runs for a fixed number of time steps. At every step, the model selects both the operation to be executed and its inputs. These selections are made using soft attention (Bahdanau et al., 2014) making the model end-to-end differentiable. NRAM uses an LSTM (Hochreiter & Schmidhuber, 1997) controller. Figure 3 gives an overview of the model. | 1511.06807#17 | 1511.06807#19 | 1511.06807 | [
"1508.05508"
] |
1511.06807#19 | Adding Gradient Noise Improves Learning for Very Deep Networks | For our experiment, we consider the problem of searching for the k-th element's value on a linked list. The network is given a pointer to the head of the linked list, and has to find the value of the k-th element. Note that this is highly nontrivial because pointers and their values are stored at random locations in memory, so the model must learn to traverse a complex graph for k steps. Because of this complexity, training the NRAM architecture can be unstable, especially when the number of steps and operations is large. We once again experiment with the decaying noise schedule from Equation (1), setting η = 0.3. We run a large grid search over the model hyperparameters (detailed in Kurach et al. (2015)), and use the top 3 for our experiments. For each of these 3 settings, we try 100 different random initializations and look at the percentage of runs that give 100% accuracy across each one for training both with and without noise. As in our experiments with Neural Programmer, we find that gradient clipping is crucial when training with noise. This is likely because the effect of random noise is washed away when gradients become too large. For models trained with noise we observed much better reproduce rates, which are presented in Table 5. Although it is possible to train the model to achieve 100% accuracy without | 1511.06807#18 | 1511.06807#20 | 1511.06807 | [
"1508.05508"
] |
1511.06807#20 | Adding Gradient Noise Improves Learning for Very Deep Networks | [Figure 3 diagram: an LSTM controller with a binarized finish? signal, registers r1–r4, example operations m1–m3, and a memory tape.] Figure 3: One timestep of the NRAM architecture with R = 4 registers and a memory tape. m1, m2 and m3 are example operations built-in to the model. The operations can read and write from memory. At every time step, the LSTM controller softly selects the operation and its inputs. | 1511.06807#19 | 1511.06807#21 | 1511.06807 | [
"1508.05508"
] |
1511.06807#21 | Adding Gradient Noise Improves Learning for Very Deep Networks | Figure reproduced with permission from Kurach et al. (2015). noise, it is less robust across multiple random restarts, with over 10x as many initializations leading to a correct answer when using noise. Hyperparameter-1: No Noise 1%, With Noise 5%; Hyperparameter-2: No Noise 0%, With Noise 22%; Hyperparameter-3: No Noise 3%, With Noise 7%; Average: No Noise 1.3%, With Noise 11.3%. Table 5: Percentage of successful runs on k-th element task. Higher values are better. All tests were performed with the same set of 100 random initializations (seeds). | 1511.06807#20 | 1511.06807#22 | 1511.06807 | [
"1508.05508"
] |
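The ordering emphasized above (clip the gradient first, then add the annealed noise, so that the noise is not washed out by very large gradients) can be sketched as follows; global-norm clipping and the helper name are assumptions, not the authors' exact procedure.

```python
import numpy as np

def clip_then_add_noise(grads, t, clip_norm=10.0, eta=0.3, gamma=0.55, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # Global-norm clipping (an assumed variant): rescale when the total norm
    # exceeds the threshold, so no single step can blow up.
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    scale = min(1.0, clip_norm / (total_norm + 1e-12))
    # Noise is added after clipping, so it is not drowned out by huge gradients.
    sigma = np.sqrt(eta / (1.0 + t) ** gamma)
    return [g * scale + rng.normal(0.0, sigma, size=g.shape) for g in grads]
```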
1511.06807#22 | Adding Gradient Noise Improves Learning for Very Deep Networks | Figure reproduced with permission from Kurach et al. (2015). noise, it is less robust across multiple random restarts, with over 10x as many initializations leading to a correct answer when using noise. Hyperparameter-1 Hyperparameter-2 Hyperparameter-3 Average No Noise With Noise 1% 5% 0% 22% 3% 7% 1.3% 11.3% Table 5: Percentage of successful runs on k-th element task. Higher values are better. All tests were performed with the same set of 100 random initializations (seeds). | 1511.06807#21 | 1511.06807#23 | 1511.06807 | [
"1508.05508"
] |
1511.06807#23 | Adding Gradient Noise Improves Learning for Very Deep Networks | 4.5 CONVOLUTIONAL GATED RECURRENT NETWORKS (NEURAL GPUS) Convolutional Gated Recurrent Networks (CGRN) or Neural GPUs (Kaiser & Sutskever, 2015) are a recently proposed model that is capable of learning arbitrary algorithms. CGRNs use a stack of convolution layers, unfolded with tied parameters like a recurrent network. The input data (usually a list of symbols) is ï¬ rst converted to a three dimensional tensor representation containing a sequence of embedded symbols in the ï¬ rst two dimensions, and zeros padding the next dimension. Then, multiple layers of modiï¬ ed convolution kernels are applied at each step. The modiï¬ ed kernel is a combination of convolution and Gated Recurrent Units (GRU) (Cho et al., 2014). The use of con- volution kernels allows computation to be applied in parallel across the input data, while the gating mechanism helps the gradient ï¬ | 1511.06807#22 | 1511.06807#24 | 1511.06807 | [
"1508.05508"
] |
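As a rough illustration of the convolution-plus-GRU kernel described here, the sketch below applies a gated recurrent update in parallel across a 1D state with NumPy; using a single shared 1D kernel per gate is a deliberate simplification and not the actual Neural GPU parameterization.

```python
import numpy as np

def conv1d_same(x, w):
    # 'same' 1D convolution along the sequence axis, with one scalar kernel tap
    # per position shared across feature channels (a simplification).
    pad = len(w) // 2
    xp = np.pad(x, ((pad, pad), (0, 0)), mode="constant")
    return np.stack([np.sum(xp[i:i + len(w)] * w[:, None], axis=0)
                     for i in range(x.shape[0])])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cgru_step(s, w_update, w_reset, w_cand):
    # One gated convolutional update applied to the whole state tensor in parallel:
    # update gate u, reset gate r, candidate state c, then the usual GRU mixing.
    u = sigmoid(conv1d_same(s, w_update))
    r = sigmoid(conv1d_same(s, w_reset))
    c = np.tanh(conv1d_same(r * s, w_cand))
    return u * s + (1.0 - u) * c
```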
1511.06807#24 | Adding Gradient Noise Improves Learning for Very Deep Networks | ow. The additional dimension of the tensor serves as a working memory while the repeated operations are applied at each layer. The output at the ï¬ nal layer is the predicted answer. The key difference between Neural GPUs and other architectures for algorithmic tasks (e.g., Neural Turing Machines (Graves et al., 2014)) is that instead of using sequential data access, convolution kernels are applied in parallel across the input, enabling the use of very deep and wide models. The model is referred to as Neural GPUs because the input data is accessed in parallel. Neural GPUs were shown to outperform previous sequential architectures for algorithm learning on tasks such as binary addition and multiplication, by being able to generalize from much shorter to longer data cases. In our experiments, we use Neural GPUs for the task of binary multiplication. The input consists two concatenated sequences of binary digits separated by an operator token, and the goal is to multiply | 1511.06807#23 | 1511.06807#25 | 1511.06807 | [
"1508.05508"
] |
1511.06807#25 | Adding Gradient Noise Improves Learning for Very Deep Networks | 8 # Under review as a conference paper at ICLR 2016 the given numbers. During training, the model is trained on 20-digit binary numbers while at test time, the task is to multiply 200-digit numbers. Once again, we add noise sampled from Gaussian distribution with mean 0, and decaying variance according to the schedule in Equation (1) with η = 1.0, to the gradient after clipping. The model is optimized using Adam (Kingma & Ba, 2014). Table 6 gives the results of a large-scale experiment using Neural GPUs over 7290 experimental runs. The experiment shows that models trained with added gradient noise are more robust across many random initializations and parameter settings. As you can see, adding gradient noise both allows us to achieve the best performance, with the number of models with < 1% error over twice as large as without noise. But it also helps throughout, improving the robustness of training, with more models training to lower error rates as well. This experiment shows that the simple technique of added gradient noise is effective even in regimes where we can afford a very large numbers of random restarts. Setting No Noise With Noise Error < 1% Error < 2% Error < 3% Error < 5% 28 58 90 159 172 282 387 570 Table 6: Number of successful runs on 7290 random trials. Higher values are better. The models are trained on length 20 and tested on length 200. | 1511.06807#24 | 1511.06807#26 | 1511.06807 | [
"1508.05508"
] |
1511.06807#26 | Adding Gradient Noise Improves Learning for Very Deep Networks | # 5 CONCLUSION In this paper, we discussed a set of experiments which show the effectiveness of adding noise to the gradient. We found that adding noise to the gradient during training helps training and generalization of complicated neural networks. We suspect that the effects are pronounced for complex models because they have many local minima. We believe that this surprisingly simple yet effective idea, essentially a single line of code, should be in the toolset of neural network practitioners when facing issues with training neural networks. We also believe that this set of empirical results can give rise to further formal analysis of why adding noise is so effective for very deep neural networks. Acknowledgements We sincerely thank Marcin Andrychowicz, Dmitry Bahdanau, Samy Bengio, Oriol Vinyals for suggestions and the Google Brain team for help with the project. # REFERENCES | 1511.06807#25 | 1511.06807#27 | 1511.06807 | [
"1508.05508"
] |
1511.06807#27 | Adding Gradient Noise Improves Learning for Very Deep Networks | An, Guozhong. The effects of adding noise during backpropagation training on a generalization performance. Neural Computation, 1996. Bahdanau, Dzmitry, Cho, Kyunghyun, and Bengio, Yoshua. Neural machine translation by jointly learning to align and translate. ICLR, 2014. Blundell, Charles, Cornebise, Julien, Kavukcuoglu, Koray, and Wierstra, Daan. Weight uncertainty in neural networks. ICML, 2015. | 1511.06807#26 | 1511.06807#28 | 1511.06807 | [
"1508.05508"
] |
1511.06807#28 | Adding Gradient Noise Improves Learning for Very Deep Networks | Bottou, L´eon. Stochastic gradient learning in neural networks. In Neuro-N¨ımes, 1992. Bousquet, Olivier and Bottou, L´eon. The tradeoffs of large scale learning. In NIPS, 2008. Cho, Kyunghyun, Van Merri¨enboer, Bart, Gulcehre, Caglar, Bahdanau, Dzmitry, Bougares, Fethi, Schwenk, Holger, and Bengio, Yoshua. Learning phrase representations using RNN encoder- decoder for statistical machine translation. In EMNLP, 2014. Choromanska, Anna, Henaff, Mikael, Mathieu, Micha¨el, Arous, G´erard Ben, and LeCun, Yann. The loss surfaces of multilayer networks. In AISTATS, 2015. | 1511.06807#27 | 1511.06807#29 | 1511.06807 | [
"1508.05508"
] |
1511.06807#29 | Adding Gradient Noise Improves Learning for Very Deep Networks | 9 # Under review as a conference paper at ICLR 2016 Dean, Jeffrey, Corrado, Greg, Monga, Rajat, Chen, Kai, Devin, Matthieu, Mao, Mark, Senior, An- drew, Tucker, Paul, Yang, Ke, Le, Quoc V, et al. Large scale distributed deep networks. In NIPS, 2012. Duchi, John, Hazan, Elad, and Singer, Yoram. Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 2011. Glorot, Xavier and Bengio, Yoshua. Understanding the difï¬ culty of training deep feedforward neural networks. In Proc. AISTATS, pp. 249â 256, 2010. Graves, Alex. | 1511.06807#28 | 1511.06807#30 | 1511.06807 | [
"1508.05508"
] |
1511.06807#30 | Adding Gradient Noise Improves Learning for Very Deep Networks | Practical variational inference for neural networks. In NIPS, 2011. Graves, Alex. Generating sequences with recurrent neural networks. arXiv preprint arxiv:1308.0850, 2013. Graves, Alex, Wayne, Greg, and Danihelka, Ivo. Neural Turing Machines. arXiv preprint arXiv:1410.5401, 2014. He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. | 1511.06807#29 | 1511.06807#31 | 1511.06807 | [
"1508.05508"
] |
1511.06807#31 | Adding Gradient Noise Improves Learning for Very Deep Networks | Delving deep into rectiï¬ ers: Surpass- ing human-level performance on imagenet classiï¬ cation. ICCV, 2015. Hinton, Geoffrey and Roweis, Sam. Stochastic neighbor embedding. In NIPS, 2002. Hinton, Geoffrey, Deng, Li, Yu, Dong, Dahl, George, rahman Mohamed, Abdel, Jaitly, Navdeep, Senior, Andrew, Vanhoucke, Vincent, Nguyen, Patrick, Sainath, Tara, and Kingsbury, Brian. Deep neural networks for acoustic modeling in speech recognition. Signal Processing Magazine, 2012. Hochreiter, Sepp and Schmidhuber, J¨urgen. Long short-term memory. Neural Computation, 1997. Kaiser, Lukasz and Sutskever, Ilya. Neural GPUs learn algorithms. In Arxiv, 2015. Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Kirkpatrick, Scott, Vecchi, Mario P, et al. Optimization by simulated annealing. Science, 1983. Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet classiï¬ cation with deep con- volutional neural networks. In NIPS, 2012. Kurach, Karol, Andrychowicz, Marcin, and Sutskever, Ilya. Neural random access machine. In Arxiv, 2015. LeCun, Yann, Bottou, L´eon, Bengio, Yoshua, and Haffner, Patrick. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998. Nair, Vinod and Hinton, Geoffrey. | 1511.06807#30 | 1511.06807#32 | 1511.06807 | [
"1508.05508"
] |
1511.06807#32 | Adding Gradient Noise Improves Learning for Very Deep Networks | Rectiï¬ ed linear units improve Restricted Boltzmann Machines. In ICML, 2010. Neal, Radford M. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 2011. Neelakantan, Arvind, Le, Quoc V., and Sutskever, Ilya. Neural Programmer: Inducing latent pro- grams with gradient descent. In Arxiv, 2015. Pascanu, Razvan, Mikolov, Tomas, and Bengio, Yoshua. | 1511.06807#31 | 1511.06807#33 | 1511.06807 | [
"1508.05508"
] |
1511.06807#33 | Adding Gradient Noise Improves Learning for Very Deep Networks | On the difï¬ culty of training recurrent neural networks. Proc. ICML, 2013. Peng, Baolin, Lu, Zhengdong, Li, Hang, and Wong, Kam-Fai. Towards neural network-based rea- soning. arXiv preprint arxiv:1508.05508, 2015. Polyak, Boris Teodorovich. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 1964. | 1511.06807#32 | 1511.06807#34 | 1511.06807 | [
"1508.05508"
] |
1511.06807#34 | Adding Gradient Noise Improves Learning for Very Deep Networks | Robbins, Herbert and Monro, Sutton. A stochastic approximation method. The annals of mathemat- ical statistics, 1951. 10 # Under review as a conference paper at ICLR 2016 Srivastava, Nitish, Hinton, Geoffrey, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Dropout: A simple way to prevent neural networks from overï¬ tting. JMLR, 2014. Srivastava, Rupesh Kumar, Greff, Klaus, and Schmidhuber, J¨urgen. Training very deep networks. NIPS, 2015. Steijvers, Mark. A recurrent network that performs a context-sensitive prediction task. In CogSci, 1996. | 1511.06807#33 | 1511.06807#35 | 1511.06807 | [
"1508.05508"
] |
1511.06807#35 | Adding Gradient Noise Improves Learning for Very Deep Networks | Sukhbaatar, Sainbayar, Szlam, Arthur, Weston, Jason, and Fergus, Rob. End-to-end memory net- works. In NIPS, 2015. Sussillo, David and Abbott, L.F. Random walks: Training very deep nonlinear feed-forward net- works with smart initialization. Arxiv, 2014. Sutskever, Ilya, Martens, James, Dahl, George, and Hinton, Geoffrey. On the importance of initial- ization and momentum in deep learning. In ICML, 2013. Sutskever, Ilya, Vinyals, Oriol, and Le, Quoc V. Sequence to sequence learning with neural net- works. In NIPS, 2014. | 1511.06807#34 | 1511.06807#36 | 1511.06807 | [
"1508.05508"
] |
1511.06807#36 | Adding Gradient Noise Improves Learning for Very Deep Networks | Welling, Max and Teh, Yee Whye. Bayesian learning via stochastic gradient Langevin dynamics. In ICML, 2011. Weston, Jason, Chopra, Sumit, and Bordes, Antoine. Memory networks. arXiv preprint arXiv:1410.3916, 2014. Weston, Jason, Bordes, Antoine, Chopra, Sumit, and Mikolov, Tomas. Towards AI-complete ques- tion answering: a set of prerequisite toy tasks. In ICML, 2015. Zeiler, Matthew D. Adadelta: An adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012. 11 | 1511.06807#35 | 1511.06807 | [
"1508.05508"
] |
|
1511.06789#0 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | arXiv:1511.06789v3 [cs.CV] 18 Oct 2016 # The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition Jonathan Krause1*, Benjamin Sapp2**, Andrew Howard3, Howard Zhou3, Alexander Toshev3, Tom Duerig3, James Philbin2**, Li Fei-Fei1 | 1511.06789#1 | 1511.06789 | [
"1503.01817"
] |
|
1511.06789#1 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | # 1Stanford University # 2Zoox # 3Google {jkrause,feifeili}@cs.stanford.edu {bensapp,james}@zoox.com {howarda,howardzhou,toshev,tduerig}@google.com Abstract. Current approaches for fine-grained recognition do the following: First, recruit experts to annotate a dataset of images, optionally also collecting more structured data in the form of part annotations and bounding boxes. Second, train a model utilizing this data. | 1511.06789#0 | 1511.06789#2 | 1511.06789 | [
"1503.01817"
] |
1511.06789#2 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Toward the goal of solving fine-grained recognition, we introduce an alternative approach, leveraging free, noisy data from the web and simple, generic methods of recognition. This approach has benefits in both performance and scalability. We demonstrate its efficacy on four fine-grained datasets, greatly exceeding existing state of the art without the manual collection of even a single label, and furthermore show first results at scaling to more than 10,000 fine-grained categories. Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using their annotated training sets. We compare our approach to an active learning approach for expanding fine-grained datasets. # 1 Introduction Fine-grained recognition refers to the task of distinguishing very similar categories, such as breeds of dogs [27,37], species of birds [60,58,5,4], or models of cars [70,30]. Since its inception, great progress has been made, with accuracies on the popular CUB-200-2011 bird dataset [60] steadily increasing from 10.3% [60] to 84.6% [69]. The predominant approach in fine-grained recognition today consists of two steps. First, a dataset is collected. | 1511.06789#1 | 1511.06789#3 | 1511.06789 | [
"1503.01817"
] |
1511.06789#3 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Since fine-grained recognition is a task inherently difficult for humans, this typically requires either recruiting a team of experts [58,38] or extensive crowd-sourcing pipelines [30,4]. Second, a method for recognition is trained using these expert-annotated labels, possibly also requiring additional annotations in the form of parts, attributes, or relationships [75,26,36,5]. While methods following this approach have shown some success [5,75,36,28], their performance and scalability is constrained by the paucity * Work done while J. Krause was interning at Google ** Work done while B. Sapp and J. Philbin were at Google Fig. 1. | 1511.06789#2 | 1511.06789#4 | 1511.06789 | [
"1503.01817"
] |
1511.06789#4 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | There are more than 14,000 species of birds in the world. In this work we show that using noisy data from publicly-available online sources can not only improve recognition of categories in today's datasets, but also scale to very large numbers of fine-grained categories, which is extremely expensive with the traditional approach of manually collecting labels for fine-grained datasets. Here we show 4,225 of the 10,982 categories recognized in this work. of data available due to these limitations. With this traditional approach it is prohibitive to scale up to all 14,000 species of birds in the world (Fig. 1), 278,000 species of butterflies and moths, or 941,000 species of insects [24]. In this paper, we show that it is possible to train effective models of fine-grained recognition using noisy data from the web and simple, generic methods of recognition [55,54]. We demonstrate recognition abilities greatly exceeding current state of the art methods, achieving top-1 accuracies of 92.3% on CUB-200-2011 [60], 85.4% on Birdsnap [4], 93.4% on FGVC-Aircraft [38], and 80.8% on Stanford Dogs [27] without using a single manually-annotated training label from the respective datasets. On CUB, this is nearly at the level of human experts [6,58]. Building upon this, we scale up the number of fine-grained classes recognized, reporting first results on over 10,000 species of birds and 14,000 species of butterflies and moths. | 1511.06789#3 | 1511.06789#5 | 1511.06789 | [
"1503.01817"
] |