id (string, 12–15 chars) | title (string, 8–162 chars) | content (string, 1–17.6k chars) | prechunk_id (string, 0–15 chars) | postchunk_id (string, 0–15 chars) | arxiv_id (string, 10 chars) | references (sequence, length 1) |
---|---|---|---|---|---|---|
1602.01137#35 | A Dual Embedding Space Model for Document Ranking | cant terms that they expect to match in the target document to formulate their search queries. Therefore, in the query corpus, one may say that the less important terms from the document corpus have been filtered out. When training on the query corpus, the CBOW model is therefore more likely to see important terms within the context window than when it is trained on a corpus of document body text, which may make the query corpus a better training dataset for the Word2vec model. # 5. RELATED WORK | 1602.01137#34 | 1602.01137#36 | 1602.01137 | [
"1510.02675"
] |
1602.01137#36 | A Dual Embedding Space Model for Document Ranking | The probabilistic model of information retrieval leads to the development of the BM25 ranking feature [35]. The increase in BM25 as term frequency increases is justified according to the 2-Poisson model [15, 36], which makes a distinction between documents about a term and documents that merely mention that term. Those two types of document have term frequencies drawn from two different Poisson distributions, which justifies the use of term frequency as evidence of aboutness. By contrast, the model introduced in this paper uses the occurrence of other related terms as evidence of aboutness. For example, under the 2-Poisson model a document about Eminem will tend to mention the term "eminem" | 1602.01137#35 | 1602.01137#37 | 1602.01137 | [
"1510.02675"
] |
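The chunk above contrasts BM25's term-frequency evidence of aboutness with the embedding-based notion used in this paper. For reference, the sketch below implements one standard formulation of BM25 (in the spirit of Robertson and Zaragoza [35]); the k1 and b values are common defaults chosen for illustration, not parameters reported in this paper.

```python
# Hedged sketch: standard BM25 scoring, with illustrative k1/b defaults.
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, doc_freq, num_docs, avg_doc_len, k1=1.2, b=0.75):
    """Score one document (list of tokens) against a query (list of tokens)."""
    tf = Counter(doc_terms)
    doc_len = len(doc_terms)
    score = 0.0
    for term in query_terms:
        if term not in tf or term not in doc_freq:
            continue
        idf = math.log(1.0 + (num_docs - doc_freq[term] + 0.5) / (doc_freq[term] + 0.5))
        numer = tf[term] * (k1 + 1.0)
        denom = tf[term] + k1 * (1.0 - b + b * doc_len / avg_doc_len)
        score += idf * numer / denom  # saturates as tf grows, echoing the 2-Poisson intuition
    return score
```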
1602.01137#37 | A Dual Embedding Space Model for Document Ranking | repeatedly. Under our all-pairs vector model, a document about Eminem will tend to contain more related terms such as "rap", "tracklist" and "performs". Our experiments show both notions of aboutness to be useful. Neural embeddings for IR. The word embeddings produced by the CBOW and SG models have been shown to be surprisingly effective at capturing detailed semantics useful for various Natural Language Processing (NLP) and reasoning tasks, including word analogies [28, 29]. Recent papers have explored in detail the SG and CBOW training methodology [11, 37] and its connection to other approaches for learning word embeddings such as explicit vector space representations [23, 24], matrix factorization [22, 33, 42] and density-based representations [45]. | 1602.01137#36 | 1602.01137#38 | 1602.01137 | [
"1510.02675"
] |
1602.01137#38 | A Dual Embedding Space Model for Document Ranking | Term-based IR. For an overview of lexical matching approaches for information retrieval, such as the vector space, probabilistic and language modelling approaches, see [26]. In Salton's classic vector space model [39], queries and documents are represented as sparse vectors in a vector space of dimensionality |V|, where V is the word vocabulary. Elements in the vector are non-zero if the corresponding term occurs. Documents can be ranked in descending order of cosine similarity with the query, although a wide variety of weighting and similarity functions are possible [51]. In contrast to the classical vector space model, LSA [8], PLSA [17] and LDA [5, 47] learn dense vector representations of much lower dimensionality. It has been suggested that these models perform poorly as standalone retrieval models [1] unless combined with other TF-IDF-like features. In our approach the query and documents are also low-dimensional dense vectors. We learn 200-dimensional neural word embeddings, and generate document vectors as the centroids of all the word vectors. Yan et al. [49] suggested that term correlation data is less sparse than the term-document matrix and hence may be more effective for training embeddings. Baroni et al. [3] evaluated neural word embeddings against traditional word counting approaches and demonstrated the success of the former on a variety of NLP tasks. However, more recent works [16, 40] have shown that there does not seem to be one embedding approach that is best for all tasks. This observation is similar to ours, where we note that IN-IN and IN-OUT model different kinds of word relationships. Although IN-IN, for example, works well for word analogy tasks [28, 29], it might perform less effectively for other tasks, such as those in information retrieval. If so, instead of claiming that any one embedding captures "semantics", it is probably better to characterize embeddings according to which tasks they perform well on. | 1602.01137#37 | 1602.01137#39 | 1602.01137 | [
"1510.02675"
] |
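The chunk above describes building low-dimensional document vectors as centroids of word embeddings and ranking by cosine similarity with the query. A minimal sketch of that construction follows; the `embeddings` lookup table and the choice to unit-normalize each word vector before averaging are assumptions of this sketch, not details taken verbatim from the paper.

```python
# Hedged sketch: centroid document vectors + cosine-similarity ranking.
import numpy as np

def document_centroid(tokens, embeddings):
    """Average of the unit-normalized word vectors; None if no token is in the vocabulary."""
    vecs = [embeddings[t] / np.linalg.norm(embeddings[t]) for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else None

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_documents(query_tokens, docs, embeddings):
    """docs: list of token lists. Returns document indices sorted by descending query similarity."""
    q = document_centroid(query_tokens, embeddings)
    scores = []
    for d in docs:
        c = document_centroid(d, embeddings)
        scores.append(cosine(q, c) if q is not None and c is not None else float("-inf"))
    return sorted(range(len(docs)), key=lambda i: -scores[i])
```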
1602.01137#39 | A Dual Embedding Space Model for Document Ranking | Our paper is not the first to apply neural word embeddings in IR. Ganguly et al. [9] recently proposed a generalized language model for IR that incorporates IN-IN similarities. The similarities are used to expand and reweight the terms in each document, which seems to be motivated by intuitions similar to ours, where a term is reinforced if a similar term occurs in the query. In their case, after greatly expanding the document vocabulary, they perform retrieval based on word occurrences rather than in an embedding space. Word | 1602.01137#38 | 1602.01137#40 | 1602.01137 | [
"1510.02675"
] |
1602.01137#40 | A Dual Embedding Space Model for Document Ranking | [Figure 4 panels (histograms over the Rel., Irrel. (J) and Irrel. (R) sets): IN-OUT, BM25, IN-IN, and BM25 + IN-OUT with α = 0.97.] Figure 4: Feature distributions over three sets of documents: Rel. retrieved by Bing and judged relevant, Irrel. (J) retrieved by Bing and judged irrelevant, and Irrel. (R) random documents not retrieved for this query. Our telescoping evaluation setup only uses the first two sets, whose distributions are quite close in all four plots. IN-OUT may have the greatest difference between Rel. and Irrel. (J), which corresponds to its good telescoping NDCG results. BM25 is far superior at separating Irrel. (R) results from the rest, which explains the success of BM25 and mixture models in non-telescoping evaluation. embeddings have also been studied in other IR contexts such as term reweighting [50], cross-lingual retrieval [14, 46, 52] and short-text similarity [20]. Beyond word co-occurrence, recent studies have also explored learning text embeddings from clickthrough data [18, 41], session data [12, 13, 30], query prefi | 1602.01137#39 | 1602.01137#41 | 1602.01137 | [
"1510.02675"
] |
1602.01137#41 | A Dual Embedding Space Model for Document Ranking | x-suffix pairs [31], via auto-encoders [38], and for sentiment classification [44] and for long text [21]. # 6. DISCUSSION AND CONCLUSION We have also identified and investigated a failure of embedding-based ranking: performance is highly dependent on the relevancy of the initial candidate set of documents to be ranked. While standalone DESM clearly bests BM25 and LSA on ranking telescoped datasets (Table 3), the same embedding model needs to be combined with BM25 to perform well on a raw, unfiltered document collection (Table 4). However, this is not a significant deficiency of the DESM, as telescoping is a common initial step in industrial IR pipelines [7]. Moreover, our DESM is especially well suited for late-stage ranking since it incurs little computational overhead, only requiring the document's centroid (which can be precomputed and stored) and its cosine similarity with the query. This paper motivated and evaluated the use of neural word embeddings to gauge a document's aboutness with respect to a query. Mapping words to points in a shared semantic space allows a query term to be compared against all terms in the document, providing for a refined relevance scoring. We formulate a Dual Embedding Space Model (DESM) that leverages the often discarded output embeddings learned by the CBOW model. Our model exploits a novel use of both the input and output embeddings to capture topic-based semantic relationships. The examples in Table 1 show that drastically different nearest neighbors can be found by using proximity in the IN-OUT vs. the IN-IN space. We have demonstrated through intuition and large-scale experimentation that ranking via proximity in IN-OUT space is better for retrieval than IN-IN based rankers. | 1602.01137#40 | 1602.01137#42 | 1602.01137 | [
"1510.02675"
] |
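The chunk above notes that the DESM only needs a precomputed document centroid and its cosine similarity with the query, and that on unfiltered collections the embedding score is combined with BM25. The sketch below shows one plausible reading of that scoring: each query term is taken from the IN space, the document centroid from the OUT space, and the two evidence sources are linearly mixed. The α = 0.97 value echoes the mixture shown in Figure 4; the exact formulation and normalization used in the paper may differ.

```python
# Hedged sketch: IN-OUT DESM score and a linear BM25 mixture (interpretation, not released code).
import numpy as np

def desm_in_out(query_tokens, doc_centroid_out, in_embeddings):
    """Average cosine between each query term's IN vector and the document's OUT-space centroid."""
    sims = []
    for t in query_tokens:
        if t not in in_embeddings:
            continue
        q = in_embeddings[t]
        sims.append(float(np.dot(q, doc_centroid_out) /
                          (np.linalg.norm(q) * np.linalg.norm(doc_centroid_out))))
    return float(np.mean(sims)) if sims else 0.0

def mixture_score(desm, bm25, alpha=0.97):
    # Linear interpolation of the two evidence sources.
    return alpha * desm + (1.0 - alpha) * bm25
```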
1602.01137#42 | A Dual Embedding Space Model for Document Ranking | This finding emphasizes that usage of the CBOW and SG models is application dependent and that quantifying semantic relatedness via cosine similarity in IN space should not be a default practice. In addition to proposing an effective and efficient ranking scheme, our work suggests multiple avenues for further investigation. Can the IN-IN and the IN-OUT based distances be incorporated into other stages of the IR pipeline, such as in pseudo relevance feedback and for query expansion? Are there better ways to compose word-level embeddings into document-level representations? Is there a principled way to filter the noisy comparisons that degrade performance on the non-telescoped datasets? Content-based document retrieval is a difficult problem. Not only is language inherently subtle and ambiguous, allowing for the same ideas to be represented by a multitude of different words, but the appearance of a given word in a document does not necessarily mean that document is relevant. While TF-IDF features such as BM25 are a proven source of evidence for aboutness, they are not sufficiently precise to rank highly relevant documents ahead of fairly relevant [Figure 5 panels: Relevant, Irrelevant (judged) and Irrelevant (unjudged) documents; caption follows below.] | 1602.01137#41 | 1602.01137#43 | 1602.01137 | [
"1510.02675"
] |
1602.01137#43 | A Dual Embedding Space Model for Document Ranking | Figure 5: Bivariate analysis of our lexical matching and neural word embedding features (BM25 on the horizontal axes). On unjudged (random) documents, BM25 is very successful at giving a zero score, but both IN-IN and IN-OUT give a range of scores. This explains their poor performance in non-telescoping evaluation. For the judged relevant and judged irrelevant sets, we see a range of cases where both types of feature fail. For example, BM25 has both false positives, where an irrelevant document mentions the query terms, and false negatives, where a relevant document does not mention the query terms. | 1602.01137#42 | 1602.01137#44 | 1602.01137 | [
"1510.02675"
] |
1602.01137#44 | A Dual Embedding Space Model for Document Ranking | documents. To do that task well, all of a document's words must be considered. Neural word embeddings, and specifically our DESM, provide an effective and efficient way for all words in a document to contribute, resulting in rankings attuned to semantic subtleties. References [1] A. Atreya and C. Elkan. Latent semantic indexing (LSI) fails for TREC collections. ACM SIGKDD Explorations Newsletter, 12(2):5–10, 2011. optimizations for additive machine learned ranking systems. In Proc. WSDM, pages 411–420. ACM, 2010. [8] S. C. Deerwester, S. T. Dumais, T. K. Landauer, G. W. Furnas, and R. A. Harshman. Indexing by latent semantic analysis. JASIS, 41(6):391–407, 1990. [9] D. Ganguly, D. Roy, M. Mitra, and G. J. Jones. | 1602.01137#43 | 1602.01137#45 | 1602.01137 | [
"1510.02675"
] |
1602.01137#45 | A Dual Embedding Space Model for Document Ranking | Word embedding based generalized language model for information retrieval. In Proc. SIGIR, pages 795â 798. ACM, 2015. [2] R. Baeza-Yates, P. Boldi, and F. Chierichetti. Essential web pages are easy to ï¬ nd. pages 97â 107. International World Wide Web Conferences Steering Committee, 2015. [10] J. Gao, K. Toutanova, and W.-t. | 1602.01137#44 | 1602.01137#46 | 1602.01137 | [
"1510.02675"
] |
1602.01137#46 | A Dual Embedding Space Model for Document Ranking | Yih. Clickthrough-based latent semantic models for web search. In Proc. SIGIR, pages 675â 684. ACM, 2011. [3] M. Baroni, G. Dinu, and G. Kruszewski. Donâ t count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In Proc. ACL, volume 1, pages 238â 247, 2014. [4] Y. Bengio, R. Ducharme, P. Vincent, and C. Janvin. | 1602.01137#45 | 1602.01137#47 | 1602.01137 | [
"1510.02675"
] |
1602.01137#47 | A Dual Embedding Space Model for Document Ranking | A neural probabilistic language model. JMLR, 3:1137â 1155, 2003. [11] Y. Goldberg and O. Levy. word2vec explained: deriving mikolov et al.â s negative-sampling word-embedding method. arXiv preprint arXiv:1402.3722, 2014. [12] M. Grbovic, N. Djuric, V. Radosavljevic, and N. | 1602.01137#46 | 1602.01137#48 | 1602.01137 | [
"1510.02675"
] |
1602.01137#48 | A Dual Embedding Space Model for Document Ranking | Bhamidipati. Search retargeting using directed query embeddings. In Proc. WWW, pages 37â 38. International World Wide Web Conferences Steering Committee, 2015. [5] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent dirichlet allocation. the Journal of machine Learning research, 3: 993â 1022, 2003. [6] A. Bookstein and D. R. Swanson. | 1602.01137#47 | 1602.01137#49 | 1602.01137 | [
"1510.02675"
] |
1602.01137#49 | A Dual Embedding Space Model for Document Ranking | Probabilistic models for automatic indexing. JASIS, 25(5):312â 316, 1974. [7] B. B. Cambazoglu, H. Zaragoza, O. Chapelle, J. Chen, C. Liao, Z. Zheng, and J. Degenhardt. Early exit [13] M. Grbovic, N. Djuric, V. Radosavljevic, F. Silvestri, and N. | 1602.01137#48 | 1602.01137#50 | 1602.01137 | [
"1510.02675"
] |
1602.01137#50 | A Dual Embedding Space Model for Document Ranking | Bhamidipati. Context-and content-aware embeddings for query rewriting in sponsored search. In Proc. SIGIR, pages 383â 392. ACM, 2015. [14] P. Gupta, K. Bali, R. E. Banchs, M. Choudhury, and P. Rosso. Query expansion for mixed-script information retrieval. In Proc. SIGIR, pages 677â 686. ACM, 2014. [15] S. P. Harter. | 1602.01137#49 | 1602.01137#51 | 1602.01137 | [
"1510.02675"
] |
1602.01137#51 | A Dual Embedding Space Model for Document Ranking | A probabilistic approach to automatic keyword indexing. JASIS, 26(5):280â 289, 1975. [16] F. Hill, K. Cho, S. Jean, C. Devin, and Y. Bengio. Not all neural embeddings are born equal. arXiv preprint arXiv:1410.0718, 2014. [17] T. Hofmann. Probabilistic latent semantic indexing. In Proc. SIGIR, pages 50â 57. ACM, 1999. [18] P.-S. Huang, X. He, J. Gao, L. Deng, A. Acero, and L. Heck. | 1602.01137#50 | 1602.01137#52 | 1602.01137 | [
"1510.02675"
] |
1602.01137#52 | A Dual Embedding Space Model for Document Ranking | Learning deep structured semantic models for web search using clickthrough data. In Proc. CIKM, pages 2333â 2338. ACM, 2013. [19] R. Jones, B. Rey, O. Madani, and W. Greiner. Generating query substitutions. In Proc. WWW â 06, pages 387â 396, 2006. [20] T. Kenter and M. de Rijke. Short text similarity with word embeddings. In Proc. CIKM, volume 15, page 115. [21] Q. V. Le and T. Mikolov. Distributed representations of sentences and documents. arXiv preprint arXiv:1405.4053, 2014. [22] O. Levy and Y. Goldberg. | 1602.01137#51 | 1602.01137#53 | 1602.01137 | [
"1510.02675"
] |
1602.01137#53 | A Dual Embedding Space Model for Document Ranking | Neural word embedding as implicit matrix factorization. In Advances in Neural Information Processing Systems, pages 2177â 2185, 2014. [23] O. Levy, Y. Goldberg, and I. Ramat-Gan. Linguistic regularities in sparse and explicit word representations. CoNLL-2014, page 171, 2014. [24] O. Levy, Y. Goldberg, and I. Dagan. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211â 225, 2015. [25] M.-T. Luong, I. Sutskever, Q. V. Le, O. Vinyals, and W. | 1602.01137#52 | 1602.01137#54 | 1602.01137 | [
"1510.02675"
] |
1602.01137#54 | A Dual Embedding Space Model for Document Ranking | Zaremba. Addressing the rare word problem in neural machine translation. In Proc. ACL, 2015. [26] C. D. Manning, P. Raghavan, H. Schütze, et al. Introduction to information retrieval, volume 1. Cambridge university press Cambridge, 2008. [27] I. Matveeva, C. Burges, T. Burkard, A. Laucius, and L. | 1602.01137#53 | 1602.01137#55 | 1602.01137 | [
"1510.02675"
] |
1602.01137#55 | A Dual Embedding Space Model for Document Ranking | Wong. High accuracy retrieval with multiple nested ranker. pages 437â 444. ACM, 2006. [28] T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efï¬ cient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013. [29] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. | 1602.01137#54 | 1602.01137#56 | 1602.01137 | [
"1510.02675"
] |
1602.01137#56 | A Dual Embedding Space Model for Document Ranking | Distributed representations of words and phrases and their compositionality. In Proc. NIPS, pages 3111â 3119, 2013. [30] B. Mitra. Exploring session context using distributed representations of queries and reformulations. In Proc. SIGIR, pages 3â 12. ACM, 2015. [31] B. Mitra and N. Craswell. Query auto-completion for rare preï¬ xes. In Proc. CIKM. ACM, 2015. [32] E. Nalisnick, B. Mitra, N. Craswell, and R. Caruana. | 1602.01137#55 | 1602.01137#57 | 1602.01137 | [
"1510.02675"
] |
1602.01137#57 | A Dual Embedding Space Model for Document Ranking | Improving document ranking with dual word embeddings. In Proc. WWW. International World Wide Web Conferences Steering Committee, to appear, 2016. [33] J. Pennington, R. Socher, and C. D. Manning. Glove: Global vectors for word representation. Proc. EMNLP, 12: 1532â 1543, 2014. [34] S. Robertson. Understanding inverse document frequency: on theoretical arguments for idf. Journal of documentation, 60 (5):503â 520, 2004. [35] S. Robertson and H. | 1602.01137#56 | 1602.01137#58 | 1602.01137 | [
"1510.02675"
] |
1602.01137#58 | A Dual Embedding Space Model for Document Ranking | Zaragoza. The probabilistic relevance framework: BM25 and beyond. Now Publishers Inc, 2009. [36] S. E. Robertson and S. Walker. Some simple effective approximations to the 2-poisson model for probabilistic weighted retrieval. pages 232â 241. Springer-Verlag New York, Inc., 1994. [37] X. Rong. word2vec parameter learning explained. arXiv preprint arXiv:1411.2738, 2014. [38] R. Salakhutdinov and G. Hinton. Semantic hashing. International Journal of Approximate Reasoning, 50(7): 969â 978, 2009. [39] G. Salton, A. Wong, and C.-S. Yang. | 1602.01137#57 | 1602.01137#59 | 1602.01137 | [
"1510.02675"
] |
1602.01137#59 | A Dual Embedding Space Model for Document Ranking | A vector space model for automatic indexing. Communications of the ACM, 18(11): 613â 620, 1975. [40] T. Schnabel, I. Labutov, D. Mimno, and T. Joachims. Evaluation methods for unsupervised word embeddings. In Proc. EMNLP, 2015. [41] Y. Shen, X. He, J. Gao, L. Deng, and G. | 1602.01137#58 | 1602.01137#60 | 1602.01137 | [
"1510.02675"
] |
1602.01137#60 | A Dual Embedding Space Model for Document Ranking | Mesnil. Learning semantic representations using convolutional neural networks for web search. In Proc. WWW, pages 373â 374, 2014. [42] T. Shi and Z. Liu. Linking glove with word2vec. arXiv preprint arXiv:1411.5595, 2014. [43] A. Singhal, C. Buckley, and M. Mitra. Pivoted document length normalization. In Proceedings of the 19th annual international ACM SIGIR conference on Research and development in information retrieval, pages 21â | 1602.01137#59 | 1602.01137#61 | 1602.01137 | [
"1510.02675"
] |
1602.01137#61 | A Dual Embedding Space Model for Document Ranking | 29. ACM, 1996. [44] D. Tang, F. Wei, N. Yang, M. Zhou, T. Liu, and B. Qin. Learning sentiment-speciï¬ c word embedding for twitter sentiment classiï¬ cation. In Proc. ACL, volume 1, pages 1555â 1565, 2014. [45] L. Vilnis and A. McCallum. Word representations via gaussian embedding. arXiv preprint arXiv:1412.6623, 2014. [46] I. Vuli´c and M.-F. Moens. | 1602.01137#60 | 1602.01137#62 | 1602.01137 | [
"1510.02675"
] |
1602.01137#62 | A Dual Embedding Space Model for Document Ranking | Monolingual and cross-lingual information retrieval models based on (bilingual) word embeddings. In Proc. SIGIR, pages 363â 372. ACM, 2015. [47] X. Wei and W. B. Croft. Lda-based document models for ad-hoc retrieval. In Proc. SIGIR, pages 178â 185. ACM, 2006. [48] B. J. Wilson and A. M. J. Schakel. | 1602.01137#61 | 1602.01137#63 | 1602.01137 | [
"1510.02675"
] |
1602.01137#63 | A Dual Embedding Space Model for Document Ranking | Controlled experiments for word embeddings. arXiv preprint arXiv:1510.02675, 2015. [49] X. Yan, J. Guo, S. Liu, X. Cheng, and Y. Wang. Learning topics in short texts by non-negative matrix factorization on term correlation matrix. In Proceedings of the SIAM International Conference on Data Mining, 2013. [50] G. Zheng and J. Callan. | 1602.01137#62 | 1602.01137#64 | 1602.01137 | [
"1510.02675"
] |
1602.01137#64 | A Dual Embedding Space Model for Document Ranking | Learning to reweight terms with distributed representations. In Proc. SIGIR, pages 575â 584. ACM, 2015. [51] J. Zobel and A. Moffat. Exploring the similarity space. In ACM SIGIR Forum, volume 32, pages 18â 34. ACM, 1998. [52] W. Y. Zou, R. Socher, D. M. Cer, and C. D. Manning. | 1602.01137#63 | 1602.01137#65 | 1602.01137 | [
"1510.02675"
] |
1602.01137#65 | A Dual Embedding Space Model for Document Ranking | Bilingual word embeddings for phrase-based machine translation. In EMNLP, pages 1393â 1398, 2013. | 1602.01137#64 | 1602.01137 | [
"1510.02675"
] |
|
1602.00367#0 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | # Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers Yijun Xiao Center for Data Sciences, New York University [email protected] Kyunghyun Cho Courant Institute and Center for Data Science, New York University [email protected] # Abstract Document classification tasks were primarily tackled at word level. Recent research that works with character-level inputs shows several benefits over word-level approaches, such as natural incorporation of morphemes and better handling of rare words. We propose a neural network architecture that utilizes both convolution and recurrent layers to efficiently encode character inputs. We validate the proposed model on eight large-scale document classification tasks and compare with character-level convolution-only models. It achieves comparable performances with many fewer parameters. | 1602.00367#1 | 1602.00367 | [
"1508.06615"
] |
|
1602.00367#1 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | 1 # Introduction Document classification is a task in natural language processing where one needs to assign a single or multiple predefined categories to a sequence of text. A conventional approach to document classification generally consists of a feature extraction stage followed by a classification stage. For instance, it is usual to use a TF-IDF vector of a given document as an input feature to a subsequent classifier. More recently, it has become more common to use a deep neural network, which jointly performs feature extraction and classification, for document classification (Kim, 2014; Mesnil et al., 2014; Socher et al., 2013; Carrier and Cho, 2014). In most cases, an input document is represented as a sequence of words, each of which is presented as a one-hot vector.1 Each word in the sequence is projected into a | 1602.00367#0 | 1602.00367#2 | 1602.00367 | [
"1508.06615"
] |
1602.00367#2 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | continuous vector space by being multiplied with a weight matrix, forming a sequence of dense, real-valued vectors. This sequence is then fed into a deep neural network which processes the sequence in multiple layers, resulting in a prediction probability. This whole pipeline, or network, is tuned jointly to maximize the classification accuracy on a training set. One important aspect of these recent approaches based on deep learning is that they often work at the level of words. Despite its recent success, the word-level approach has a number of major shortcomings. | 1602.00367#1 | 1602.00367#3 | 1602.00367 | [
"1508.06615"
] |
1602.00367#3 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | elements are all zeros, except for the i-th element which is set to one. Recently this year, a number of researchers have noticed that it is not at all necessary for a deep neu- ral network to work at the word level. As long as the document is represented as a sequence of one-hot vectors, the model works without any change, re- gardless of whether each one-hot vector corresponds to a word, a sub-word unit or a character. Based on this intuition, Kim et al. (Kim et al., 2015) and Ling et al. (Ling et al., 2015) proposed to use a char- acter sequence as an alternative to the word-level one-hot vector. A similar idea was applied to de- pendency parsing in (Ballesteros et al., 2015). The work in this direction, most relevant to this paper, is the character-level convolutional network for doc- ument classiï¬ cation by Zhang et al. (Zhang et al., 2015). The character-level convolutional net in (Zhang et al., 2015) is composed of many layers of convolu- tion and max-pooling, similarly to the convolutional network in computer vision (see, e.g., (Krizhevsky et al., 2012).) Each layer ï¬ rst extracts features from small, overlapping windows of the input sequence and pools over small, non-overlapping windows by taking the maximum activations in the window. This is applied recursively (with untied weights) for many times. | 1602.00367#2 | 1602.00367#4 | 1602.00367 | [
"1508.06615"
] |
1602.00367#4 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | elements are all zeros, except for the i-th element which is set to one. Recently this year, a number of researchers have noticed that it is not at all necessary for a deep neural network to work at the word level. As long as the document is represented as a sequence of one-hot vectors, the model works without any change, regardless of whether each one-hot vector corresponds to a word, a sub-word unit or a character. Based on this intuition, Kim et al. (Kim et al., 2015) and Ling et al. (Ling et al., 2015) proposed to use a character sequence as an alternative to the word-level one-hot vector. A similar idea was applied to dependency parsing in (Ballesteros et al., 2015). The work in this direction most relevant to this paper is the character-level convolutional network for document classification by Zhang et al. (Zhang et al., 2015). The character-level convolutional net in (Zhang et al., 2015) is composed of many layers of convolution and max-pooling, similarly to the convolutional networks in computer vision (see, e.g., (Krizhevsky et al., 2012)). Each layer first extracts features from small, overlapping windows of the input sequence and pools over small, non-overlapping windows by taking the maximum activations in the window. This is applied recursively (with untied weights) many times. | 1602.00367#3 | 1602.00367#5 | 1602.00367 | [
"1508.06615"
] |
1602.00367#5 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | The final convolutional layer's activation is flattened to form a vector which is then fed into a small number of fully-connected layers followed by the classification layer. We notice that the use of a vanilla convolutional network for character-level document classification has one shortcoming. As the receptive field of each convolutional layer is often small (7 or 3 in (Zhang et al., 2015)), the network must have many layers in order to capture long-term dependencies in an input sentence. This is likely the reason why Zhang et al. (Zhang et al., 2015) used a very deep convolutional network with six convolutional layers followed by two fully-connected layers. In order to overcome this inefficiency in modeling a character-level sequence, in this paper we propose to make a hybrid of convolutional and recurrent networks. This was motivated by recent successes of applying recurrent networks to natural languages (see, e.g., (Cho et al., 2014; Sundermeyer et al., 2015)) and by the fact that a recurrent network can efficiently capture long-term dependencies even with a single layer. The hybrid model processes an input sequence of characters with a number of convolutional layers followed by a single recurrent layer. Because the recurrent layer, consisting of either gated recurrent units (GRU, (Cho et al., 2014)) or long short-term memory units (LSTM, (Hochreiter and Schmidhuber, 1997; Gers et al., 2000)), can efficiently capture long-term dependencies, the proposed network only needs a very small number of convolutional layers. We empirically validate the proposed model, to which we refer as a convolution-recurrent network, on the eight large-scale document classification tasks from (Zhang et al., 2015). We mainly compare the proposed model against the convolutional network in (Zhang et al., 2015) and show that it is indeed possible to use a much smaller model to achieve the same level of classification performance when a recurrent layer is put on top of the convolutional layers. | 1602.00367#4 | 1602.00367#6 | 1602.00367 | [
"1508.06615"
] |
1602.00367#6 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | # 2 Basic Building Blocks: Neural Network Layers In this section, we describe four basic layers in a neural network that will be used later to constitute a single network for classifying a document. # 2.1 Embedding Layer As mentioned earlier, each document is represented as a sequence of one-hot vectors. A one-hot vector of the i-th symbol in a vocabulary is a binary vector whose elements are all zeros except for the i-th element, which is set to one. Therefore, each document is a sequence of T one-hot vectors (x_1, x_2, . . . , x_T). An embedding layer projects each of the one-hot vectors into a d-dimensional continuous vector space R^d. This is done by simply multiplying the one-hot vector from the left with a weight matrix W ∈ R^{d×|V|}, where |V| is the number of unique symbols in the vocabulary: e_t = W x_t. After the embedding layer, the input sequence of one-hot vectors becomes a sequence of dense, real-valued vectors (e_1, e_2, . . . , e_T). | 1602.00367#5 | 1602.00367#7 | 1602.00367 | [
"1508.06615"
] |
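Because each x_t is one-hot, the product W x_t simply selects one column of W, so an embedding layer reduces to an index lookup. The sketch below uses the vocabulary size and embedding dimension reported later in this paper (|V| = 96, d = 8); the random initialization is an assumption of the sketch.

```python
# Hedged sketch: an embedding layer as a column lookup into W.
import numpy as np

rng = np.random.default_rng(0)
V, d = 96, 8                              # vocabulary size and embedding dimension used in this paper
W = rng.normal(scale=0.1, size=(d, V))    # embedding matrix, one column per symbol

def embed(char_ids):
    """Multiplying W by a one-hot vector selects a column, so we index directly."""
    return W[:, char_ids].T               # shape (T, d): one d-dimensional vector per character
```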
1602.00367#7 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | # 2.2 Convolutional Layer A convolutional layer consists of two stages. In the first stage, a set of d' filters of receptive field size r, F ∈ R^{d'×rd}, is applied to the input sequence: f_t = φ(F [e_{t−(r/2)+1}; · · · ; e_t; · · · ; e_{t+(r/2)}]), where φ is a nonlinear activation function such as tanh or a rectifier. This is done for every time step of the input sequence, resulting in a sequence F = (f_1, f_2, . . . , f_T). The resulting sequence F is max-pooled with size r': f'_t = max(f_{r'(t−1)+1}, . . . , f_{r't}), where max applies to each element of the vectors, resulting in a sequence F' = (f'_1, f'_2, . . . , f'_{T/r'}). | 1602.00367#6 | 1602.00367#8 | 1602.00367 | [
"1508.06615"
] |
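A minimal sketch of the two-stage convolutional layer described above: a filter bank applied to windows of r consecutive embeddings, followed by element-wise max-pooling over non-overlapping windows. The zero-padding at the sequence boundaries and the ReLU default are choices of this sketch, not details specified in the chunk.

```python
# Hedged sketch: 1-D convolution over character embeddings plus max-pooling.
import numpy as np

def conv_maxpool(E, F, r=5, pool=2, phi=lambda z: np.maximum(z, 0.0)):
    """E: (T, d) embeddings; F: (d_prime, r*d) filter bank. Returns (T//pool, d_prime)."""
    T, d = E.shape
    pad = np.zeros((r // 2, d))
    Ep = np.vstack([pad, E, pad])              # zero-pad so every position has a full window
    feats = []
    for t in range(T):
        window = Ep[t:t + r].reshape(-1)       # concatenate the r embeddings in the window
        feats.append(phi(F @ window))
    feats = np.stack(feats)                    # (T, d_prime)
    T_out = (T // pool) * pool
    # element-wise max over each non-overlapping pooling window
    return feats[:T_out].reshape(-1, pool, feats.shape[1]).max(axis=1)
```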
1602.00367#8 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | # 2.3 Recurrent Layer A recurrent layer consists of a recursive function f which takes as input one input vector and the previous hidden state, and returns the new hidden state: h_t = f(x_t, h_{t−1}), where x_t ∈ R^d is one time step from the input sequence (x_1, x_2, . . . , x_T). h_0 ∈ R^{d'} is often initialized as an all-zero vector. Recursive Function The most naive recursive function is implemented as | 1602.00367#7 | 1602.00367#9 | 1602.00367 | [
"1508.06615"
] |
1602.00367#9 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | h_t = tanh(W_x x_t + U_h h_{t−1}), where W_x ∈ R^{d'×d} and U_h ∈ R^{d'×d'} are the weight matrices. This naive recursive function, however, is known to suffer from the problem of vanishing gradients (Bengio et al., 1994; Hochreiter et al., 2001). More recently it has become common to use a more complicated function that learns to control the flow of information, so as to prevent the vanishing gradient and allow the recurrent layer to more easily capture long-term dependencies. The long short-term memory (LSTM) unit from (Hochreiter and Schmidhuber, 1997; Gers et al., 2000) is a representative example. The LSTM unit consists of four sub-units (input, output and forget gates, and a candidate memory cell), which are computed by | 1602.00367#8 | 1602.00367#10 | 1602.00367 | [
"1508.06615"
] |
1602.00367#10 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | i_t = σ(W_i x_t + U_i h_{t−1}), o_t = σ(W_o x_t + U_o h_{t−1}), f_t = σ(W_f x_t + U_f h_{t−1}), c̃_t = tanh(W_c x_t + U_c h_{t−1}). Based on these, the LSTM unit first computes the memory cell: c_t = i_t ⊙ c̃_t + f_t ⊙ c_{t−1}, and then computes the output, or activation: h_t = o_t ⊙ tanh(c_t). The resulting sequence from the recurrent layer is then (h_1, h_2, . . . , h_T), where T is the length of the input sequence to the layer. Bidirectional Recurrent Layer One property of the recurrent layer is that there is an imbalance in the amount of information seen by the hidden states at different time steps. The earlier hidden states only observe a few vectors from the lower layer, while the later ones are computed based on most of the lower-layer vectors. This can be easily alleviated by using a bidirectional recurrent layer, which is composed of two recurrent layers working in opposite directions. This layer will return two sequences of hidden states, from the forward and reverse recurrent layers respectively. | 1602.00367#9 | 1602.00367#11 | 1602.00367 | [
"1508.06615"
] |
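The LSTM and bidirectional-recurrent-layer equations above translate almost directly into code. The sketch below follows those equations (biases are omitted, as in the chunk's notation); the `params` dictionary layout is an assumption of this sketch, not an interface defined by the paper.

```python
# Hedged sketch: LSTM recurrence and a bidirectional wrapper, following the equations above.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm(X, params, h0=None, c0=None):
    """X: (T, d). params: dict with W_*, U_* for the i, o, f, c sub-units. Returns (T, d')."""
    d_prime = params["U_i"].shape[0]
    h = np.zeros(d_prime) if h0 is None else h0
    c = np.zeros(d_prime) if c0 is None else c0
    states = []
    for x in X:
        i = sigmoid(params["W_i"] @ x + params["U_i"] @ h)
        o = sigmoid(params["W_o"] @ x + params["U_o"] @ h)
        f = sigmoid(params["W_f"] @ x + params["U_f"] @ h)
        c_tilde = np.tanh(params["W_c"] @ x + params["U_c"] @ h)
        c = f * c + i * c_tilde        # memory cell update
        h = o * np.tanh(c)             # output / activation
        states.append(h)
    return np.stack(states)

def bidirectional_lstm(X, fwd_params, bwd_params):
    """Run one LSTM forward and one backward over X; return both hidden-state sequences."""
    return lstm(X, fwd_params), lstm(X[::-1], bwd_params)[::-1]
```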
1602.00367#11 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | # 2.4 Classification Layer A classification layer is in essence a logistic regression classifier. Given a fixed-dimensional input from the lower layer, the classification layer affine-transforms it, followed by a softmax activation function (Bridle, 1990), to compute the predictive probabilities for all the categories. This is done by p(y = k|X) = exp(w_k^T x + b_k) / Σ_{k'=1}^{K} exp(w_{k'}^T x + b_{k'}), where the w_k's and b_k's are the weight vectors and biases. We assume there are K categories. | 1602.00367#10 | 1602.00367#12 | 1602.00367 | [
"1508.06615"
] |
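The classification layer above is a softmax over an affine transform of a fixed-dimensional feature vector. A minimal, numerically stabilized sketch:

```python
# Hedged sketch: softmax classification layer.
import numpy as np

def classify(h, W_out, b_out):
    """h: pooled or final hidden vector; W_out: (K, dim). Returns class probabilities of length K."""
    logits = W_out @ h + b_out
    logits -= logits.max()        # subtract the max logit to stabilize the exponentials
    p = np.exp(logits)
    return p / p.sum()
```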
1602.00367#12 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | cation. # 3.1 Motivation One basic motivation for using the convolutional layer is that it learns to extract higher-level features that are invariant to local translation. By stack- ing multiple convolutional layers, the network can extract higher-level, abstract, (locally) translation- invariant features from the input sequence, in this case the document, efï¬ ciently. Despite this advantage, we noticed that it requires many layers of convolution to capture long-term de- pendencies, due to the locality of the convolution and pooling (see Sec. 2.2.) This becomes more se- vere as the length of the input sequence grows, and in the case of character-level modeling, it is usual for a document to be a sequence of hundreds or thou- sands of characters. Ultimately, this leads to the need for a very deep network having many convo- lutional layers. Contrary to the convolutional layer, the recurrent layer from Sec. 2.3 is able to capture long-term de- pendencies even when there is only a single layer. This is especially true in the case of a bidirectional recurrent layer, because each hidden state is com- puted based on the whole input sequence. However, the recurrent layer is computationally more expen- sive. The computational complexity grows linearly with respect to the length of the input sequence, and most of the computations need to be done sequen- tially. This is in contrast to the convolutional layer for which computations can be efï¬ ciently done in parallel. Based on these observations, we propose to com- bine the convolutional and recurrent layers into a single model so that this network can capture long- term dependencies in the document more efï¬ ciently for the task of classiï¬ cation. # 3.2 Model Description The proposed model, convolution-recurrent network (ConvRec), p(y|X) Classification Layers Sec. 2.4 (Recurrent ee Layers /se0. 2.3 ( Embedding Layer iS Sec. 2.1 (11,22, eeegd ry) (a) (b) P(y|X) Classification Layers Sec. 2.4 Convolutional Layers Sec. 2.2 ( Embedding Layer ie Sec. 2.1 (11,22, eeegd ry) Figure 1: | 1602.00367#11 | 1602.00367#13 | 1602.00367 | [
"1508.06615"
] |
1602.00367#13 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Graphical illustration of (a) the convolutional net- work and (b) the proposed convolution-recurrent network for character-level document classiï¬ cation. with a one-hot sequence input X = (x1, x2, . . . , xT ). This input sequence is turned into a sequence of dense, real-valued vectors E = (e1, e2, . . . , eT ) using the embedding layer from Sec. 2.1. We apply multiple convolutional layers (Sec. 2.2) to E to get a shorter sequence of feature vectors: This feature vector is then fed into a bidirectional recurrent layer (Sec. 2.3), resulting in two sequences > Hyorward = (hi, hg,..., hr), Freverse = (hi, ho, ae) hrâ | 1602.00367#12 | 1602.00367#14 | 1602.00367 | [
"1508.06615"
] |
1602.00367#14 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | ). We take the last hidden states of both directions and concatenate them to form a ï¬ xed-dimensional vec- tor: h= [Br hi] . Finally, the ï¬ xed-dimensional vector h is fed into the classiï¬ cation layer to compute the predictive probabilities p(y = k|X) of all the categories k = 1, . . . , K given the input sequence X. See Fig. 1 (b) for the graphical illustration of the proposed model. Data set Classes Task Training size Test size AGâ | 1602.00367#13 | 1602.00367#15 | 1602.00367 | [
"1508.06615"
] |
1602.00367#15 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | s news Sogou news DBPedia Yelp review polarity Yelp review full Yahoo! Answers Amazon review polarity Amazon review full 4 5 14 2 5 10 2 5 news categorization news categorization ontology classiï¬ cation sentiment analysis sentiment analysis question type classiï¬ cation sentiment analysis sentiment analysis 120,000 450,000 560,000 560,000 650,000 1,400,000 3,600,000 3,000,000 7,600 60,000 70,000 38,000 50,000 60,000 400,000 650,000 Table 1: Data sets summary. # 3.3 Related Work Convolutional network for document classiï¬ ca- tion The convolutional networks for document classiï¬ cation, proposed earlier in (Kim, 2014; Zhang et al., 2015) and illustrated in Fig. 1 (a), is almost identical to the proposed model. One ma- jor difference is the lack of the recurrent layer in their models. Their model consists of the embedding layer, a number of convolutional layers followed by the classiï¬ cation layer only. Recurrent network for document classiï¬ cation Carrier and Cho in (Carrier and Cho, 2014) give a tutorial on using a recurrent neural network for sen- timent analysis which is one type of document clas- siï¬ cation. Unlike the convolution-recurrent network proposed in this paper, they do not use any convolu- tional layer in their model. Their model starts with the embedding layer followed by the recurrent layer. The hidden states from the recurrent layer are then averaged and fed into the classiï¬ cation layer. Hybrid model: | 1602.00367#14 | 1602.00367#16 | 1602.00367 | [
"1508.06615"
] |
1602.00367#16 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Conv-GRNN Perhaps the most related work is the convolution-gated recurrent neu- ral net (Conv-GRNN) from (Tang et al., 2015). They proposed a hierarchical processing of a document. In their model, either a convolutional network or a recurrent network is used to extract a feature vector from each sentence, and another (bidirectional) re- current network is used to extract a feature vector of the document by reading the sequence of sentence vectors. This document vector is used by the classi- ï¬ cation layer. work. | 1602.00367#15 | 1602.00367#17 | 1602.00367 | [
"1508.06615"
] |
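Section 3.2 of this paper composes the building blocks of Section 2: character embeddings, a few convolution and max-pooling layers, a bidirectional LSTM whose last hidden states from the two directions are concatenated, and a softmax classification layer. A minimal end-to-end forward pass, reusing the illustrative helpers sketched earlier in this document (not code released with the paper), might look like this:

```python
# Hedged sketch: ConvRec forward pass, assuming the helper functions defined in earlier sketches.
import numpy as np

def convrec_forward(char_ids, W_embed, conv_filters, fwd_params, bwd_params, W_out, b_out):
    """Embed characters, apply stacked convolution+pooling, run a BiLSTM, then classify."""
    E = W_embed[:, char_ids].T                      # (T, d) character embeddings
    for F in conv_filters:                          # two to five conv layers in the paper's setups
        r = F.shape[1] // E.shape[1]                # infer the receptive field from the filter shape
        E = conv_maxpool(E, F, r=r)
    H_fwd, H_bwd = bidirectional_lstm(E, fwd_params, bwd_params)
    h = np.concatenate([H_fwd[-1], H_bwd[0]])       # last hidden state of each direction
    return classify(h, W_out, b_out)
```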
1602.00367#17 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | In their model, the convolutional network is strictly constrained to model each sentence, and the recurrent network to model inter-sentence struc- tures. On the other hand, the proposed ConvRec network uses a recurrent layer in order to assist the convolutional layers to capture long-term dependen- cies (across the whole document) more efï¬ ciently. These are orthogonal to each other, and it is possi- ble to plug in the proposed ConvRec as a sentence feature extraction module in the Conv-GRNN from (Tang et al., 2015). Similarly, it is possible to use the proposed ConvRec as a composition function for the sequence of sentence vectors to make computation more efï¬ cient, especially when the input document consists of many sentences. Recursive Neural Networks A recursive neural network has been applied to sentence classiï¬ cation earlier (see, e.g., (Socher et al., 2013).) In this ap- proach, a composition function is deï¬ ned and recur- sively applied at each node of the parse tree of an input sentence to eventually extract a feature vector of the sentence. This model family is heavily de- pendent on an external parser, unlike all the other models such as the ConvRec proposed here as well as other related models described above. It is also not trivial to apply the recursive neural network to documents which consist of multiple sentences. We do not consider this family of recursive neural net- works directly related to the proposed model. | 1602.00367#16 | 1602.00367#18 | 1602.00367 | [
"1508.06615"
] |
1602.00367#18 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | # 4 Experiment Settings The major difference between their approach and the proposed ConvRec is in the purpose of com- bining the convolutional network and recurrent net- # 4.1 Task Description We validate the proposed model on eight large-scale document classiï¬ cation tasks from (Zhang et al., Embedding Layer Convolutional Layer Recurrent Layer Model Sec.|2 Sec. Sec. |V| d dâ r r o d C2RIDD 5,3 2,2 C3RIDD 5,5,3 2,2,2 C4RIDD 6 8 5,533 2,222 Rev D C5RIDD 5,5,3,3,3 2,2,2,1,2 Table 2: Different architectures tested in this paper. 2015). The sizes of the data sets range from 200,000 to 4,000,000 documents. These tasks include senti- ment analysis (Yelp reviews, Amazon reviews), on- tology classiï¬ cation (DBPedia), question type clas- siï¬ cation (Yahoo! Answers), and news categoriza- tion (AGâ s news, Sogou news). | 1602.00367#17 | 1602.00367#19 | 1602.00367 | [
"1508.06615"
] |
1602.00367#19 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Dropout (Srivastava et al., 2014) is an effective way to regularize deep neural networks. We apply dropout after the last convolutional layer as well as after the recurrent layer. Without dropout, the inputs to the recurrent layer xtâ s are Data Sets A summary of the statistics for each data set is listed in Table 1. There are equal num- ber of examples in each class for both training and test sets. DBPedia data set, for example, has 40,000 training and 5,000 test examples per class. For more detailed information on the data set construction process, see (Zhang et al., 2015). | 1602.00367#18 | 1602.00367#20 | 1602.00367 | [
"1508.06615"
] |
1602.00367#20 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | x, =f; where f; is the ¢-th output from the last convolutional layer defined in Sec. After adding dropout, we have ri ~ Bernoulli(p) ~ gl x =rOf # 4.2 Model Settings p is the dropout probability which we set to 0.5; 7} is the i-th component of the binary vector r, ⠬ R®. Referring to Sec. 2.1, the vocabulary V for our experiments consists of 96 characters including all upper-case and lower-case letters, digits, common punctuation marks, and spaces. Character embed- ding size d is set to 8. | 1602.00367#19 | 1602.00367#21 | 1602.00367 | [
"1508.06615"
] |
1602.00367#21 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | As described in Sec. 3-1] we believe by adding re- current layers, one can effectively reduce the num- ber of convolutional layers needed in order to cap- ture long-term dependencies. Thus for each data set, we consider models with two to five convolutional layers. Following notations in Sec. each layer has dâ = 128 filters. For AGâ s news and Yahoo! An- swers, we also experiment larger models with 1,024 filters in the convolutional layers. Receptive field size r is either five or three depending on the depth. Max pooling size râ is set to 2. Rectified linear units (ReLUs, (Glorot et al., 2011)) are used as activation functions in the convolutional layers. The recurrent layer (Sec. is fixed to a single layer of bidi- rectional LSTM for all models. | 1602.00367#20 | 1602.00367#22 | 1602.00367 | [
"1508.06615"
] |
1602.00367#22 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Hidden states di- mension dâ is set to 128. More detailed setups are described in Table[2} # 4.3 Training and Validation For each of the data sets, we randomly split the full training examples into training and validation. The validation size is the same as the corresponding test size and is balanced in each class. The models are trained by minimizing the follow- ing regularized negative log-likelihood or cross en- tropy loss. Xâ s and yâ s are document character se- quences and their corresponding observed class as- signments in the training set D. w is the collec- tion of model weights. Weight decay is applied with λ = 5 à 10â 4. MN. 1=â | 1602.00367#21 | 1602.00367#23 | 1602.00367 | [
"1508.06615"
] |
1602.00367#23 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | $7 log(v(ylX)) + Sill? XyEeD We train our models using AdaDelta with p = 0.95, â ¬ = 10~> and a batch size of 128. Examples are padded to the longest sequence in each batch and masks are generated to help iden- tify the padded region. The corresponding masks of Data set #Ex. #Cl. Network #Params â Error (%) Network #Params_ Error (%) AG 120k 4 C2R1D1024 20M 8.39/8.64 C6F2D1024 27â ¢M. -/9.85 Sogou 450k 5 C3R1D128 AM 4.82/4.83 C6F2D1024* 27â ¢M. -/4.88 DBPedia 560k 14 C2R1D128 3M 1.46/1.43 C6F2D1024 27â ¢M. -/1.66 Yelp P. 560k 2 C2R1D128 3M 5.50/5.51 C6F2D1024 27â ¢M. -/5.25 Yelp F. 650k 5 C2R1D128 3M 38.00/38.18 | C6F2D1024 27â ¢M. -/38.40 Yahoo A. 1.4M 10 | C2R1D1024 20M 28.62/28.26 | C6F2D1024* 27â ¢M. -/29.55 Amazon P. || 3.6M 2 C3R1D128 AM 5.64/5.87 C6F2D256* 2.7M -/5.50 Amazon F. || 3.0M 5 C3R1D128 AM 40.30/40.77 | C6F2D256* 2.7M -/40.53 Table 3: Results on character-level document classification. | 1602.00367#22 | 1602.00367#24 | 1602.00367 | [
"1508.06615"
] |
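Section 4.2 of this paper applies dropout with probability 0.5 to the outputs of the last convolutional layer and of the recurrent layer, masking each component with a Bernoulli variable. A minimal sketch is below; the inverted scaling at training time is a common convention assumed here, not something the paper specifies.

```python
# Hedged sketch: elementwise dropout mask, r_i ~ Bernoulli, applied at training time only.
import numpy as np

def dropout(x, p=0.5, train=True, rng=np.random.default_rng(0)):
    """Zero each component of x with probability p during training.
    Scaling by 1/(1-p) keeps the expected activation unchanged (an assumption of this sketch)."""
    if not train or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p    # keep-mask: each entry kept with probability 1 - p
    return x * mask / (1.0 - p)
```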
1602.00367#24 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | CCRRFFDD refers to a network with C convolutional layers, R recurrent layers, F' fully-connected layers and D dimensional feature vectors. * denotes a model which does not distinguish between lower-case and upper-case letters. We only considered the character-level models without using Thesaraus-based data augmentation. We report both the validation and test errors. In our case, the network architecture for each dataset was selected based on the validation errors. The numbers of parameters are approximate. the outputs from convolutional layers can be com- puted analytically and are used by the recurrent layer to properly ignore padded inputs. The gradient of the cost function is computed with backpropagation through time (BPTT, (Werbos, 1990p). If the gra- dient has an L2 norm larger than 5, we rescale the gradient by a factor of Tan Le. leh) llglle Zc = g-min (1. dw and gc is the clipped gradient. | 1602.00367#23 | 1602.00367#25 | 1602.00367 | [
"1508.06615"
] |
1602.00367#25 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Early stopping strategy is employed to prevent Before training, we set an initial overï¬ tting. patience value. At each epoch, we calculate and record the validation loss. If it is lower than the current lowest validation loss by 0.5%, we extend patience by two. Training stops when the number of epochs is larger than patience. We report the test error rate evaluated using the model with the lowest validation error. # 5 Results and Analysis Experimental results are listed in Table 3. We com- pare to the best character-level convolutional model without data augmentation from (Zhang et al., 2015) on each data set. Our model achieves comparable performances for all the eight data sets with signiï¬ - cantly less parameters. Speciï¬ cally, it performs bet- ter on AGâ | 1602.00367#24 | 1602.00367#26 | 1602.00367 | [
"1508.06615"
] |
1602.00367#26 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | s news, Sogou news, DBPedia, Yelp re- view full, and Yahoo! Answers data sets. Number of classes Fig. 2 (a) shows how relative performance of our model changes with respect to It is worth noting that as the number of classes. the number of classes increases, our model achieves better results compared to convolution-only models. For example, our model has a much lower test er- ror on DBPedia which has 14 classes, but it scores worse on Yelp review polarity and Amazon review polarity both of which have only two classes. Our conjecture is that more detailed and complete infor- mation needs to be preserved from the input text for the model to assign one of many classes to it. The convolution-only model likely loses detailed local features because it has more pooling layers. On the other hand, the proposed model with less pooling layers can better maintain the detailed information and hence performs better when such needs exist. Number of training examples Although it is less signiï¬ cant, Fig. 2 (b) shows that the proposed model generally works better compared to the convolution- only model when the data size is small. Considering the difference in the number of parameters, we sus- pect that because the proposed model is more com- pact, it is less prone to overï¬ | 1602.00367#25 | 1602.00367#27 | 1602.00367 | [
"1508.06615"
] |
1602.00367#27 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | tting. Therefore it gen- eralizes better when the training size is limited. Number of convolutional layers An interesting observation from our experiments is that the model accuracy does not always increase with the number of convolutional layers. Performances peak at two or three convolutional layers and decrease if we add e i=) L x iJ Sf} x 5 vo o x gv 0}. ------- Yor crt & o -5 x c oC ic] 2 -10 x -15 * 2 4 6 8 10 12 14 16 # of classes 10 -10 % change in test error x 0 500 1000 1500 2000 2500 3000 3500 4000 # of training examples (in thousands) -15 e i=) L x iJ Sf} x 5 vo o x 0}. ------- Yor crt & o -5 x c oC ic] 2 -10 x -15 * 2 4 6 8 10 12 14 16 # of classes (a) 10 -10 % change in test error x 0 500 1000 1500 2000 2500 3000 3500 4000 # of training examples (in thousands) (b) -15 (a) (b) Figure 2: Relative test performance of the proposed model compared to the convolution-only model w.r.t. (a) the number of classes and (b) the size of training set. Lower is better. more to the model. As more convolutional layers produce longer character n-grams, this indicates that there is an optimal level of local features to be fed into the recurrent layer. Also, as discussed above, more pooling layers likely lead to the lost of detailed information which in turn affects the ability of the recurrent layer to capture long-term dependencies. | 1602.00367#26 | 1602.00367#28 | 1602.00367 | [
"1508.06615"
] |
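Section 4.3 of this paper rescales any gradient whose L2 norm exceeds 5, i.e. g_c = g · min(1, 5 / ||g||_2). A direct sketch of that rule (the threshold default mirrors the value stated in the paper):

```python
# Hedged sketch: gradient norm clipping by rescaling.
import numpy as np

def clip_gradient(g, threshold=5.0):
    """Rescale g so its L2 norm never exceeds the threshold."""
    norm = np.linalg.norm(g)
    return g * min(1.0, threshold / norm) if norm > 0 else g
```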
1602.00367#28 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Number of ï¬ lters We experiment large models with 1,024 ï¬ lters on AGâ s news and Yahoo! An- swers data sets. Although adding more ï¬ lters in the convolutional layers does help with the model per- formances on these two data sets, the gains are lim- ited compared to the increased number of parame- ters. Validation error improves from 8.75% to 8.39% for AGâ s news and from 29.48% to 28.62% for Ya- hoo! Answers at the cost of a 70 times increase in the number of model parameters. Note that in our model we set the number of ï¬ l- ters in the convolutional layers to be the same as the dimension of the hidden states in the recurrent layer. It is possible to use more ï¬ lters in the convolutional layers while keeping the recurrent layer dimension the same to potentially get better performances with less sacriï¬ ce of the number of parameters. information. We validated the proposed model on eight large scale document classiï¬ cation tasks. The model achieved comparable results with much less convo- lutional layers compared to the convolution-only ar- chitecture. We further discussed several aspects that affect the model performance. The proposed model generally performs better when number of classes is large, training size is small, and when the number of convolutional layers is set to two or three. The proposed model is a general encoding archi- tecture that is not limited to document classiï¬ ca- tion tasks or natural language inputs. For example, (Chen et al., 2015; Visin et al., 2015) combined con- volution and recurrent layers to tackle image seg- mentation tasks; (Sainath et al., 2015) applied a sim- ilar model to do speech recognition. It will be inter- esting to see future research on applying the archi- tecture to other applications such as machine trans- lation and music information retrieval. Using recur- rent layers as substitutes for pooling layers to poten- tially reduce the lost of detailed local information is also a direction that worth exploring. # 6 Conclusion # Acknowledgments | 1602.00367#27 | 1602.00367#29 | 1602.00367 | [
"1508.06615"
] |
1602.00367#29 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | In this paper, we proposed a hybrid model that pro- cesses an input sequence of characters with a num- ber of convolutional layers followed by a single re- current layer. The proposed model is able to encode documents from character level capturing sub-word This work is done as a part of the course DS-GA 1010-001 Independent Study in Data Science at the Center for Data Science, New York University. # References [Ballesteros et al.2015] Miguel Ballesteros, Chris Dyer, and Noah A Smith. 2015. Improved transition-based parsing by modeling characters instead of words with lstms. arXiv preprint arXiv:1508.00657. [Bengio et al.1994] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term depen- dencies with gradient descent is difï¬ | 1602.00367#28 | 1602.00367#30 | 1602.00367 | [
"1508.06615"
] |
1602.00367#30 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | cult. Neural Networks, IEEE Transactions on, 5(2):157–166. [Bridle1990] John S Bridle. 1990. Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition. In Neurocomputing, pages 227–236. Springer. [Carrier and Cho2014] Pierre Luc Carrier and Kyunghyun Cho. 2014. LSTM networks for sentiment analysis. Deep Learning Tutorials. [Chen et al.2015] Liang-Chieh Chen, Jonathan T. | 1602.00367#29 | 1602.00367#31 | 1602.00367 | [
"1508.06615"
] |
1602.00367#31 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Barron, George Papandreou, Kevin Murphy, and Alan L. Yuille. 2015. Semantic image segmentation with task-specific edge detection using CNNs and a discriminatively trained domain transform. CoRR, abs/1511.03328. [Cho et al.2014] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2014). [Gers et al.2000] Felix A Gers, Jürgen Schmidhuber, and Fred Cummins. 2000. Learning to forget: Continual prediction with LSTM. Neural computation, 12(10):2451–2471. [Glorot et al.2011] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Deep sparse rectifier neural networks. | 1602.00367#30 | 1602.00367#32 | 1602.00367 | [
"1508.06615"
] |
1602.00367#32 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | In Geoffrey J. Gordon and David B. Dunson, editors, Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS-11), volume 15, pages 315–323. Journal of Machine Learning Research - Workshop and Conference Proceedings. [Hochreiter and Schmidhuber1997] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735– | 1602.00367#31 | 1602.00367#33 | 1602.00367 | [
"1508.06615"
] |
1602.00367#33 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | 1780. [Hochreiter et al.2001] Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, and Jürgen Schmidhuber. 2001. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies, volume 1. IEEE. [Kim et al.2015] Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. 2015. Character-aware neural language models. arXiv preprint arXiv:1508.06615. [Kim2014] Yoon Kim. 2014. | 1602.00367#32 | 1602.00367#34 | 1602.00367 | [
"1508.06615"
] |
1602.00367#34 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882. [Krizhevsky et al.2012] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ImageNet classification with deep convolutional neural networks. In F. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1097–1105. Curran Associates, Inc. [Ling et al.2015] Wang Ling, Tiago Luís, Luís Marujo, Ramón Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015. | 1602.00367#33 | 1602.00367#35 | 1602.00367 | [
"1508.06615"
] |
1602.00367#35 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Finding function in form: Compositional character models for open vocabulary word representation. arXiv preprint arXiv:1508.02096. [Mesnil et al.2014] Grégoire Mesnil, Marc'Aurelio Ranzato, Tomas Mikolov, and Yoshua Bengio. 2014. Ensemble of generative and discriminative techniques for sentiment analysis of movie reviews. arXiv preprint arXiv:1412.5335. [Sainath et al.2015] T.N. Sainath, O. Vinyals, A. Senior, and H. Sak. 2015. | 1602.00367#34 | 1602.00367#36 | 1602.00367 | [
"1508.06615"
] |
1602.00367#36 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Convolutional, long short-term memory, fully connected deep neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pages 4580–4584, April. [Socher et al.2013] Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP. [Srivastava et al.2014] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958. [Sundermeyer et al.2015] Martin Sundermeyer, Hermann Ney, and Ralf Schluter. 2015. From feedforward to recurrent LSTM neural networks for language modeling. Audio, Speech, and Language Processing, IEEE/ACM Transactions on, 23(3):517– | 1602.00367#35 | 1602.00367#37 | 1602.00367 | [
"1508.06615"
] |
1602.00367#37 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | 529. [Tang et al.2015] Duyu Tang, Bing Qin, and Ting Liu. 2015. Document modeling with gated recurrent neural network for sentiment classification. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1422–1432. [Visin et al.2015] Francesco Visin, Kyle Kastner, Aaron C. Courville, Yoshua Bengio, Matteo Matteucci, and KyungHyun Cho. 2015. ReSeg: A recurrent neural network for object segmentation. | 1602.00367#36 | 1602.00367#38 | 1602.00367 | [
"1508.06615"
] |
1602.00367#38 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | CoRR, abs/1511.07053. [Werbos1990] P. Werbos. 1990. Backpropagation through time: what it does and how to do it. In Proceedings of the IEEE, volume 78, pages 1550–1560. [Zeiler2012] Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. CoRR, abs/1212.5701. [Zhang et al.2015] Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems (NIPS 2015), volume 28. | 1602.00367#37 | 1602.00367 | [
"1508.06615"
] |
|
1601.06759#0 | Pixel Recurrent Neural Networks | arXiv:1601.06759v3 [cs.CV] 19 Aug 2016 # Pixel Recurrent Neural Networks # Aäron van den Oord Nal Kalchbrenner Koray Kavukcuoglu [email protected] [email protected] [email protected] | 1601.06759#1 | 1601.06759 | [
"1511.01844"
] |
|
1601.06759#1 | Pixel Recurrent Neural Networks | Google DeepMind # Abstract Modeling the distribution of natural images is a landmark problem in unsupervised learning. This task requires an image model that is at once expressive, tractable and scalable. We present a deep neural network that sequentially predicts the pixels in an image along the two spatial dimensions. Our method models the discrete probability of the raw pixel values and encodes the complete set of dependencies in the image. Architectural novelties include fast two-dimensional recurrent layers and an effective use of residual connections in deep recurrent networks. We achieve log-likelihood scores on natural images that are considerably better than the previous state of the art. Our main results also provide benchmarks on the diverse ImageNet dataset. Samples generated from the model appear crisp, varied and globally coherent. (Figure 1. Image completions sampled from a PixelRNN; panels show occluded inputs, completions and originals.) eling is building complex and expressive models that are also tractable and scalable. This trade-off has resulted in a large variety of generative models, each having their advantages. Most work focuses on stochastic latent variable models such as VAEs (Rezende et al., 2014; Kingma & Welling, 2013) that aim to extract meaningful representations, but often come with an intractable inference step that can hinder their performance. | 1601.06759#0 | 1601.06759#2 | 1601.06759 | [
"1511.01844"
] |
1601.06759#2 | Pixel Recurrent Neural Networks | # 1. Introduction Generative image modeling is a central problem in unsupervised learning. Probabilistic density models can be used for a wide variety of tasks that range from image compression and forms of reconstruction such as image inpainting (e.g., see Figure 1) and deblurring, to generation of new images. When the model is conditioned on external information, possible applications also include creating images based on text descriptions or simulating future frames in a planning task. One of the great advantages in generative modeling is that there are practically endless amounts of image data available to learn from. However, because images are high dimensional and highly structured, estimating the distribution of natural images is extremely challenging. One of the most important obstacles in generative mod- Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 2016. JMLR: W&CP volume 48. Copyright 2016 by the author(s). | 1601.06759#1 | 1601.06759#3 | 1601.06759 | [
"1511.01844"
] |
1601.06759#3 | Pixel Recurrent Neural Networks | One effective approach to tractably model a joint distribu- tion of the pixels in the image is to cast it as a product of conditional distributions; this approach has been adopted in autoregressive models such as NADE (Larochelle & Mur- ray, 2011) and fully visible neural networks (Neal, 1992; Bengio & Bengio, 2000). The factorization turns the joint modeling problem into a sequence problem, where one learns to predict the next pixel given all the previously gen- erated pixels. But to model the highly nonlinear and long- range correlations between pixels and the complex condi- tional distributions that result, a highly expressive sequence model is necessary. Recurrent Neural Networks (RNN) are powerful models that offer a compact, shared parametrization of a series of conditional distributions. RNNs have been shown to excel at hard sequence problems ranging from handwriting gen- eration (Graves, 2013), to character prediction (Sutskever et al., 2011) and to machine translation (Kalchbrenner & Blunsom, 2013). A two-dimensional RNN has produced very promising results in modeling grayscale images and textures (Theis & Bethge, 2015). In this paper we advance two-dimensional RNNs and ap- Pixel Recurrent Neural Networks Mask B 2. oe ee eee cs Multi-scale context Mask A Context Figure 2. Left: To generate pixel xi one conditions on all the pre- viously generated pixels left and above of xi. Center: To gen- erate a pixel in the multi-scale case we can also condition on the subsampled image pixels (in light blue). Right: Diagram of the connectivity inside a masked convolution. In the ï¬ rst layer, each of the RGB channels is connected to previous channels and to the context, but is not connected to itself. In subsequent layers, the channels are also connected to themselves. | 1601.06759#2 | 1601.06759#4 | 1601.06759 | [
"1511.01844"
] |
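The record above casts image modeling as a product of per-pixel conditional distributions. The following minimal sketch evaluates a log-likelihood under that factorization; the uniform `conditional` function is a stand-in for a trained sequence model such as a PixelRNN or PixelCNN, not the paper's implementation.

```python
# Log-likelihood of an image as a sum of per-pixel conditional log-probabilities.
import numpy as np

def conditional(context):
    """Distribution over 256 pixel values given previously seen pixels (placeholder)."""
    return np.full(256, 1.0 / 256)

def log_likelihood(pixels):                  # pixels: 1-D sequence of ints in [0, 255]
    total = 0.0
    for i, v in enumerate(pixels):
        p = conditional(pixels[:i])          # p(x_i | x_1, ..., x_{i-1})
        total += np.log(p[v])
    return total

x = np.random.randint(0, 256, size=8 * 8)    # a flattened 8x8 grayscale "image"
print(log_likelihood(x))                     # equals 64 * log(1/256) for the uniform stand-in
```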
1601.06759#4 | Pixel Recurrent Neural Networks | The contributions of the paper are as follows. In Section 3 we design two types of PixelRNNs corresponding to the two types of LSTM layers; we describe the purely convo- lutional PixelCNN that is our fastest architecture; and we design a Multi-Scale version of the PixelRNN. In Section 5 we show the relative beneï¬ ts of using the discrete softmax distribution in our models and of adopting residual connec- tions for the LSTM layers. Next we test the models on MNIST and on CIFAR-10 and show that they obtain log- likelihood scores that are considerably better than previous results. We also provide results for the large-scale Ima- geNet dataset resized to both 32 à 32 and 64 à 64 pixels; to our knowledge likelihood values from generative models have not previously been reported on this dataset. Finally, we give a qualitative evaluation of the samples generated from the PixelRNNs. ply them to large-scale modeling of natural images. The resulting PixelRNNs are composed of up to twelve, fast two-dimensional Long Short-Term Memory (LSTM) lay- ers. These layers use LSTM units in their state (Hochreiter & Schmidhuber, 1997; Graves & Schmidhuber, 2009) and adopt a convolution to compute at once all the states along one of the spatial dimensions of the data. We design two types of these layers. | 1601.06759#3 | 1601.06759#5 | 1601.06759 | [
"1511.01844"
] |
1601.06759#5 | Pixel Recurrent Neural Networks | The ï¬ rst type is the Row LSTM layer where the convolution is applied along each row; a similar technique is described in (Stollenga et al., 2015). The sec- ond type is the Diagonal BiLSTM layer where the convolu- tion is applied in a novel fashion along the diagonals of the image. The networks also incorporate residual connections (He et al., 2015) around LSTM layers; we observe that this helps with training of the PixelRNN for up to twelve layers of depth. We also consider a second, simpliï¬ ed architecture which shares the same core components as the PixelRNN. We ob- serve that Convolutional Neural Networks (CNN) can also be used as sequence model with a ï¬ xed dependency range, by using Masked convolutions. The PixelCNN architec- ture is a fully convolutional network of ï¬ fteen layers that preserves the spatial resolution of its input throughout the layers and outputs a conditional distribution at each loca- tion. | 1601.06759#4 | 1601.06759#6 | 1601.06759 | [
"1511.01844"
] |
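The surrounding records mention residual connections placed around the recurrent layers. Below is a generic sketch of such a skip connection; it is not the paper's exact block, which also uses 1 x 1 convolutions to change the feature width, and the inner layer here is only an illustrative stand-in.

```python
# A layer output added back to its input, giving gradients a direct path.
import torch
import torch.nn as nn

class ResidualWrapper(nn.Module):
    def __init__(self, layer):
        super().__init__()
        self.layer = layer

    def forward(self, x):
        return x + self.layer(x)   # skip path + transformed path

block = ResidualWrapper(nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.ReLU()))
h = torch.randn(1, 32, 16, 16)
print(block(h).shape)              # torch.Size([1, 32, 16, 16])
```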
1601.06759#6 | Pixel Recurrent Neural Networks | # 2. Model Our aim is to estimate a distribution over natural images that can be used to tractably compute the likelihood of im- ages and to generate new ones. The network scans the im- age one row at a time and one pixel at a time within each row. For each pixel it predicts the conditional distribution over the possible pixel values given the scanned context. Figure 2 illustrates this process. The joint distribution over the image pixels is factorized into a product of conditional distributions. The parameters used in the predictions are shared across all pixel positions in the image. To capture the generation process, Theis & Bethge (2015) propose to use a two-dimensional LSTM network (Graves & Schmidhuber, 2009) that starts at the top left pixel and proceeds towards the bottom right pixel. The advantage of the LSTM network is that it effectively handles long-range dependencies that are central to object and scene under- standing. The two-dimensional structure ensures that the signals are well propagated both in the left-to-right and top- to-bottom directions. | 1601.06759#5 | 1601.06759#7 | 1601.06759 | [
"1511.01844"
] |
1601.06759#7 | Pixel Recurrent Neural Networks | In this section we ï¬ rst focus on the form of the distribution, whereas the next section will be devoted to describing the architectural innovations inside PixelRNN. Both PixelRNN and PixelCNN capture the full generality of pixel inter-dependencies without introducing indepen- dence assumptions as in e.g., latent variable models. The dependencies are also maintained between the RGB color values within each individual pixel. Furthermore, in con- trast to previous approaches that model the pixels as con- tinuous values (e.g., Theis & Bethge (2015); Gregor et al. (2014)), we model the pixels as discrete values using a multinomial distribution implemented with a simple soft- max layer. We observe that this approach gives both repre- sentational and training advantages for our models. | 1601.06759#6 | 1601.06759#8 | 1601.06759 | [
"1511.01844"
] |
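Generation in this part of the paper is sequential: the image is scanned row by row and each pixel is drawn from the predicted conditional distribution before the next one is predicted. A toy sketch of that loop follows, using a single grayscale channel and a placeholder predictive model instead of the trained network.

```python
# Row-by-row, pixel-by-pixel sampling with a placeholder conditional model.
import numpy as np

rng = np.random.default_rng(0)

def predict_distribution(image, r, c):
    return np.full(256, 1.0 / 256)             # stand-in for the model's softmax output

def sample_image(n=8):
    image = np.zeros((n, n), dtype=np.int64)
    for r in range(n):                          # rows, top to bottom
        for c in range(n):                      # pixels, left to right
            probs = predict_distribution(image, r, c)
            image[r, c] = rng.choice(256, p=probs)
    return image

print(sample_image())
```

Training and evaluation, by contrast, can score all conditionals in parallel because every ground-truth pixel is already available.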
1601.06759#8 | Pixel Recurrent Neural Networks | # 2.1. Generating an Image Pixel by Pixel The goal is to assign a probability p(x) to each image x formed of n à n pixels. We can write the image x as a one- dimensional sequence x1, ..., xn2 where pixels are taken from the image row by row. To estimate the joint distri- bution p(x) we write it as the product of the conditional distributions over the pixels: p(x) =] [e(eiles, 214) dd) i=1 Pixel Recurrent Neural Networks The value p(xi|x1, ..., xiâ 1) is the probability of the i-th pixel xi given all the previous pixels x1, ..., xiâ 1. The gen- eration proceeds row by row and pixel by pixel. Figure 2 (Left) illustrates the conditioning scheme. Each pixel xi is in turn jointly determined by three values, one for each of the color channels Red, Green and Blue (RGB). We rewrite the distribution p(xi|x<i) as the fol- lowing product: p(xi,R|x<i)p(xi,G|x<i, xi,R)p(xi,B|x<i, xi,R, xi,G) (2) Each of the colors is thus conditioned on the other channels as well as on all the previously generated pixels. | 1601.06759#7 | 1601.06759#9 | 1601.06759 | [
"1511.01844"
] |
1601.06759#9 | Pixel Recurrent Neural Networks | _ Sai nn Figure 3. In the Diagonal BiLSTM, to allow for parallelization along the diagonals, the input map is skewed by offseting each row by one position with respect to the previous row. When the spatial layer is computed left to right and column by column, the output map is shifted back into the original size. The convolution uses a kernel of size 2 à 1. Note that during training and evaluation the distributions over the pixel values are computed in parallel, while the generation of an image is sequential. dimensional convolution has size k à 1 where k ⠥ 3; the larger the value of k the broader the context that is captured. | 1601.06759#8 | 1601.06759#10 | 1601.06759 | [
"1511.01844"
] |
1601.06759#10 | Pixel Recurrent Neural Networks | The weight sharing in the convolution ensures translation invariance of the computed features along each row. # 2.2. Pixels as Discrete Variables Previous approaches use a continuous distribution for the values of the pixels in the image (e.g. Theis & Bethge (2015); Uria et al. (2014)). By contrast we model p(x) as a discrete distribution, with every conditional distribution in Equation 2 being a multinomial that is modeled with a softmax layer. Each channel variable xi,â simply takes one of 256 distinct values. The discrete distribution is represen- tationally simple and has the advantage of being arbitrarily multimodal without prior on the shape (see Fig. 6). | 1601.06759#9 | 1601.06759#11 | 1601.06759 | [
"1511.01844"
] |
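The per-pixel RGB factorization discussed around here samples the three colour values of a pixel one after another, each conditioned on the channels already produced. A minimal sketch, with a placeholder channel model:

```python
# Sample R, then G given R, then B given R and G, for one pixel.
import numpy as np

rng = np.random.default_rng(0)

def channel_distribution(context_pixels, sampled_channels):
    return np.full(256, 1.0 / 256)              # stand-in softmax over 256 values

def sample_pixel(context_pixels):
    pixel = []
    for _ in ("R", "G", "B"):
        probs = channel_distribution(context_pixels, pixel)
        pixel.append(int(rng.choice(256, p=probs)))
    return tuple(pixel)

print(sample_pixel(context_pixels=[]))
```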
1601.06759#11 | Pixel Recurrent Neural Networks | Exper- imentally we also ï¬ nd the discrete distribution to be easy to learn and to produce better performance compared to a continuous distribution (Section 5). The computation proceeds as follows. An LSTM layer has an input-to-state component and a recurrent state-to-state component that together determine the four gates inside the LSTM core. To enhance parallelization in the Row LSTM the input-to-state component is ï¬ rst computed for the entire two-dimensional input map; for this a k à 1 convolution is used to follow the row-wise orientation of the LSTM itself. The convolution is masked to include only the valid context (see Section 3.4) and produces a tensor of size 4h à | 1601.06759#10 | 1601.06759#12 | 1601.06759 | [
"1511.01844"
] |
1601.06759#12 | Pixel Recurrent Neural Networks | n à n, representing the four gate vectors for each position in the input map, where h is the number of output feature maps. # 3. Pixel Recurrent Neural Networks To compute one step of the state-to-state component of the LSTM layer, one is given the previous hidden and cell states hiâ 1 and ciâ 1, each of size h à n à 1. The new hidden and cell states hi, ci are obtained as follows: In this section we describe the architectural components that compose the PixelRNN. In Sections 3.1 and 3.2, we describe the two types of LSTM layers that use convolu- tions to compute at once the states along one of the spatial dimensions. In Section 3.3 we describe how to incorporate residual connections to improve the training of a PixelRNN with many LSTM layers. In Section 3.4 we describe the softmax layer that computes the discrete joint distribution of the colors and the masking technique that ensures the proper conditioning scheme. In Section 3.5 we describe the PixelCNN architecture. Finally in Section 3.6 we describe the multi-scale architecture. # 3.1. Row LSTM The Row LSTM is a unidirectional layer that processes the image row by row from top to bottom computing fea- tures for a whole row at once; the computation is per- formed with a one-dimensional convolution. For a pixel xi the layer captures a roughly triangular context above the pixel as shown in Figure 4 (center). The kernel of the one- {o;, fi, i;, gi] = o(K** ® hy_| + K* ®x;) f,Oc¢-14+i; Ogi (3) 0; © tanh(c;) Ci hj where x; of size h x n x 1 is row i of the input map, and ® represents the convolution operation and © the element- wise multiplication. The weights K** and K** are the kernel weights for the state-to-state and the input-to-state components, where the latter is precomputed as described above. In the case of the output, forget and input gates 0,, f, and i;, the activation a is the logistic sigmoid function, whereas for the content gate g;, o is the tanh function. Each step computes at once the new state for an entire row of the input map. | 1601.06759#11 | 1601.06759#13 | 1601.06759 | [
"1511.01844"
] |
1601.06759#13 | Pixel Recurrent Neural Networks | Because the Row LSTM has a triangular receptive field (Figure 4), it is unable to capture the entire available context. Pixel Recurrent Neural Networks oo000 2 _® @-©-9-0- ©0000 ones oeees ooe@0o°o 00800 @ECOO oo cof~ Oo 00 cfoo oo cto Oo OTe Kore) OOOO Ooo000 ofe (e) oy olronene) O01I000 C@e0°0 C0@e@00 C0e@0o°o oo000 lomonenene) oo000 PixelCNN Row LSTM Diagonal BiLSTM | 1601.06759#12 | 1601.06759#14 | 1601.06759 | [
"1511.01844"
] |
1601.06759#14 | Pixel Recurrent Neural Networks | Figure 4. Visualization of the input-to-state and state-to-state mappings for the three proposed architectures. # 3.2. Diagonal BiLSTM The Diagonal BiLSTM is designed to both parallelize the computation and to capture the entire available context for any image size. Each of the two directions of the layer scans the image in a diagonal fashion starting from a cor- ner at the top and reaching the opposite corner at the bot- tom. Each step in the computation computes at once the LSTM state along a diagonal in the image. Figure 4 (right) illustrates the computation and the resulting receptive ï¬ | 1601.06759#13 | 1601.06759#15 | 1601.06759 | [
"1511.01844"
] |
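The Row LSTM records above describe a step in which the input-to-state term is precomputed for a whole row and the state-to-state term is a small convolution along the previous row's hidden state, with the four gates combined in the usual LSTM update of Equation (3). The NumPy sketch below follows that structure with simplified, assumed shapes (h features, a row of n pixels, kernel width 3) and a naive loop in place of an optimized convolution.

```python
# One Row LSTM step: convolutional state-to-state term plus precomputed input term.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def row_conv(kernel, row_features):
    """1-D 'same' convolution along the row; kernel: (4h, h, 3), row: (h, n)."""
    h4, h, k = kernel.shape
    padded = np.pad(row_features, ((0, 0), (1, 1)))
    out = np.zeros((h4, row_features.shape[1]))
    for j in range(row_features.shape[1]):
        out[:, j] = np.tensordot(kernel, padded[:, j:j + k], axes=([1, 2], [0, 1]))
    return out

def row_lstm_step(K_ss, input_to_state_row, h_prev, c_prev):
    gates = row_conv(K_ss, h_prev) + input_to_state_row        # (4h, n)
    o, f, i, g = np.split(gates, 4, axis=0)                    # gate order [o, f, i, g]
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)
    return sigmoid(o) * np.tanh(c), c                          # new hidden and cell states

h, n = 4, 6
K_ss = np.random.randn(4 * h, h, 3) * 0.1
h_new, c_new = row_lstm_step(K_ss, np.random.randn(4 * h, n),
                             np.zeros((h, n)), np.zeros((h, n)))
print(h_new.shape, c_new.shape)   # (4, 6) (4, 6)
```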
1601.06759#15 | Pixel Recurrent Neural Networks | eld. The diagonal computation proceeds as follows. We ï¬ rst skew the input map into a space that makes it easy to ap- ply convolutions along diagonals. The skewing operation offsets each row of the input map by one position with re- spect to the previous row, as illustrated in Figure 3; this results in a map of size n à (2n â 1). At this point we can compute the input-to-state and state-to-state components of the Diagonal BiLSTM. For each of the two directions, the input-to-state component is simply a 1 à 1 convolution K is that contributes to the four gates in the LSTM core; the op- eration generates a 4h à n à n tensor. The state-to-state recurrent component is then computed with a column-wise convolution K ss that has a kernel of size 2 à 1. The step takes the previous hidden and cell states, combines the con- tribution of the input-to-state component and produces the next hidden and cell states, as deï¬ ned in Equation 3. The output feature map is then skewed back into an n à n map by removing the offset positions. This computation is re- peated for each of the two directions. Given the two out- put maps, to prevent the layer from seeing future pixels, the right output map is then shifted down by one row and added to the left output map. | 1601.06759#14 | 1601.06759#16 | 1601.06759 | [
"1511.01844"
] |
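The skewing operation described for the Diagonal BiLSTM offsets each row by one position so that the diagonals of an n x n map become columns of an n x (2n - 1) map, which can then be processed one column at a time. A minimal sketch of the skew and its inverse:

```python
# Skew an n x n map into n x (2n - 1) and recover it exactly.
import numpy as np

def skew(x):                        # x: (n, n)
    n = x.shape[0]
    out = np.zeros((n, 2 * n - 1), dtype=x.dtype)
    for r in range(n):
        out[r, r:r + n] = x[r]
    return out

def unskew(x):                      # inverse of skew
    n = x.shape[0]
    return np.stack([x[r, r:r + n] for r in range(n)])

a = np.arange(16).reshape(4, 4)
assert np.array_equal(unskew(skew(a)), a)
print(skew(a))
```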
1601.06759#16 | Pixel Recurrent Neural Networks | Besides reaching the full dependency ï¬ eld, the Diagonal BiLSTM has the additional advantage that it uses a con- volutional kernel of size 2 à 1 that processes a minimal amount of information at each step yielding a highly non- linear computation. Kernel sizes larger than 2 à 1 are not particularly useful as they do not broaden the already global receptive ï¬ eld of the Diagonal BiLSTM. # 3.3. Residual Connections We train PixelRNNs of up to twelve layers of depth. As a means to both increase convergence speed and propagate signals more directly through the network, we deploy resid- ual connections (He et al., 2015) from one LSTM layer to the next. Figure 5 shows a diagram of the residual blocks. The input map to the PixelRNN LSTM layer has 2h fea- tures. The input-to-state component reduces the number of features by producing h features per gate. After applying the recurrent layer, the output map is upsampled back to 2h features per position via a 1 à 1 convolution and the input map is added to the output map. This method is related to previous approaches that use gating along the depth of the recurrent network (Kalchbrenner et al., 2015; Zhang et al., 2016), but has the advantage of not requiring additional gates. Apart from residual connections, one can also use learnable skip connections from each layer to the output. In the experiments we evaluate the relative effectiveness of residual and layer-to-output skip connections. ReLU - 1x1 Conv 1x1 Conv 2h ry 2h h ReLU - 3x3 Conv h ry h 2h eres 2h LSTM Figure 5. Residual blocks for a PixelCNN (left) and PixelRNNs. # 3.4. Masked Convolution The h features for each input position at every layer in the network are split into three parts, each corresponding to one of the RGB channels. When predicting the R chan- nel for the current pixel xi, only the generated pixels left and above of xi can be used as context. When predicting the G channel, the value of the R channel can also be used as context in addition to the previously generated pixels. Likewise, for the B channel, the values of both the R and G channels can be used. | 1601.06759#15 | 1601.06759#17 | 1601.06759 | [
"1511.01844"
] |
1601.06759#17 | Pixel Recurrent Neural Networks | To restrict connections in the net- work to these dependencies, we apply a mask to the input- to-state convolutions and to other purely convolutional lay- ers in a PixelRNN. We use two types of masks that we indicate with mask A and mask B, as shown in Figure 2 (Right). Mask A is ap- plied only to the ï¬ rst convolutional layer in a PixelRNN and restricts the connections to those neighboring pixels and to those colors in the current pixels that have already been predicted. On the other hand, mask B is applied to all the subsequent input-to-state convolutional transitions and relaxes the restrictions of mask A by also allowing the connection from a color to itself. The masks can be eas- ily implemented by zeroing out the corresponding weights in the input-to-state convolutions after each update. | 1601.06759#16 | 1601.06759#18 | 1601.06759 | [
"1511.01844"
] |
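A minimal sketch of the two mask types described in the masked-convolution records above: mask A (first layer) blocks the centre pixel's connection to its own and later colour channels, while mask B additionally lets a channel see itself. The three feature groups stand for R, G and B; channel counts and kernel size are illustrative assumptions.

```python
# Build binary masks for masked convolutions (types A and B).
import numpy as np

def build_mask(kernel_size, in_channels, out_channels, mask_type="B"):
    k = kernel_size
    mask = np.ones((out_channels, in_channels, k, k), dtype=np.float32)
    mask[:, :, k // 2, k // 2 + 1:] = 0.0      # pixels to the right of the centre
    mask[:, :, k // 2 + 1:, :] = 0.0           # rows below the centre
    for o in range(out_channels):              # centre-pixel channel ordering: R -> G -> B
        for i in range(in_channels):
            out_group = o * 3 // out_channels
            in_group = i * 3 // in_channels
            allowed = in_group <= out_group if mask_type == "B" else in_group < out_group
            if not allowed:
                mask[o, i, k // 2, k // 2] = 0.0
    return mask

mask_a = build_mask(3, 3, 6, "A")
mask_b = build_mask(3, 6, 6, "B")
print(mask_a[:, :, 1, 1])   # centre-pixel connectivity under mask A
```

In use, such a mask would simply be multiplied element-wise into the convolution kernel before each update, in line with the weight-zeroing described above.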
1601.06759#18 | Pixel Recurrent Neural Networks | Simi- Pixel Recurrent Neural Networks PixelCNN Row LSTM Diagonal BiLSTM 7 à 7 conv mask A Multiple residual blocks: (see ï¬ g 5) Conv 3 à 3 mask B i-s: 3 à 1 mask B s-s: 3 à 1 no mask Row LSTM Diagonal BiLSTM i-s: 1 à 1 mask B s-s: 1 à 2 no mask ReLU followed by 1 à 1 conv, mask B (2 layers) 256-way Softmax for each RGB color (Natural images) or Sigmoid (MNIST) | 1601.06759#17 | 1601.06759#19 | 1601.06759 | [
"1511.01844"
] |
1601.06759#19 | Pixel Recurrent Neural Networks | Table 1. Details of the architectures. In the LSTM architectures i-s and s-s stand for input-state and state-state convolutions. layer in the conditional PixelRNN, one simply maps the c à n à n conditioning map into a 4h à n à n map that is added to the input-to-state map of the corresponding layer; this is performed using a 1 à 1 unmasked convolution. The larger n à n image is then generated as usual. # 4. Speciï¬ cations of Models In this section we give the speciï¬ cations of the PixelRNNs used in the experiments. We have four types of networks: the PixelRNN based on Row LSTM, the one based on Di- agonal BiLSTM, the fully convolutional one and the Multi- Scale one. lar masks have also been used in variational autoencoders (Gregor et al., 2014; Germain et al., 2015). | 1601.06759#18 | 1601.06759#20 | 1601.06759 | [
"1511.01844"
] |
1601.06759#20 | Pixel Recurrent Neural Networks | # 3.5. PixelCNN The Row and Diagonal LSTM layers have a potentially unbounded dependency range within their receptive ï¬ eld. This comes with a computational cost as each state needs to be computed sequentially. One simple workaround is to make the receptive ï¬ eld large, but not unbounded. We can use standard convolutional layers to capture a bounded receptive ï¬ eld and compute features for all pixel positions at once. The PixelCNN uses multiple convolutional lay- ers that preserve the spatial resolution; pooling layers are not used. Masks are adopted in the convolutions to avoid seeing the future context; masks have previously also been used in non-convolutional models such as MADE (Ger- main et al., 2015). Note that the advantage of paralleliza- tion of the PixelCNN over the PixelRNN is only available during training or during evaluating of test images. The image generation process is sequential for both kinds of networks, as each sampled pixel needs to be given as input back into the network. | 1601.06759#19 | 1601.06759#21 | 1601.06759 | [
"1511.01844"
] |
1601.06759#21 | Pixel Recurrent Neural Networks | Table 1 speciï¬ es each layer in the single-scale networks. The ï¬ rst layer is a 7 à 7 convolution that uses the mask of type A. The two types of LSTM networks then use a vari- able number of recurrent layers. The input-to-state con- volution in this layer uses a mask of type B, whereas the state-to-state convolution is not masked. The PixelCNN uses convolutions of size 3 à 3 with a mask of type B. The top feature map is then passed through a couple of layers consisting of a Rectiï¬ ed Linear Unit (ReLU) and a 1à 1 convolution. For the CIFAR-10 and ImageNet experi- ments, these layers have 1024 feature maps; for the MNIST experiment, the layers have 32 feature maps. Residual and layer-to-output connections are used across the layers of all three networks. The networks used in the experiments have the following hyperparameters. For MNIST we use a Diagonal BiLSTM with 7 layers and a value of h = 16 (Section 3.3 and Figure 5 right). For CIFAR-10 the Row and Diagonal BiLSTMs have 12 layers and a number of h = 128 units. The Pixel- CNN has 15 layers and h = 128. For 32 à 32 ImageNet we adopt a 12 layer Row LSTM with h = 384 units and for 64 à 64 ImageNet we use a 4 layer Row LSTM with h = 512 units; the latter model does not use residual con- nections. | 1601.06759#20 | 1601.06759#22 | 1601.06759 | [
"1511.01844"
] |
1601.06759#22 | Pixel Recurrent Neural Networks | # 3.6. Multi-Scale PixelRNN The Multi-Scale PixelRNN is composed of an uncondi- tional PixelRNN and one or more conditional PixelRNNs. The unconditional network ï¬ rst generates in the standard way a smaller sà s image that is subsampled from the orig- inal image. The conditional network then takes the s à s image as an additional input and generates a larger n à n image, as shown in Figure 2 (Middle). # 5. Experiments In this section we describe our experiments and results. We begin by describing the way we evaluate and compare our results. In Section 5.2 we give details about the training. Then we give results on the relative effectiveness of archi- tectural components and our best results on the MNIST, CIFAR-10 and ImageNet datasets. The conditional network is similar to a standard PixelRNN, but each of its layers is biased with an upsampled version of the small s à s image. The upsampling and biasing pro- cesses are deï¬ ned as follows. In the upsampling process, one uses a convolutional network with deconvolutional lay- ers to construct an enlarged feature map of size c à n à n, where c is the number of features in the output map of the upsampling network. Then, in the biasing process, for each # 5.1. Evaluation | 1601.06759#21 | 1601.06759#23 | 1601.06759 | [
"1511.01844"
] |
1601.06759#23 | Pixel Recurrent Neural Networks | All our models are trained and evaluated on the log- likelihood loss function coming from a discrete distribu- tion. Although natural image data is usually modeled with continuous distributions using density functions, we can compare our results with previous art in the following way. Pixel Recurrent Neural Networks In the literature it is currently best practice to add real- valued noise to the pixel values to dequantize the data when using density functions (Uria et al., 2013). When uniform noise is added (with values in the interval [0, 1]), then the log-likelihoods of continuous and discrete models are di- rectly comparable (Theis et al., 2015). In our case, we can use the values from the discrete distribution as a piecewise- uniform continuous function that has a constant value for every interval [i, i + 1], i = 1, 2, . . . 256. This correspond- ing distribution will have the same log-likelihood (on data with added noise) as the original discrete distribution (on discrete data). In Figure 6 we show a few softmax activations from the model. | 1601.06759#22 | 1601.06759#24 | 1601.06759 | [
"1511.01844"
] |
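The bits-per-dimension figures discussed in the evaluation records above are the total discrete negative log-likelihood converted to base 2 and normalised by the number of colour values in the image. A minimal sketch of that conversion, assuming the CIFAR-10 image shape:

```python
# Convert a total NLL in nats to bits per dimension.
import numpy as np

def bits_per_dim(total_nll_nats, image_shape=(32, 32, 3)):
    dims = np.prod(image_shape)
    return total_nll_nats / (dims * np.log(2.0))

# Sanity check: a uniform model over 256 values spends exactly 8 bits per colour value.
uniform_nll = 32 * 32 * 3 * np.log(256.0)
print(bits_per_dim(uniform_nll))   # 8.0
```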
1601.06759#24 | Pixel Recurrent Neural Networks | Although we donâ t embed prior information about the meaning or relations of the 256 color categories, e.g. that pixel values 51 and 52 are neighbors, the distributions predicted by the model are meaningful and can be multi- modal, skewed, peaked or long tailed. Also note that values 0 and 255 often get a much higher probability as they are more frequent. Another advantage of the discrete distribu- tion is that we do not worry about parts of the distribution mass lying outside the interval [0, 255], which is something that typically happens with continuous distributions. For MNIST we report the negative log-likelihood in nats as it is common practice in literature. For CIFAR-10 and ImageNet we report negative log-likelihoods in bits per di- mension. The total discrete log-likelihood is normalized by the dimensionality of the images (e.g., 32 Ã 32 Ã 3 = 3072 for CIFAR-10). These numbers are interpretable as the number of bits that a compression scheme based on this model would need to compress every RGB color value (van den Oord & Schrauwen, 2014b; Theis et al., 2015); in practice there is also a small overhead due to arithmetic coding. # 5.2. | 1601.06759#23 | 1601.06759#25 | 1601.06759 | [
"1511.01844"
] |
1601.06759#25 | Pixel Recurrent Neural Networks | Training Details A 0 2550 255 Our models are trained on GPUs using the Torch toolbox. From the different parameter update rules tried, RMSProp gives best convergence performance and is used for all ex- periments. The learning rate schedules were manually set for every dataset to the highest values that allowed fast con- vergence. The batch sizes also vary for different datasets. For smaller datasets such as MNIST and CIFAR-10 we use smaller batch sizes of 16 images as this seems to regularize the models. For ImageNet we use as large a batch size as allowed by the GPU memory; this corresponds to 64 im- ages/batch for 32 Ã 32 ImageNet, and 32 images/batch for 64 Ã 64 ImageNet. Apart from scaling and centering the images at the input of the network, we donâ t use any other preprocessing or augmentation. For the multinomial loss function we use the raw pixel color values as categories. For all the PixelRNN models, we learn the initial recurrent state of the network. | 1601.06759#24 | 1601.06759#26 | 1601.06759 | [
"1511.01844"
] |
1601.06759#26 | Pixel Recurrent Neural Networks | Figure 6. Example softmax activations from the model. The top left shows the distribution of the ï¬ rst pixel red value (ï¬ rst value to sample). # 5.4. Residual Connections Another core component of the networks is residual con- nections. In Table 2 we show the results of having residual connections, having standard skip connections or having both, in the 12-layer CIFAR-10 Row LSTM model. We see that using residual connections is as effective as using skip connections; using both is also effective and preserves the advantage. | 1601.06759#25 | 1601.06759#27 | 1601.06759 | [
"1511.01844"
] |
1601.06759#27 | Pixel Recurrent Neural Networks | # 5.3. Discrete Softmax Distribution Apart from being intuitive and easy to implement, we ï¬ nd that using a softmax on discrete pixel values instead of a mixture density approach on continuous pixel values gives better results. For the Row LSTM model with a softmax output distribution we obtain 3.06 bits/dim on the CIFAR- 10 validation set. For the same model with a Mixture of Conditional Gaussian Scale Mixtures (MCGSM) (Theis & Bethge, 2015) we obtain 3.22 bits/dim. | 1601.06759#26 | 1601.06759#28 | 1601.06759 | [
"1511.01844"
] |
1601.06759#28 | Pixel Recurrent Neural Networks | No skip Skip No residual: Residual: 3.22 3.07 3.09 3.06 Table 2. Effect of residual and skip connections in the Row LSTM network evaluated on the Cifar-10 validation set in bits/dim. When using both the residual and skip connections, we see in Table 3 that performance of the Row LSTM improves with increased depth. This holds for up to the 12 LSTM layers that we tried. Pixel Recurrent Neural Networks Figure 7. Samples from models trained on CIFAR-10 (left) and ImageNet 32x32 (right) images. In general we can see that the models capture local spatial dependencies relatively well. The ImageNet model seems to be better at capturing more global structures than the CIFAR-10 model. The ImageNet model was larger and trained on much more data, which explains the qualitative difference in samples. # layers: 1 2 3 6 9 12 NLL: 3.30 3.20 3.17 3.09 3.08 3.06 Table 3. Effect of the number of layers on the negative log likeli- hood evaluated on the CIFAR-10 validation set (bits/dim). # 5.5. MNIST Although the goal of our work was to model natural images on a large scale, we also tried our model on the binary ver- sion (Salakhutdinov & Murray, 2008) of MNIST (LeCun et al., 1998) as it is a good sanity check and there is a lot of previous art on this dataset to compare with. In Table 4 we report the performance of the Diagonal BiLSTM model and that of previous published results. To our knowledge this is the best reported result on MNIST so far. Model NLL Test DBM 2hl [1]: DBN 2hl [2]: NADE [3]: EoNADE 2hl (128 orderings) [3]: EoNADE-5 2hl (128 orderings) [4]: DLGM [5]: DLGM 8 leapfrog steps [6]: DARN 1hl [7]: MADE 2hl (32 masks) [8]: DRAW [9]: PixelCNN: Row LSTM: Diagonal BiLSTM (1 layer, h = 32): | 1601.06759#27 | 1601.06759#29 | 1601.06759 | [
"1511.01844"
] |
1601.06759#29 | Pixel Recurrent Neural Networks | Diagonal BiLSTM (7 layers, h = 16): â 84.62 â 84.55 88.33 85.10 84.68 â 86.60 â 85.51 â 84.13 86.64 â ¤ 80.97 81.30 80.54 80.75 79.20 # 5.6. CIFAR-10 Next we test our models on the CIFAR-10 dataset (Krizhevsky, 2009). Table 5 lists the results of our mod- els and that of previously published approaches. All our results were obtained without data augmentation. For the proposed networks, the Diagonal BiLSTM has the best performance, followed by the Row LSTM and the Pixel- CNN. | 1601.06759#28 | 1601.06759#30 | 1601.06759 | [
"1511.01844"
] |