Dataset columns (with value statistics from the dataset viewer):
doi: string (lengths 10–10)
chunk-id: int64 (0–936)
chunk: string (lengths 401–2.02k)
id: string (lengths 12–14)
title: string (lengths 8–162)
summary: string (lengths 228–1.92k)
source: string (lengths 31–31)
authors: string (lengths 7–6.97k)
categories: string (lengths 5–107)
comment: string (lengths 4–398)
journal_ref: string (lengths 8–194)
primary_category: string (lengths 5–17)
published: string (lengths 8–8)
updated: string (lengths 8–8)
references: list
1608.08614
39
[11] C. Fellbaum. WordNet: An Electronic Lexical Database. Bradford Books, 1998. [12] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 580–587. IEEE, 2014. [13] G. Gkioxari, R. Girshick, and J. Malik. Contextual action recognition with R*CNN. In ICCV, 2015. [14] R. Goroshin, J. Bruna, J. Tompson, D. Eigen, and Y. LeCun. Unsupervised feature learning from temporal data. arXiv preprint arXiv:1504.02518, 2015. [15] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015. [16] D. Jayaraman and K. Grauman. Learning image representations tied to ego-motion. In Proceedings of the IEEE International Conference on Computer Vision, pages 1413–1421, 2015.
1608.08614#39
What makes ImageNet good for transfer learning?
The tremendous success of ImageNet-trained deep features on a wide range of transfer tasks begs the question: what are the properties of the ImageNet dataset that are critical for learning good, general-purpose features? This work provides an empirical investigation of various facets of this question: Is more pre-training data always better? How does feature quality depend on the number of training examples per class? Does adding more object classes improve performance? For the same data budget, how should the data be split into classes? Is fine-grained recognition necessary for learning good features? Given the same number of training classes, is it better to have coarse classes or fine-grained classes? Which is better: more classes or more examples per class? To answer these and related questions, we pre-trained CNN features on various subsets of the ImageNet dataset and evaluated transfer performance on PASCAL detection, PASCAL action classification, and SUN scene classification tasks. Our overall findings suggest that most changes in the choice of pre-training data long thought to be critical do not significantly affect transfer performance.
http://arxiv.org/pdf/1608.08614
Minyoung Huh, Pulkit Agrawal, Alexei A. Efros
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20160830
20161210
[ { "id": "1507.06550" }, { "id": "1504.02518" }, { "id": "1512.04412" } ]
1608.08614
40
[17] Y. Jia. Caffe: An open source convolutional architecture for fast feature embedding. http://caffe.berkeleyvision.org/, 2013. [18] A. Joulin, L. van der Maaten, A. Jabri, and N. Vasilache. Learning visual features from large weakly supervised data. In ECCV, 2016. [19] A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3128–3137, 2015. [20] D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. [21] P. Krähenbühl, C. Doersch, J. Donahue, and T. Darrell. Data-dependent initializations of convolutional neural networks. In ICLR, 2016. [22] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
1608.08614#40
What makes ImageNet good for transfer learning?
The tremendous success of ImageNet-trained deep features on a wide range of transfer tasks begs the question: what are the properties of the ImageNet dataset that are critical for learning good, general-purpose features? This work provides an empirical investigation of various facets of this question: Is more pre-training data always better? How does feature quality depend on the number of training examples per class? Does adding more object classes improve performance? For the same data budget, how should the data be split into classes? Is fine-grained recognition necessary for learning good features? Given the same number of training classes, is it better to have coarse classes or fine-grained classes? Which is better: more classes or more examples per class? To answer these and related questions, we pre-trained CNN features on various subsets of the ImageNet dataset and evaluated transfer performance on PASCAL detection, PASCAL action classification, and SUN scene classification tasks. Our overall findings suggest that most changes in the choice of pre-training data long thought to be critical do not significantly affect transfer performance.
http://arxiv.org/pdf/1608.08614
Minyoung Huh, Pulkit Agrawal, Alexei A. Efros
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20160830
20161210
[ { "id": "1507.06550" }, { "id": "1504.02518" }, { "id": "1512.04412" } ]
1608.08614
41
[22] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012. [23] G. Larsson, M. Maire, and G. Shakhnarovich. Learning representations for automatic colorization. In ECCV, 2016. [24] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015. [25] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1(4):541–551, 1989. [26] Z. Li and D. Hoiem. Learning without forgetting. In ECCV, 2016. [27] H. Mobahi, R. Collobert, and J. Weston. Deep learning from temporal coherence in video. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 737–744. ACM, 2009. [28] M. Noroozi and P. Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In ECCV, 2016.
1608.08614#41
What makes ImageNet good for transfer learning?
The tremendous success of ImageNet-trained deep features on a wide range of transfer tasks begs the question: what are the properties of the ImageNet dataset that are critical for learning good, general-purpose features? This work provides an empirical investigation of various facets of this question: Is more pre-training data always better? How does feature quality depend on the number of training examples per class? Does adding more object classes improve performance? For the same data budget, how should the data be split into classes? Is fine-grained recognition necessary for learning good features? Given the same number of training classes, is it better to have coarse classes or fine-grained classes? Which is better: more classes or more examples per class? To answer these and related questions, we pre-trained CNN features on various subsets of the ImageNet dataset and evaluated transfer performance on PASCAL detection, PASCAL action classification, and SUN scene classification tasks. Our overall findings suggest that most changes in the choice of pre-training data long thought to be critical do not significantly affect transfer performance.
http://arxiv.org/pdf/1608.08614
Minyoung Huh, Pulkit Agrawal, Alexei A. Efros
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20160830
20161210
[ { "id": "1507.06550" }, { "id": "1504.02518" }, { "id": "1512.04412" } ]
1608.08614
42
[28] M. Noroozi and P. Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In ECCV, 2016. [29] B. A. Olshausen et al. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, 1996. [30] A. Owens, P. Isola, J. McDermott, A. Torralba, E. Adelson, and W. T. Freeman. Visually indicated sounds. In CVPR, 2016. [31] D. Pathak, P. Krähenbühl, J. Donahue, T. Darrell, and A. Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016. [32] M. Ranzato, F. J. Huang, Y.-L. Boureau, and Y. LeCun. Unsupervised learning of invariant feature hierarchies with applications to object recognition. In Computer Vision and Pattern Recognition, 2007. CVPR'07. IEEE Conference on, pages 1–8. IEEE, 2007.
1608.08614#42
What makes ImageNet good for transfer learning?
The tremendous success of ImageNet-trained deep features on a wide range of transfer tasks begs the question: what are the properties of the ImageNet dataset that are critical for learning good, general-purpose features? This work provides an empirical investigation of various facets of this question: Is more pre-training data always better? How does feature quality depend on the number of training examples per class? Does adding more object classes improve performance? For the same data budget, how should the data be split into classes? Is fine-grained recognition necessary for learning good features? Given the same number of training classes, is it better to have coarse classes or fine-grained classes? Which is better: more classes or more examples per class? To answer these and related questions, we pre-trained CNN features on various subsets of the ImageNet dataset and evaluated transfer performance on PASCAL detection, PASCAL action classification, and SUN scene classification tasks. Our overall findings suggest that most changes in the choice of pre-training data long thought to be critical do not significantly affect transfer performance.
http://arxiv.org/pdf/1608.08614
Minyoung Huh, Pulkit Agrawal, Alexei A. Efros
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20160830
20161210
[ { "id": "1507.06550" }, { "id": "1504.02518" }, { "id": "1512.04412" } ]
1608.08614
43
[33] A. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson. CNN features off-the-shelf: an astounding baseline for recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 806–813, 2014. [34] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pages 91–99, 2015. [35] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 2015. [36] R. Salakhutdinov and G. E. Hinton. Deep Boltzmann machines. In International Conference on Artificial Intelligence and Statistics, pages 448–455, 2009.
1608.08614#43
What makes ImageNet good for transfer learning?
The tremendous success of ImageNet-trained deep features on a wide range of transfer tasks begs the question: what are the properties of the ImageNet dataset that are critical for learning good, general-purpose features? This work provides an empirical investigation of various facets of this question: Is more pre-training data always better? How does feature quality depend on the number of training examples per class? Does adding more object classes improve performance? For the same data budget, how should the data be split into classes? Is fine-grained recognition necessary for learning good features? Given the same number of training classes, is it better to have coarse classes or fine-grained classes? Which is better: more classes or more examples per class? To answer these and related questions, we pre-trained CNN features on various subsets of the ImageNet dataset and evaluated transfer performance on PASCAL detection, PASCAL action classification, and SUN scene classification tasks. Our overall findings suggest that most changes in the choice of pre-training data long thought to be critical do not significantly affect transfer performance.
http://arxiv.org/pdf/1608.08614
Minyoung Huh, Pulkit Agrawal, Alexei A. Efros
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20160830
20161210
[ { "id": "1507.06550" }, { "id": "1504.02518" }, { "id": "1512.04412" } ]
1608.08614
44
[37] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv preprint arXiv:1312.6229, 2013. [38] K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. In Advances in Neural Information Processing Systems, pages 568–576, 2014. [39] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014. [40] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, 2015. [41] X. Wang and A. Gupta. Unsupervised learning of visual representations using videos. In Proceedings of the IEEE International Conference on Computer Vision, pages 2794–2802, 2015.
1608.08614#44
What makes ImageNet good for transfer learning?
The tremendous success of ImageNet-trained deep features on a wide range of transfer tasks begs the question: what are the properties of the ImageNet dataset that are critical for learning good, general-purpose features? This work provides an empirical investigation of various facets of this question: Is more pre-training data always better? How does feature quality depend on the number of training examples per class? Does adding more object classes improve performance? For the same data budget, how should the data be split into classes? Is fine-grained recognition necessary for learning good features? Given the same number of training classes, is it better to have coarse classes or fine-grained classes? Which is better: more classes or more examples per class? To answer these and related questions, we pre-trained CNN features on various subsets of the ImageNet dataset and evaluated transfer performance on PASCAL detection, PASCAL action classification, and SUN scene classification tasks. Our overall findings suggest that most changes in the choice of pre-training data long thought to be critical do not significantly affect transfer performance.
http://arxiv.org/pdf/1608.08614
Minyoung Huh, Pulkit Agrawal, Alexei A. Efros
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20160830
20161210
[ { "id": "1507.06550" }, { "id": "1504.02518" }, { "id": "1512.04412" } ]
1608.08614
45
[42] P. Weinzaepfel, J. Revaud, Z. Harchaoui, and C. Schmid. Deepflow: Large displacement optical flow with deep matching. In Proceedings of the IEEE International Conference on Computer Vision, pages 1385–1392, 2013. [43] L. Wiskott and T. J. Sejnowski. Slow feature analysis: Unsupervised learning of invariances. Neural computation, 14(4):715–770, 2002. [44] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pages 3320–3328, 2014. [45] R. Zhang, P. Isola, and A. Efros. Colorful image colorization. In ECCV, 2016. [46] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition using places database. NIPS, 2014.
1608.08614#45
What makes ImageNet good for transfer learning?
The tremendous success of ImageNet-trained deep features on a wide range of transfer tasks begs the question: what are the properties of the ImageNet dataset that are critical for learning good, general-purpose features? This work provides an empirical investigation of various facets of this question: Is more pre-training data always better? How does feature quality depend on the number of training examples per class? Does adding more object classes improve performance? For the same data budget, how should the data be split into classes? Is fine-grained recognition necessary for learning good features? Given the same number of training classes, is it better to have coarse classes or fine-grained classes? Which is better: more classes or more examples per class? To answer these and related questions, we pre-trained CNN features on various subsets of the ImageNet dataset and evaluated transfer performance on PASCAL detection, PASCAL action classification, and SUN scene classification tasks. Our overall findings suggest that most changes in the choice of pre-training data long thought to be critical do not significantly affect transfer performance.
http://arxiv.org/pdf/1608.08614
Minyoung Huh, Pulkit Agrawal, Alexei A. Efros
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20160830
20161210
[ { "id": "1507.06550" }, { "id": "1504.02518" }, { "id": "1512.04412" } ]
1608.07905
1
Jing Jiang School of Information Systems Singapore Management University [email protected] # ABSTRACT Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al. (2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al. (2016) using logistic regression and manually crafted features. # INTRODUCTION
1608.07905#1
Machine Comprehension Using Match-LSTM and Answer Pointer
Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.(2016) using logistic regression and manually crafted features.
http://arxiv.org/pdf/1608.07905
Shuohang Wang, Jing Jiang
cs.CL, cs.AI
11 pages; 3 figures
null
cs.CL
20160829
20161107
[ { "id": "1602.04341" }, { "id": "1607.02250" }, { "id": "1610.09996" } ]
1608.07905
2
# INTRODUCTION Machine comprehension of text is one of the ultimate goals of natural language processing. While the ability of a machine to understand text can be assessed in many different ways, in recent years, several benchmark datasets have been created to focus on answering questions as a way to evaluate machine comprehension (Richardson et al., 2013; Hermann et al., 2015; Hill et al., 2016; Weston et al., 2016; Rajpurkar et al., 2016). In this setup, typically the machine is first presented with a piece of text such as a news article or a story. The machine is then expected to answer one or multiple questions related to the text.
1608.07905#2
Machine Comprehension Using Match-LSTM and Answer Pointer
Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.(2016) using logistic regression and manually crafted features.
http://arxiv.org/pdf/1608.07905
Shuohang Wang, Jing Jiang
cs.CL, cs.AI
11 pages; 3 figures
null
cs.CL
20160829
20161107
[ { "id": "1602.04341" }, { "id": "1607.02250" }, { "id": "1610.09996" } ]
1608.07905
3
In most of the benchmark datasets, a question can be treated as a multiple choice question, whose correct answer is to be chosen from a set of provided candidate answers (Richardson et al., 2013; Hill et al., 2016). Presumably, questions with more given candidate answers are more challenging. The Stanford Question Answering Dataset (SQuAD) introduced recently by Rajpurkar et al. (2016) contains such more challenging questions whose correct answers can be any sequence of tokens from the given text. Moreover, unlike some other datasets whose questions and answers were created automatically in Cloze style (Hermann et al., 2015; Hill et al., 2016), the questions and answers in SQuAD were created by humans through crowdsourcing, which makes the dataset more realistic. Given these advantages of the SQuAD dataset, in this paper, we focus on this new dataset to study machine comprehension of text. A sample piece of text and three of its associated questions are shown in Table 1.
1608.07905#3
Machine Comprehension Using Match-LSTM and Answer Pointer
Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.(2016) using logistic regression and manually crafted features.
http://arxiv.org/pdf/1608.07905
Shuohang Wang, Jing Jiang
cs.CL, cs.AI
11 pages; 3 figures
null
cs.CL
20160829
20161107
[ { "id": "1602.04341" }, { "id": "1607.02250" }, { "id": "1610.09996" } ]
1608.07905
4
Traditional solutions to this kind of question answering tasks rely on NLP pipelines that involve multiple steps of linguistic analyses and feature engineering, including syntactic parsing, named entity recognition, question classification, semantic parsing, etc. Recently, with the advances of applying neural network models in NLP, there has been much interest in building end-to-end neural architectures for various NLP tasks, including several pieces of work on machine comprehension (Hermann et al., 2015; Hill et al., 2016; Yin et al., 2016; Kadlec et al., 2016; Cui et al., 2016). However, given the properties of previous machine comprehension datasets, existing end-to-end neural architectures for the task either rely on the candidate answers (Hill et al., 2016; Yin et al., 2016) or assume that the # Under review as a conference paper at ICLR 2017 In 1870, Tesla moved to Karlovac, to attend school at the Higher Real Gymnasium, where he was profoundly influenced by a math teacher Martin Sekulić. The classes were held in German, as it was a school within the Austro-Hungarian Military Frontier. Tesla was able to perform integral calculus in his head, which prompted his teachers to believe that he was cheating. He finished a four-year term in three years, graduating in 1873.
1608.07905#4
Machine Comprehension Using Match-LSTM and Answer Pointer
Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.(2016) using logistic regression and manually crafted features.
http://arxiv.org/pdf/1608.07905
Shuohang Wang, Jing Jiang
cs.CL, cs.AI
11 pages; 3 figures
null
cs.CL
20160829
20161107
[ { "id": "1602.04341" }, { "id": "1607.02250" }, { "id": "1610.09996" } ]
1608.07905
5
1. In what language were the classes given? German 2. Who was Tesla’s main influence in Karlovac? Martin Sekulić 3. Why did Tesla go to Karlovac? attend school at the Higher Real Gymnasium Table 1: A paragraph from Wikipedia and three associated questions together with their answers, taken from the SQuAD dataset. The tokens in bold in the paragraph are our predicted answers while the texts next to the questions are the ground truth answers. answer is a single token (Hermann et al., 2015; Kadlec et al., 2016; Cui et al., 2016), which make these methods unsuitable for the SQuAD dataset. In this paper, we propose a new end-to-end neural architecture to address the machine comprehension problem as defined in the SQuAD dataset.
1608.07905#5
Machine Comprehension Using Match-LSTM and Answer Pointer
Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.(2016) using logistic regression and manually crafted features.
http://arxiv.org/pdf/1608.07905
Shuohang Wang, Jing Jiang
cs.CL, cs.AI
11 pages; 3 figures
null
cs.CL
20160829
20161107
[ { "id": "1602.04341" }, { "id": "1607.02250" }, { "id": "1610.09996" } ]
1608.07905
6
Specifically, observing that in the SQuAD dataset many questions are paraphrases of sentences from the original text, we adopt a match-LSTM model that we developed earlier for textual entailment (Wang & Jiang, 2016). We further adopt the Pointer Net (Ptr-Net) model developed by Vinyals et al. (2015), which enables the predictions of tokens from the input sequence only rather than from a larger fixed vocabulary and thus allows us to generate answers that consist of multiple tokens from the original text. We propose two ways to apply the Ptr-Net model for our task: a sequence model and a boundary model. We also further extend the boundary model with a search mechanism. Experiments on the SQuAD dataset show that our two models both outperform the best performance reported by Rajpurkar et al. (2016). Moreover, using an ensemble of several of our models, we can achieve very competitive performance on SQuAD.
1608.07905#6
Machine Comprehension Using Match-LSTM and Answer Pointer
Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.(2016) using logistic regression and manually crafted features.
http://arxiv.org/pdf/1608.07905
Shuohang Wang, Jing Jiang
cs.CL, cs.AI
11 pages; 3 figures
null
cs.CL
20160829
20161107
[ { "id": "1602.04341" }, { "id": "1607.02250" }, { "id": "1610.09996" } ]
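The boundary model mentioned in the chunk above predicts only the start and end positions of the answer span in the passage. As a rough illustration of boundary-style span decoding (not the paper's exact search mechanism; `max_answer_len` is an assumed hyperparameter), picking the best span from per-token start/end scores can be sketched as:

```python
import numpy as np

def boundary_decode(start_logits, end_logits, max_answer_len=15):
    """Return the span (s, e), s <= e, maximizing start_logits[s] + end_logits[e],
    i.e. the product of start and end probabilities in log space.
    A simplified sketch; the bounded span length keeps the search cheap."""
    best_span, best_score = (0, 0), -np.inf
    for s in range(len(start_logits)):
        for e in range(s, min(s + max_answer_len, len(end_logits))):
            score = start_logits[s] + end_logits[e]
            if score > best_score:
                best_score, best_span = score, (s, e)
    return best_span

# Toy example: the model is most confident the answer starts at token 1
# and ends at token 2.
start = np.log(np.array([0.1, 0.7, 0.2]))
end = np.log(np.array([0.1, 0.2, 0.7]))
span = boundary_decode(start, end)
```

Because the two pointers are constrained to form a contiguous span, the boundary model avoids the sequence model's risk of producing non-contiguous token sets.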
1608.07905
7
Our contributions can be summarized as follows: (1) We propose two new end-to-end neural network models for machine comprehension, which combine match-LSTM and Ptr-Net to handle the special properties of the SQuAD dataset. (2) We have achieved the performance of an exact match score of 67.9% and an F1 score of 77.0% on the unseen test dataset, which is much better than the feature-engineered solution (Rajpurkar et al., 2016). Our performance is also close to the state of the art on SQuAD, which is 71.6% in terms of exact match and 80.4% in terms of F1 from Salesforce Research. (3) Our further analyses of the models reveal some useful insights for further improving the method. Besides, we also made our code available online 1. # 2 METHOD In this section, we first briefly review match-LSTM and Pointer Net. These two pieces of existing work lay the foundation of our method. We then present our end-to-end neural architecture for machine comprehension. 2.1 MATCH-LSTM
Machine Comprehension Using Match-LSTM and Answer Pointer
2.1 MATCH-LSTM

In a recent work on learning natural language inference, we proposed a match-LSTM model for predicting textual entailment (Wang & Jiang, 2016). In textual entailment, two sentences are given, where one is a premise and the other is a hypothesis. To predict whether the premise entails the hypothesis, the match-LSTM model goes through the tokens of the hypothesis sequentially. At each position of the hypothesis, an attention mechanism is used to obtain a weighted vector representation of the premise. This weighted premise is then combined with a vector representation of the current token of the hypothesis and fed into an LSTM, which we call the match-LSTM. The match-LSTM essentially sequentially aggregates the matching of the attention-weighted premise to each token of the hypothesis and uses the aggregated matching result to make a final prediction.

1https://github.com/shuohangwang/SeqMatchSeq

# Under review as a conference paper at ICLR 2017

[Figure 1 appears here: panel (a) Sequence Model and panel (b) Boundary Model, each showing LSTM preprocessing layers for P and Q, a match-LSTM layer, and an Answer Pointer layer over an example question about Tesla.]
Figure 1: An overview of our two models. Both models consist of an LSTM preprocessing layer, a match-LSTM layer and an Answer Pointer layer. For each match-LSTM in a particular direction, the attention-weighted question representation H^q −→α_i^⊤ is computed using the −→α_i in the corresponding direction, as described in Eqn. (2).

2.2 POINTER NET

Vinyals et al. (2015) proposed a Pointer Network (Ptr-Net) model to solve a special kind of problem where we want to generate an output sequence whose tokens must come from the input sequence. Instead of picking an output token from a fixed vocabulary, Ptr-Net uses an attention mechanism as a pointer to select a position from the input sequence as an output symbol. The pointer mechanism has inspired some recent work on language processing (Gu et al., 2016; Kadlec et al., 2016). Here we adopt Ptr-Net in order to construct answers using tokens from the input text.
# 2.3 OUR METHOD

Formally, the problem we are trying to solve can be formulated as follows. We are given a piece of text, which we refer to as a passage, and a question related to the passage. The passage is represented by a matrix P ∈ R^{d×P}, where P is the length (number of tokens) of the passage and d is the dimensionality of word embeddings. Similarly, the question is represented by a matrix Q ∈ R^{d×Q}, where Q is the length of the question. Our goal is to identify a subsequence from the passage as the answer to the question.

As pointed out earlier, since the output tokens are from the input, we would like to adopt the Ptr-Net for this problem. A straightforward way of applying Ptr-Net here is to treat an answer as a sequence of tokens from the input passage but ignore the fact that these tokens are consecutive in the original passage, because Ptr-Net does not make the consecutivity assumption. Specifically, we represent the answer as a sequence of integers a = (a1, a2, . . .), where each ai is an integer between 1 and P, indicating a certain position in the passage.
Alternatively, if we want to ensure consecutivity, that is, if we want to ensure that we indeed select a subsequence from the passage as an answer, we can use the Ptr-Net to predict only the start and the end of an answer. In this case, the Ptr-Net only needs to select two tokens from the input passage, and all the tokens between these two tokens in the passage are treated as the answer. Specifically, we can represent the answer to be predicted as two integers a = (as, ae), where as and ae are integers between 1 and P.

We refer to the first setting above as a sequence model and the second setting above as a boundary model. For either model, we assume that a set of training examples in the form of triplets {(Pn, Qn, an)}_{n=1}^{N} is given.
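As a concrete illustration, the two answer representations above can be contrasted with a minimal sketch (the helper names are ours, not the paper's; positions are 1-based, matching the notation a_i ∈ [1, P]):

```python
def sequence_repr(start, end):
    """Answer as a sequence of consecutive passage positions (sequence model)."""
    return tuple(range(start, end + 1))

def boundary_repr(start, end):
    """Answer as just its start and end positions (a_s, a_e) (boundary model)."""
    return (start, end)

passage = "Tesla was born in 1856 in the village of Smiljan".split()
# Suppose the gold answer is "1856", the 5th token of the passage.
a_seq = sequence_repr(5, 5)
a_bnd = boundary_repr(5, 5)
```

For a multi-token answer such as tokens 3 through 5, the sequence model must emit each position in turn, while the boundary model emits only the pair (3, 5).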
An overview of the two neural network models is shown in Figure 1. Both models consist of three layers: (1) An LSTM preprocessing layer that preprocesses the passage and the question using LSTMs. (2) A match-LSTM layer that tries to match the passage against the question. (3) An Answer Pointer (Ans-Ptr) layer that uses Ptr-Net to select a set of tokens from the passage as the answer. The difference between the two models lies only in the third layer.

# LSTM Preprocessing Layer

The purpose of the LSTM preprocessing layer is to incorporate contextual information into the representation of each token in the passage and the question. We use a standard one-directional LSTM (Hochreiter & Schmidhuber, 1997)2 to process the passage and the question separately, as shown below:

H^p = −−−→LSTM(P),   H^q = −−−→LSTM(Q).   (1)

The resulting matrices H^p ∈ R^{l×P} and H^q ∈ R^{l×Q} are hidden representations of the passage and the question, where l is the dimensionality of the hidden vectors. In other words, the ith column vector h^p_i (or h^q_i) in H^p (or H^q) represents the ith token in the passage (or the question) together with some contextual information from the left.
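The preprocessing pass H^p = LSTM(P) can be sketched as a plain NumPy forward loop. The weight shapes and random initialization below are illustrative stand-ins for learned parameters, and this sketch keeps the standard output gate (the paper's variant removes it in the preprocessing layer):

```python
import numpy as np

def lstm_forward(X, l, seed=0):
    """X: d x T matrix of token embeddings; returns H: l x T hidden states."""
    d, T = X.shape
    rng = np.random.default_rng(seed)
    # Stacked parameters for the input, forget, output and candidate gates.
    W = rng.standard_normal((4 * l, d)) * 0.1
    U = rng.standard_normal((4 * l, l)) * 0.1
    b = np.zeros(4 * l)
    h, c = np.zeros(l), np.zeros(l)
    H = np.zeros((l, T))
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for t in range(T):
        z = W @ X[:, t] + U @ h + b
        i, f, o, g = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)   # cell state update
        h = o * np.tanh(c)           # hidden state, one column of H
        H[:, t] = h
    return H

d, l, T = 8, 5, 6
P = np.random.default_rng(1).standard_normal((d, T))  # toy passage embeddings
Hp = lstm_forward(P, l)
```

Each column of `Hp` plays the role of h^p_i, a contextualized representation of the ith passage token.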
# Match-LSTM Layer

We apply the match-LSTM model (Wang & Jiang, 2016) proposed for textual entailment to our machine comprehension problem by treating the question as a premise and the passage as a hypothesis. The match-LSTM sequentially goes through the passage. At position i of the passage, it first uses the standard word-by-word attention mechanism to obtain attention weight vector −→α_i ∈ R^Q as follows:

−→G_i = tanh(W^q H^q + (W^p h^p_i + W^r −→h^r_{i−1} + b^p) ⊗ e_Q),
−→α_i = softmax(w^⊤ −→G_i + b ⊗ e_Q),   (2)
where W^q, W^p, W^r ∈ R^{l×l}, b^p, w ∈ R^l and b ∈ R are parameters to be learned, −→h^r_{i−1} ∈ R^l is the hidden vector of the one-directional match-LSTM (to be explained below) at position i − 1, and the outer product (· ⊗ e_Q) produces a matrix or row vector by repeating the vector or scalar on the left Q times.

Essentially, the resulting attention weight −→α_{i,j} above indicates the degree of matching between the ith token in the passage and the jth token in the question. Next, we use the attention weight vector −→α_i to obtain a weighted version of the question and combine it with the current token of the passage to form a vector −→z_i:

−→z_i = [h^p_i ; H^q −→α_i^⊤].   (3)

This vector −→z_i is fed into a standard one-directional LSTM to form our so-called match-LSTM:

−→h^r_i = −−−→LSTM(−→z_i, −→h^r_{i−1}),   (4)

where −→h^r_i ∈ R^l.
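One forward attention step of the match-LSTM can be sketched in NumPy as follows, with randomly initialized stand-ins for the learned parameters Wq, Wp, Wr, bp, w and b; the (· ⊗ e_Q) repetition is handled by broadcasting:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def match_step(Hq, hp_i, hr_prev, params):
    """One attention step: returns (alpha_i, z_i) for passage position i."""
    Wq, Wp, Wr, bp, w, b = params
    # G_i is l x Q; the bracketed term is broadcast across the Q columns.
    G = np.tanh(Wq @ Hq + (Wp @ hp_i + Wr @ hr_prev + bp)[:, None])
    alpha = softmax(w @ G + b)              # attention over question tokens
    z = np.concatenate([hp_i, Hq @ alpha])  # z_i = [h^p_i ; H^q alpha_i^T]
    return alpha, z

l, Q = 4, 5
rng = np.random.default_rng(0)
params = (rng.standard_normal((l, l)), rng.standard_normal((l, l)),
          rng.standard_normal((l, l)), rng.standard_normal(l),
          rng.standard_normal(l), 0.5)
Hq = rng.standard_normal((l, Q))            # toy question representation
alpha, z = match_step(Hq, rng.standard_normal(l), np.zeros(l), params)
```

The resulting `z` would then be fed to the match-LSTM cell to produce the next hidden state.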
We further build a similar match-LSTM in the reverse direction. The purpose is to obtain a representation that encodes the contexts from both directions for each token in the passage. To build this reverse match-LSTM, we first define

←−G_i = tanh(W^q H^q + (W^p h^p_i + W^r ←−h^r_{i+1} + b^p) ⊗ e_Q),
←−α_i = softmax(w^⊤ ←−G_i + b ⊗ e_Q).   (5)

Note that the parameters here (W^q, W^p, W^r, b^p, w and b) are the same as those used in Eqn. (2). We then define ←−z_i in a similar way and finally define ←−h^r_i to be the hidden representation at position i produced by the match-LSTM in the reverse direction.

2As the output gates in the preprocessing layer affect the final performance little, we remove them in our experiments.

# Under review as a conference paper at ICLR 2017
Let −→H^r ∈ R^{l×P} represent the hidden states [−→h^r_1, −→h^r_2, . . . , −→h^r_P] and ←−H^r ∈ R^{l×P} represent [←−h^r_1, ←−h^r_2, . . . , ←−h^r_P]. We define H^r ∈ R^{2l×P} as the concatenation of the two:

H^r = [−→H^r ; ←−H^r].   (6)

# Answer Pointer Layer

The top layer, the Answer Pointer (Ans-Ptr) layer, is motivated by the Pointer Net introduced by Vinyals et al. (2015). This layer uses the sequence H^r as input. Recall that we have two different models: The sequence model produces a sequence of answer tokens, but these tokens may not be consecutive in the original passage. The boundary model produces only the start token and the end token of the answer, and then all the tokens between these two in the original passage are considered to be the answer. We now explain the two models separately.
The Sequence Model: Recall that in the sequence model, the answer is represented by a sequence of integers a = (a1, a2, . . .) indicating the positions of the selected tokens in the original passage. The Ans-Ptr layer models the generation of these integers in a sequential manner. Because the length of an answer is not fixed, in order to stop generating answer tokens at a certain point, we allow each ak to take an integer value between 1 and P + 1, where P + 1 is a special value indicating the end of the answer. Once ak is set to P + 1, the generation of the answer stops.

In order to generate the kth answer token indicated by ak, first, the attention mechanism is used again to obtain an attention weight vector β_k ∈ R^{P+1}, where β_{k,j} (1 ≤ j ≤ P + 1) is the probability of selecting the jth token from the passage as the kth token in the answer, and β_{k,(P+1)} is the probability of stopping the answer generation at position k. β_k is modeled as follows:

F_k = tanh(V H̃^r + (W^a h^a_{k−1} + b^a) ⊗ e_{(P+1)}),   (7)
β_k = softmax(v^⊤ F_k + c ⊗ e_{(P+1)}),   (8)
where H̃^r ∈ R^{2l×(P+1)} is the concatenation of H^r with a zero vector, defined as H̃^r = [H^r; 0], V ∈ R^{l×2l}, W^a ∈ R^{l×l}, b^a, v ∈ R^l and c ∈ R are parameters to be learned, (· ⊗ e_{(P+1)}) follows the same definition as before, and h^a_{k−1} ∈ R^l is the hidden vector at position k − 1 of an answer LSTM as defined below:

h^a_k = −−−→LSTM(H̃^r β_k^⊤, h^a_{k−1}).   (9)

We can then model the probability of generating the answer sequence as

p(a|H^r) = ∏_k p(a_k | a_1, a_2, . . . , a_{k−1}, H^r),   (10)

and

p(a_k = j | a_1, a_2, . . . , a_{k−1}, H^r) = β_{k,j}.   (11)

To train the model, we minimize the following loss function based on the training examples:

− Σ_{n=1}^{N} log p(a_n | P_n, Q_n).   (12)
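The factorized answer probability and the resulting negative log-likelihood loss (Eqns. (10)-(12)) can be sketched as follows; the β vectors here are made-up illustrative distributions over P passage positions plus the stop symbol P + 1, not model outputs:

```python
import numpy as np

def answer_nll(betas, answer):
    """betas: list of (P+1,)-distributions, one per generation step.
    answer: 1-based positions, ending with the stop index P + 1.
    Returns -log p(a | H^r) under the factorization of Eqn. (10)."""
    return -sum(np.log(beta[a - 1]) for beta, a in zip(betas, answer))

P = 4
b1 = np.array([0.1, 0.6, 0.1, 0.1, 0.1])      # step 1 points at position 2
b2 = np.array([0.1, 0.1, 0.6, 0.1, 0.1])      # step 2 points at position 3
b3 = np.array([0.05, 0.05, 0.05, 0.05, 0.8])  # step 3 emits the stop symbol
loss = answer_nll([b1, b2, b3], [2, 3, P + 1])
```

Summing this quantity over all training triplets gives the loss of Eqn. (12).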
The Boundary Model: The boundary model works in a way very similar to the sequence model above, except that instead of predicting a sequence of indices a1, a2, . . ., we only need to predict two indices as and ae. So the main difference from the sequence model is that in the boundary model we do not need to add the zero padding to H^r, and the probability of generating an answer is simply modeled as

p(a|H^r) = p(a_s|H^r) p(a_e|a_s, H^r).   (13)
Model                                            l    |θ|    EM (Dev)  EM (Test)  F1 (Dev)  F1 (Test)
Random Guess                                     -    0      1.1       1.3        4.1       4.3
Logistic Regression                              -    -      40.0      40.4       51.0      51.0
DCR                                              -    -      62.5      62.5       71.2      71.0
Match-LSTM with Ans-Ptr (Sequence)               150  882K   54.4      -          68.2      -
Match-LSTM with Ans-Ptr (Boundary)               150  882K   61.1      -          71.2      -
Match-LSTM with Ans-Ptr (Boundary+Search)        150  882K   63.0      -          72.7      -
Match-LSTM with Ans-Ptr (Boundary+Search)        300  3.2M   63.1      -          72.7      -
Match-LSTM with Ans-Ptr (Boundary+Search+b)      150  1.1M   63.4      -          73.0      -
Match-LSTM with Bi-Ans-Ptr (Boundary+Search+b)   150  1.4M   64.1      64.7       73.9      73.7
Match-LSTM with Ans-Ptr (Boundary+Search+en)     150  882K   67.6      67.9       76.8      77.0
Table 2: Experiment Results. Here “Search” refers to globally searching for the span with no more than 15 tokens, “b” refers to using a bi-directional pre-processing LSTM, and “en” refers to the ensemble method.

We further extend the boundary model by incorporating a search mechanism. Specifically, during prediction, we limit the length of the span and globally search for the span with the highest probability computed as p(a_s) × p(a_e). Besides, as the boundary model outputs a fixed number of values, a bi-directional Ans-Ptr can simply be combined to fine-tune the predicted span.
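The global search used at prediction time can be sketched as an exhaustive scan over spans of bounded length; the probabilities below are illustrative inputs, standing in for the boundary model's start and end distributions:

```python
import numpy as np

def best_span(p_start, p_end, max_len=15):
    """Return the (start, end) span of at most max_len tokens that
    maximizes p_start[s] * p_end[e], together with that probability."""
    best, best_p = (0, 0), -1.0
    for s in range(len(p_start)):
        for e in range(s, min(s + max_len, len(p_end))):
            p = p_start[s] * p_end[e]
            if p > best_p:
                best, best_p = (s, e), p
    return best, best_p

# Toy distributions over a 5-token passage (0-based positions here).
p_s = np.array([0.1, 0.5, 0.2, 0.1, 0.1])
p_e = np.array([0.1, 0.1, 0.1, 0.6, 0.1])
span, p = best_span(p_s, p_e, max_len=15)
```

The double loop costs O(P · max_len), which is cheap for SQuAD-sized passages.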
# 3 EXPERIMENTS

In this section, we present our experiment results and perform some analyses to better understand how our models work.

# 3.1 DATA

We use the Stanford Question Answering Dataset (SQuAD) v1.1 to conduct our experiments. Passages in SQuAD come from 536 articles from Wikipedia covering a wide range of topics. Each passage is a single paragraph from a Wikipedia article, and each passage has around 5 questions associated with it. In total, there are 23,215 passages and 107,785 questions. The data has been split into a training set (with 87,599 question-answer pairs), a development set (with 10,570 question-answer pairs) and a hidden test set.

3.2 EXPERIMENT SETTINGS

We first tokenize all the passages, questions and answers. The resulting vocabulary contains 117K unique words. We use word embeddings from GloVe (Pennington et al., 2014) to initialize the model. Words not found in GloVe are initialized as zero vectors. The word embeddings are not updated during the training of the model.
1608.07905#22
Machine Comprehension Using Match-LSTM and Answer Pointer
Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.(2016) using logistic regression and manually crafted features.
http://arxiv.org/pdf/1608.07905
Shuohang Wang, Jing Jiang
cs.CL, cs.AI
11 pages; 3 figures
null
cs.CL
20160829
20161107
[ { "id": "1602.04341" }, { "id": "1607.02250" }, { "id": "1610.09996" } ]
1608.07905
23
The dimensionality l of the hidden layers is set to be 150 or 300. We use ADAMAX (Kingma & Ba, 2015) with the coefficients β1 = 0.9 and β2 = 0.999 to optimize the model. Each update is computed through a minibatch of 30 instances. We do not use L2 regularization. The performance is measured by two metrics: percentage of exact match with the ground truth answers, and word-level F1 score when comparing the tokens in the predicted answers with the tokens in the ground truth answers. Note that in the development set and the test set each question has around three ground truth answers. F1 scores with the best matching answers are used to compute the average F1 score.

# 3.3 RESULTS

The results of our models as well as the results of the baselines given by Rajpurkar et al. (2016) and Yu et al. (2016) are shown in Table 2. We can see that both of our two models have clearly outper-

# Under review as a conference paper at ICLR 2017
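The two evaluation metrics can be sketched as follows. This is a simplified version that skips the answer normalization (lowercasing, punctuation and article stripping) applied by the official SQuAD evaluation script, but keeps the max over the multiple ground truth answers:

```python
from collections import Counter

def exact_match(pred, golds):
    """1 if the predicted string equals any ground truth answer, else 0."""
    return max(int(pred == g) for g in golds)

def token_f1(pred, golds):
    """Word-level F1, taking the best match over the ~3 ground truth answers."""
    best = 0.0
    p_toks = pred.split()
    for g in golds:
        g_toks = g.split()
        overlap = sum((Counter(p_toks) & Counter(g_toks)).values())
        if overlap == 0:
            continue
        precision = overlap / len(p_toks)
        recall = overlap / len(g_toks)
        best = max(best, 2 * precision * recall / (precision + recall))
    return best

golds = ["Martin Sekulic", "his math teacher Martin Sekulic"]
print(exact_match("Martin Sekulic", golds))              # 1
print(round(token_f1("teacher Martin Sekulic", golds), 3))  # 0.8
```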
1608.07905#23
Machine Comprehension Using Match-LSTM and Answer Pointer
Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.(2016) using logistic regression and manually crafted features.
http://arxiv.org/pdf/1608.07905
Shuohang Wang, Jing Jiang
cs.CL, cs.AI
11 pages; 3 figures
null
cs.CL
20160829
20161107
[ { "id": "1602.04341" }, { "id": "1607.02250" }, { "id": "1610.09996" } ]
1608.07905
24
[Figure 2 (attention heatmaps): three panels over the same paragraph about Tesla's schooling in Karlovac, for questions whose answers are "German," "Martin Sekulić," and "attend school at the Higher Real Gymnasium."]

Figure 2: Visualization of the attention weights α for three questions associated with the same passage.
1608.07905#24
Machine Comprehension Using Match-LSTM and Answer Pointer
Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.(2016) using logistic regression and manually crafted features.
http://arxiv.org/pdf/1608.07905
Shuohang Wang, Jing Jiang
cs.CL, cs.AI
11 pages; 3 figures
null
cs.CL
20160829
20161107
[ { "id": "1602.04341" }, { "id": "1607.02250" }, { "id": "1610.09996" } ]
1608.07905
25
formed the logistic regression model by Rajpurkar et al. (2016), which relies on carefully designed features. Furthermore, our boundary model has outperformed the sequence model, achieving an exact match score of 61.1% and an F1 score of 71.2%. In particular, in terms of the exact match score, the boundary model has a clear advantage over the sequence model. The improvement of our models over the logistic regression model shows that our end-to-end neural network models without much feature engineering are very effective on this task and this dataset. Considering the effectiveness of the boundary model, we explore it further. Observing that most answers are relatively short spans, we simply limit the largest predicted span to no more than 15 tokens and conduct the experiment with this constrained span search. This results in a 1.5% improvement in F1 on the development data, outperforming the DCR model (Yu et al., 2016), which also introduced language features such as POS and NE into their model. Besides, we tried increasing the memory dimension l in the model, adding a bi-directional pre-processing LSTM, or adding a bi-directional Ans-Ptr. The improvement on the development data using the
1608.07905#25
Machine Comprehension Using Match-LSTM and Answer Pointer
Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.(2016) using logistic regression and manually crafted features.
http://arxiv.org/pdf/1608.07905
Shuohang Wang, Jing Jiang
cs.CL, cs.AI
11 pages; 3 figures
null
cs.CL
20160829
20161107
[ { "id": "1602.04341" }, { "id": "1607.02250" }, { "id": "1610.09996" } ]
1608.07905
26
the memory dimension l in the model, adding a bi-directional pre-processing LSTM, or adding a bi-directional Ans-Ptr. The improvement on the development data from the first two methods is quite small, but by adding Bi-Ans-Ptr with a bi-directional pre-processing LSTM, we gain a 1.2% improvement in F1. Finally, we explore an ensemble method by simply computing the product of the boundary probabilities collected from 5 boundary models and then searching for the most likely span with no more than 15 tokens. This ensemble method achieved the best performance shown in the table.
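The constrained span search and the product-of-probabilities ensemble described above can be sketched as follows (a brute-force illustration; function and variable names are our own):

```python
import numpy as np

def best_span(p_start, p_end, max_len=15):
    """Brute-force search for the span (i, j), i <= j, of at most
    `max_len` tokens that maximizes p_start[i] * p_end[j]."""
    n = len(p_start)
    best_score, best = 0.0, (0, 0)
    for i in range(n):
        for j in range(i, min(i + max_len, n)):  # span length j - i + 1 <= max_len
            score = p_start[i] * p_end[j]
            if score > best_score:
                best_score, best = score, (i, j)
    return best, best_score

def ensemble_span(start_probs, end_probs, max_len=15):
    """Multiply the boundary probabilities from several models elementwise,
    then run the same constrained search (a sketch of the 5-model ensemble)."""
    p_start = np.prod(np.stack(start_probs), axis=0)
    p_end = np.prod(np.stack(end_probs), axis=0)
    return best_span(p_start, p_end, max_len)

p_s = np.array([0.1, 0.6, 0.2, 0.1])
p_e = np.array([0.1, 0.1, 0.7, 0.1])
print(best_span(p_s, p_e)[0])  # (1, 2)
```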
1608.07905#26
Machine Comprehension Using Match-LSTM and Answer Pointer
Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.(2016) using logistic regression and manually crafted features.
http://arxiv.org/pdf/1608.07905
Shuohang Wang, Jing Jiang
cs.CL, cs.AI
11 pages; 3 figures
null
cs.CL
20160829
20161107
[ { "id": "1602.04341" }, { "id": "1607.02250" }, { "id": "1610.09996" } ]
1608.07905
27
# 3.4 FURTHER ANALYSES

To better understand the strengths and weaknesses of our models, we perform some further analyses of the results below. First, we suspect that longer answers are harder to predict. To verify this hypothesis, we analysed the performance in terms of both exact match and F1 score with respect to the answer length on the development set. For example, for questions whose answers contain more than 9 tokens, the F1 score of the boundary model drops to around 55% and the exact match score drops to only around 30%, compared to an F1 score and exact match score of close to 72% and 67%, respectively, for questions with single-token answers. This supports our hypothesis.
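The answer-length breakdown amounts to bucketing per-example scores by the length of the ground truth answer; a minimal sketch, assuming per-example F1 values are already computed, with lengths above 9 tokens pooled into one bucket as in the analysis:

```python
from collections import defaultdict

def f1_by_answer_length(examples, bucket_max=9):
    """Average F1 grouped by ground-truth answer length in tokens;
    answers longer than `bucket_max` tokens share one '>9'-style bucket."""
    sums, counts = defaultdict(float), defaultdict(int)
    for gold_answer, f1 in examples:
        n = len(gold_answer.split())
        key = n if n <= bucket_max else f">{bucket_max}"
        sums[key] += f1
        counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}

# Toy (gold answer, per-example F1) pairs, not real development-set data.
examples = [("Tesla", 0.9), ("Martin Sekulic", 0.8),
            ("a span of more than nine tokens a b c d", 0.5)]
print(f1_by_answer_length(examples))
```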
1608.07905#27
Machine Comprehension Using Match-LSTM and Answer Pointer
Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.(2016) using logistic regression and manually crafted features.
http://arxiv.org/pdf/1608.07905
Shuohang Wang, Jing Jiang
cs.CL, cs.AI
11 pages; 3 figures
null
cs.CL
20160829
20161107
[ { "id": "1602.04341" }, { "id": "1607.02250" }, { "id": "1610.09996" } ]
1608.07905
28
Next, we analyze the performance of our models on different groups of questions. We use a crude way to split the questions into different groups based on a set of question words we have defined, including "what," "how," "who," "when," "which," "where," and "why." These different question words roughly refer to questions with different types of answers. For example, "when" questions look for temporal expressions as answers, whereas "where" questions look for locations as answers. According to the performance on the development data set, our models work the best for "when" questions. This may be because in this dataset temporal expressions are relatively easier to recognize. Other groups of questions whose answers are noun phrases, such as "what" questions, "which" questions and "where" questions, also get relatively better results. On the other hand, "why" questions are the hardest to answer. This is not surprising because the answers to "why" questions can be very diverse, and they are not restricted to any certain type of phrases.
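The crude question grouping described above amounts to scanning each question for a fixed list of question words; a minimal sketch (the 'other' fallback label is our own addition):

```python
QUESTION_WORDS = ["what", "how", "who", "when", "which", "where", "why"]

def question_type(question):
    """Crudely assign a question to a group by the first question word
    found among its tokens, or 'other' if none matches."""
    tokens = question.lower().split()
    for qw in QUESTION_WORDS:
        if qw in tokens:
            return qw
    return "other"

print(question_type("In what language were the classes given?"))  # what
print(question_type("Why was Tesla in Karlovac?"))                # why
```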
1608.07905#28
Machine Comprehension Using Match-LSTM and Answer Pointer
Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.(2016) using logistic regression and manually crafted features.
http://arxiv.org/pdf/1608.07905
Shuohang Wang, Jing Jiang
cs.CL, cs.AI
11 pages; 3 figures
null
cs.CL
20160829
20161107
[ { "id": "1602.04341" }, { "id": "1607.02250" }, { "id": "1610.09996" } ]
1608.07905
29
Finally, we would like to check whether the attention mechanism used in the match-LSTM layer is effective in helping the model locate the answer. We show the attention weights α in Figure 2. In the figure, the darker the color, the higher the weight. We can see that some words have been well aligned based on the attention weights. For example, the word "German" in the passage is aligned well to the word "language" in the first question, and the model successfully predicts "German" as the answer to the question. For the question word "who" in the second question, the word "teacher" actually receives a relatively high attention weight, and the model has predicted the phrase "Martin Sekulic" after that as the answer, which is correct. For the last question, which starts with "why", the attention weights are more evenly distributed and it is not clear which words have been aligned to "why".

# 4 RELATED WORK

Machine comprehension of text has gained much attention in recent years, and increasingly researchers are building data-driven, end-to-end neural network models for the task. We will first review the recently released datasets and then some end-to-end models on this task.

# 4.1 DATASETS
1608.07905#29
Machine Comprehension Using Match-LSTM and Answer Pointer
Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.(2016) using logistic regression and manually crafted features.
http://arxiv.org/pdf/1608.07905
Shuohang Wang, Jing Jiang
cs.CL, cs.AI
11 pages; 3 figures
null
cs.CL
20160829
20161107
[ { "id": "1602.04341" }, { "id": "1607.02250" }, { "id": "1610.09996" } ]
1608.07905
30
A number of datasets for studying machine comprehension were created in Cloze style by removing a single token from a sentence in the original corpus, and the task is to predict the missing word. For example, Hermann et al. (2015) created questions in Cloze style from CNN and Daily Mail highlights. Hill et al. (2016) created the Children’s Book Test dataset, which is based on children’s stories. Cui et al. (2016) released two similar datasets in Chinese, the People Daily dataset and the Children’s Fairy Tale dataset. Instead of creating questions in Cloze style, a number of other datasets rely on human annotators to create real questions. Richardson et al. (2013) created the well-known MCTest dataset and Tapaswi et al. (2016) created the MovieQA dataset. In these datasets, candidate answers are provided for each question. Similar to these two datasets, the SQuAD dataset (Rajpurkar et al., 2016) was also created by human annotators. Different from the previous two, however, the SQuAD dataset does not provide candidate answers, and thus all possible subsequences from the given passage have to be considered as candidate answers.
1608.07905#30
Machine Comprehension Using Match-LSTM and Answer Pointer
Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.(2016) using logistic regression and manually crafted features.
http://arxiv.org/pdf/1608.07905
Shuohang Wang, Jing Jiang
cs.CL, cs.AI
11 pages; 3 figures
null
cs.CL
20160829
20161107
[ { "id": "1602.04341" }, { "id": "1607.02250" }, { "id": "1610.09996" } ]
1608.07905
31
Besides the datasets above, there are also a few other datasets created for machine comprehension, such as the WikiReading dataset (Hewlett et al., 2016) and the bAbI dataset (Weston et al., 2016), but they are quite different in nature from the datasets above.

# 4.2 END-TO-END NEURAL NETWORK MODELS FOR MACHINE COMPREHENSION

There have been a number of studies proposing end-to-end neural network models for machine comprehension. A common approach is to use recurrent neural networks (RNNs) to process the given text and the question in order to predict or generate the answers (Hermann et al., 2015). Attention mechanisms are also widely used on top of RNNs in order to match the question with the given passage (Hermann et al., 2015; Chen et al., 2016). Given that answers often come from the given passage, Pointer Network has been adopted in a few studies in order to copy tokens from the given passage as answers (Kadlec et al., 2016; Trischler et al., 2016). Compared with existing work, we use match-LSTM to match a question and a given passage, and we use Pointer Network in a different way such that we can generate answers that contain multiple tokens from the given passage.
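To make the copying idea concrete, here is a minimal NumPy sketch of one pointer-network step in the style of Vinyals et al. (2015): an additive attention scores every passage position, and the softmax over positions is itself the output distribution, so only input tokens can be produced. The parameters v, W_h, W_q are illustrative placeholders, here filled with random values:

```python
import numpy as np

def pointer_distribution(query, passage_states, v, W_h, W_q):
    """One pointer-network step: additive attention scores each passage
    position, and the softmax over positions IS the output distribution,
    so the model can only emit tokens that occur in the input."""
    # scores[j] = v . tanh(W_h h_j + W_q q)
    scores = np.tanh(passage_states @ W_h.T + query @ W_q.T) @ v
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

rng = np.random.default_rng(0)
d = 4
H = rng.normal(size=(6, d))   # hidden states for 6 passage tokens
q = rng.normal(size=d)        # decoder query state
probs = pointer_distribution(q, H, rng.normal(size=d),
                             rng.normal(size=(d, d)), rng.normal(size=(d, d)))
print(probs.shape)  # (6,)
```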
1608.07905#31
Machine Comprehension Using Match-LSTM and Answer Pointer
Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.(2016) using logistic regression and manually crafted features.
http://arxiv.org/pdf/1608.07905
Shuohang Wang, Jing Jiang
cs.CL, cs.AI
11 pages; 3 figures
null
cs.CL
20160829
20161107
[ { "id": "1602.04341" }, { "id": "1607.02250" }, { "id": "1610.09996" } ]
1608.07905
32
Memory Networks (Weston et al., 2015) have also been applied to machine comprehension (Sukhbaatar et al., 2015; Kumar et al., 2016; Hill et al., 2016), but their scalability when applied to a large dataset is still an issue. In this work, we did not consider memory networks for the SQuAD dataset.

# 5 CONCLUSIONS

In this paper, we developed two models for the machine comprehension problem defined in the Stanford Question Answering Dataset (SQuAD), both making use of match-LSTM and Pointer Network. Experiments on the SQuAD dataset showed that our second model, the boundary model, could achieve an exact match score of 67.6% and an F1 score of 77% on the test dataset, which is better than our sequence model and Rajpurkar et al. (2016)'s feature-engineered model. In the future, we plan to look further into the different types of questions and focus on those questions which currently have low performance, such as the "why" questions. We also plan to test how our models could be applied to other machine comprehension datasets.

# 6 ACKNOWLEDGMENTS

We thank Pranav Rajpurkar for testing our model on the hidden test dataset and Percy Liang for helping us with the Dockerfile for CodaLab.

# REFERENCES
1608.07905#32
Machine Comprehension Using Match-LSTM and Answer Pointer
Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.(2016) using logistic regression and manually crafted features.
http://arxiv.org/pdf/1608.07905
Shuohang Wang, Jing Jiang
cs.CL, cs.AI
11 pages; 3 figures
null
cs.CL
20160829
20161107
[ { "id": "1602.04341" }, { "id": "1607.02250" }, { "id": "1610.09996" } ]
1608.07905
33
Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the CNN/Daily Mail reading comprehension task. In Proceedings of the Conference on Association for Computational Linguistics, 2016.

Yiming Cui, Ting Liu, Zhipeng Chen, Shijin Wang, and Guoping Hu. Consensus attention-based neural networks for Chinese reading comprehension. arXiv preprint arXiv:1607.02250, 2016.

Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the Conference on Association for Computational Linguistics, 2016.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Proceedings of the Conference on Advances in Neural Information Processing Systems, pp. 1693–1701, 2015.
1608.07905#33
Machine Comprehension Using Match-LSTM and Answer Pointer
Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.(2016) using logistic regression and manually crafted features.
http://arxiv.org/pdf/1608.07905
Shuohang Wang, Jing Jiang
cs.CL, cs.AI
11 pages; 3 figures
null
cs.CL
20160829
20161107
[ { "id": "1602.04341" }, { "id": "1607.02250" }, { "id": "1610.09996" } ]
1608.07905
34
Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey, and David Berthelot. WIKIREADING: A novel large-scale language understanding task over Wikipedia. In Proceedings of the Conference on Association for Computational Linguistics, 2016.

Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The Goldilocks principle: Reading children’s books with explicit memory representations. In Proceedings of the International Conference on Learning Representations, 2016.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.

Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. Text understanding with the attention sum reader network. In Proceedings of the Conference on Association for Computational Linguistics, 2016.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations, 2015.
1608.07905#34
Machine Comprehension Using Match-LSTM and Answer Pointer
Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.(2016) using logistic regression and manually crafted features.
http://arxiv.org/pdf/1608.07905
Shuohang Wang, Jing Jiang
cs.CL, cs.AI
11 pages; 3 figures
null
cs.CL
20160829
20161107
[ { "id": "1602.04341" }, { "id": "1607.02250" }, { "id": "1610.09996" } ]
1608.07905
35
Ankit Kumar, Ozan Irsoy, Jonathan Su, James Bradbury, Robert English, Brian Pierce, Peter Ondruska, Ishaan Gulrajani, and Richard Socher. Ask me anything: Dynamic memory networks for natural language processing. In Proceedings of the International Conference on Machine Learning, 2016.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. GloVe: Global vectors for word representation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2014.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2016.

Matthew Richardson, Christopher J.C. Burges, and Erin Renshaw. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2013.

Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Proceedings of the Conference on Advances in Neural Information Processing Systems, 2015.
1608.07905#35
Machine Comprehension Using Match-LSTM and Answer Pointer
Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.(2016) using logistic regression and manually crafted features.
http://arxiv.org/pdf/1608.07905
Shuohang Wang, Jing Jiang
cs.CL, cs.AI
11 pages; 3 figures
null
cs.CL
20160829
20161107
[ { "id": "1602.04341" }, { "id": "1607.02250" }, { "id": "1610.09996" } ]
1608.07905
36
Makarand Tapaswi, Yukun Zhu, Rainer Stiefelhagen, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. MovieQA: Understanding stories in movies through question-answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.

Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehension with the EpiReader. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2016.

Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In Proceedings of the Conference on Advances in Neural Information Processing Systems, 2015.

Shuohang Wang and Jing Jiang. Learning natural language inference with LSTM. In Proceedings of the Conference on the North American Chapter of the Association for Computational Linguistics, 2016.

Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In Proceedings of the International Conference on Learning Representations, 2015.

Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. Towards AI-complete question answering: A set of prerequisite toy tasks. In Proceedings of the International Conference on Learning Representations, 2016.
1608.07905#36
Machine Comprehension Using Match-LSTM and Answer Pointer
Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.(2016) using logistic regression and manually crafted features.
http://arxiv.org/pdf/1608.07905
Shuohang Wang, Jing Jiang
cs.CL, cs.AI
11 pages; 3 figures
null
cs.CL
20160829
20161107
[ { "id": "1602.04341" }, { "id": "1607.02250" }, { "id": "1610.09996" } ]
1608.07905
37
Wenpeng Yin, Sebastian Ebert, and Hinrich Schütze. Attention-based convolutional neural network for machine comprehension. arXiv preprint arXiv:1602.04341, 2016. Yang Yu, Wei Zhang, Kazi Hasan, Mo Yu, Bing Xiang, and Bowen Zhou. End-to-end answer chunk extraction and ranking for reading comprehension. arXiv preprint arXiv:1610.09996, 2016. # Under review as a conference paper at ICLR 2017 [Figure 3: F1 score and Exact match of the sequence (s), boundary (b), and ensemble (e) models, broken down by answer length and question type, together with the counts of answer lengths and question types.]
1608.07905#37
Machine Comprehension Using Match-LSTM and Answer Pointer
Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.(2016) using logistic regression and manually crafted features.
http://arxiv.org/pdf/1608.07905
Shuohang Wang, Jing Jiang
cs.CL, cs.AI
11 pages; 3 figures
null
cs.CL
20160829
20161107
[ { "id": "1602.04341" }, { "id": "1607.02250" }, { "id": "1610.09996" } ]
1608.07905
38
Figure 3: Performance breakdown by answer lengths and question types. Top: Plot (1) shows the performance of our two models (where s refers to the sequence model, b refers to the boundary model, and e refers to the ensemble boundary model) over answers with different lengths. Plot (2) shows the numbers of answers with different lengths. Bottom: Plot (3) shows the performance of our two models on different types of questions. Plot (4) shows the numbers of different types of questions. # A APPENDIX We show the performance breakdown by answer lengths and question types for our sequence model, boundary model and the ensemble model in Figure 3.
1608.07905#38
Machine Comprehension Using Match-LSTM and Answer Pointer
Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.(2016) using logistic regression and manually crafted features.
http://arxiv.org/pdf/1608.07905
Shuohang Wang, Jing Jiang
cs.CL, cs.AI
11 pages; 3 figures
null
cs.CL
20160829
20161107
[ { "id": "1602.04341" }, { "id": "1607.02250" }, { "id": "1610.09996" } ]
1608.06993
1
# Abstract Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet.
1608.06993#1
Densely Connected Convolutional Networks
Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet .
http://arxiv.org/pdf/1608.06993
Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
cs.CV, cs.LG
CVPR 2017
null
cs.CV
20160825
20180128
[ { "id": "1605.07716" }, { "id": "1605.07146" }, { "id": "1603.08029" }, { "id": "1607.01097" }, { "id": "1604.03640" }, { "id": "1511.06421" }, { "id": "1605.07648" }, { "id": "1707.06990" } ]
1608.06993
2
# 1. Introduction Convolutional neural networks (CNNs) have become the dominant machine learning approach for visual object recognition. Although they were originally introduced over 20 years ago [18], improvements in computer hardware and network structure have enabled the training of truly deep CNNs only recently. The original LeNet5 [19] consisted of 5 layers, VGG featured 19 [29], and only last year Highway
1608.06993#2
Densely Connected Convolutional Networks
Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet .
http://arxiv.org/pdf/1608.06993
Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
cs.CV, cs.LG
CVPR 2017
null
cs.CV
20160825
20180128
[ { "id": "1605.07716" }, { "id": "1605.07146" }, { "id": "1603.08029" }, { "id": "1607.01097" }, { "id": "1604.03640" }, { "id": "1511.06421" }, { "id": "1605.07648" }, { "id": "1707.06990" } ]
1608.06993
3
Figure 1: A 5-layer dense block with a growth rate of k = 4. Each layer takes all preceding feature-maps as input. Networks [34] and Residual Networks (ResNets) [11] have surpassed the 100-layer barrier. As CNNs become increasingly deep, a new research problem emerges: as information about the input or gradient passes through many layers, it can vanish and “wash out” by the time it reaches the end (or beginning) of the network. Many recent publications address this or related problems. ResNets [11] and Highway Networks [34] bypass signal from one layer to the next via identity connections. Stochastic depth [13] shortens ResNets by randomly dropping layers during training to allow better information and gradient flow. FractalNets [17] repeatedly combine several parallel layer sequences with different numbers of convolutional blocks to obtain a large nominal depth, while maintaining many short paths in the network. Although these different approaches vary in network topology and training procedure, they all share a key characteristic: they create short paths from early layers to later layers. ∗Authors contributed equally
1608.06993#3
Densely Connected Convolutional Networks
Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet .
http://arxiv.org/pdf/1608.06993
Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
cs.CV, cs.LG
CVPR 2017
null
cs.CV
20160825
20180128
[ { "id": "1605.07716" }, { "id": "1605.07146" }, { "id": "1603.08029" }, { "id": "1607.01097" }, { "id": "1604.03640" }, { "id": "1511.06421" }, { "id": "1605.07648" }, { "id": "1707.06990" } ]
1608.06993
4
In this paper, we propose an architecture that distills this insight into a simple connectivity pattern: to ensure maximum information flow between layers in the network, we connect all layers (with matching feature-map sizes) directly with each other. To preserve the feed-forward nature, each layer obtains additional inputs from all preceding layers and passes on its own feature-maps to all subsequent layers. Figure 1 illustrates this layout schematically. Crucially, in contrast to ResNets, we never combine features through summation before they are passed into a layer; instead, we combine features by concatenating them. Hence, the ℓth layer has ℓ inputs, consisting of the feature-maps of all preceding convolutional blocks. Its own feature-maps are passed on to all L − ℓ subsequent layers. This introduces L(L+1)/2 connections in an L-layer network, instead of just L, as in traditional architectures. Because of its dense connectivity pattern, we refer to our approach as Dense Convolutional Network (DenseNet).
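As an illustrative aside (not from the paper's code release), the connection count and per-layer input width implied by this pattern can be sketched in a few lines of stdlib Python; `dense_block_channels` and `num_direct_connections` are hypothetical helper names:

```python
# Minimal sketch, assuming growth rate k and initial channel count k0
# (the names and the example values below are illustrative, not the paper's).

def dense_block_channels(num_layers, k0, k):
    """Input channel count seen by each layer when every layer
    concatenates all preceding feature-maps and emits k new ones."""
    return [k0 + i * k for i in range(num_layers)]

def num_direct_connections(L):
    """L(L+1)/2 direct connections in an L-layer densely connected block,
    versus L in a traditional feed-forward chain."""
    return L * (L + 1) // 2

print(dense_block_channels(5, 16, 12))  # [16, 28, 40, 52, 64]
print(num_direct_connections(5))        # 15
```

The linear channel growth is why a small growth rate k suffices: each layer only needs to read the accumulated state, not re-learn it.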
1608.06993#4
Densely Connected Convolutional Networks
Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet .
http://arxiv.org/pdf/1608.06993
Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
cs.CV, cs.LG
CVPR 2017
null
cs.CV
20160825
20180128
[ { "id": "1605.07716" }, { "id": "1605.07146" }, { "id": "1603.08029" }, { "id": "1607.01097" }, { "id": "1604.03640" }, { "id": "1511.06421" }, { "id": "1605.07648" }, { "id": "1707.06990" } ]
1608.06993
5
A possibly counter-intuitive effect of this dense connectivity pattern is that it requires fewer parameters than traditional convolutional networks, as there is no need to relearn redundant feature-maps. Traditional feed-forward architectures can be viewed as algorithms with a state, which is passed on from layer to layer. Each layer reads the state from its preceding layer and writes to the subsequent layer. It changes the state but also passes on information that needs to be preserved. ResNets [11] make this information preservation explicit through additive identity transformations. Recent variations of ResNets [13] show that many layers contribute very little and can in fact be randomly dropped during training. This makes the state of ResNets similar to (unrolled) recurrent neural networks [21], but the number of parameters of ResNets is substantially larger because each layer has its own weights. Our proposed DenseNet architecture explicitly differentiates between information that is added to the network and information that is preserved. DenseNet layers are very narrow (e.g., 12 filters per layer), adding only a small set of feature-maps to the “collective knowledge” of the network and keeping the remaining feature-maps unchanged; the final classifier makes a decision based on all feature-maps in the network.
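A toy sketch (stdlib Python, not the paper's implementation; `dense_block` and the stand-in `H` are illustrative names) of how each layer appends a small set of new features while all earlier feature-maps are kept and concatenated forward:

```python
# Toy model of dense connectivity: feature-maps are plain lists, and each
# layer H_l maps the concatenation of all preceding feature-maps to a
# small set of new features (a stand-in for a convolutional block).

def dense_block(x0, layers):
    """x0: initial feature list; layers: functions H_l, each mapping the
    concatenated feature list [x_0 .. x_{l-1}] to new features."""
    features = [x0]
    for H in layers:
        concatenated = [f for fmap in features for f in fmap]
        features.append(H(concatenated))   # add new features, keep the rest
    return features                        # the classifier would see all of these

# A dummy H_l emitting k = 2 new "features" per layer:
H = lambda xs: [sum(xs), len(xs)]
out = dense_block([1.0, 2.0], [H, H, H])
print([len(f) for f in out])  # [2, 2, 2, 2]: each layer adds only k features
```

Earlier feature-maps are never overwritten, only extended, which mirrors the "added vs. preserved information" distinction in the text.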
1608.06993#5
Densely Connected Convolutional Networks
Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet .
http://arxiv.org/pdf/1608.06993
Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
cs.CV, cs.LG
CVPR 2017
null
cs.CV
20160825
20180128
[ { "id": "1605.07716" }, { "id": "1605.07146" }, { "id": "1603.08029" }, { "id": "1607.01097" }, { "id": "1604.03640" }, { "id": "1511.06421" }, { "id": "1605.07648" }, { "id": "1707.06990" } ]
1608.06993
6
Besides better parameter efficiency, one big advantage of DenseNets is their improved flow of information and gradients throughout the network, which makes them easy to train. Each layer has direct access to the gradients from the loss function and the original input signal, leading to an implicit deep supervision [20]. This helps training of deeper network architectures. Further, we also observe that dense connections have a regularizing effect, which reduces overfitting on tasks with smaller training set sizes. We evaluate DenseNets on four highly competitive benchmark datasets (CIFAR-10, CIFAR-100, SVHN, and ImageNet). Our models tend to require far fewer parameters than existing algorithms with comparable accuracy. Further, we significantly outperform the current state-of-the-art results on most of the benchmark tasks. # 2. Related Work The exploration of network architectures has been a part of neural network research since their initial discovery. The recent resurgence in popularity of neural networks has also revived this research domain. The increasing number of layers in modern networks amplifies the differences between architectures and motivates the exploration of different connectivity patterns and the revisiting of old research ideas.
1608.06993#6
Densely Connected Convolutional Networks
Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet .
http://arxiv.org/pdf/1608.06993
Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
cs.CV, cs.LG
CVPR 2017
null
cs.CV
20160825
20180128
[ { "id": "1605.07716" }, { "id": "1605.07146" }, { "id": "1603.08029" }, { "id": "1607.01097" }, { "id": "1604.03640" }, { "id": "1511.06421" }, { "id": "1605.07648" }, { "id": "1707.06990" } ]
1608.06993
7
A cascade structure similar to our proposed dense network layout has already been studied in the neural networks literature in the 1980s [3]. Their pioneering work focuses on fully connected multi-layer perceptrons trained in a layer-by-layer fashion. More recently, fully connected cascade networks to be trained with batch gradient descent were proposed [40]. Although effective on small datasets, this approach only scales to networks with a few hundred parameters. In [9, 23, 31, 41], utilizing multi-level features in CNNs through skip-connections has been found to be effective for various vision tasks. Parallel to our work, [1] derived a purely theoretical framework for networks with cross-layer connections similar to ours.
1608.06993#7
Densely Connected Convolutional Networks
Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet .
http://arxiv.org/pdf/1608.06993
Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
cs.CV, cs.LG
CVPR 2017
null
cs.CV
20160825
20180128
[ { "id": "1605.07716" }, { "id": "1605.07146" }, { "id": "1603.08029" }, { "id": "1607.01097" }, { "id": "1604.03640" }, { "id": "1511.06421" }, { "id": "1605.07648" }, { "id": "1707.06990" } ]
1608.06993
8
Highway Networks [34] were amongst the first architectures that provided a means to effectively train end-to-end networks with more than 100 layers. Using bypassing paths along with gating units, Highway Networks with hundreds of layers can be optimized without difficulty. The bypassing paths are presumed to be the key factor that eases the training of these very deep networks. This point is further supported by ResNets [11], in which pure identity mappings are used as bypassing paths. ResNets have achieved impressive, record-breaking performance on many challenging image recognition, localization, and detection tasks, such as ImageNet and COCO object detection [11]. Recently, stochastic depth was proposed as a way to successfully train a 1202-layer ResNet [13]. Stochastic depth improves the training of deep residual networks by dropping layers randomly during training. This shows that not all layers may be needed and highlights that there is a great amount of redundancy in deep (residual) networks. Our paper was partly inspired by that observation. ResNets with pre-activation also facilitate the training of state-of-the-art networks with > 1000 layers [12].
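The stochastic-depth idea can be sketched as follows (an illustrative toy, not the implementation of [13]; `stochastic_depth_forward` and the survival probability value are assumed names):

```python
import random

def stochastic_depth_forward(x, blocks, survival_prob=0.8, training=True):
    """Apply residual blocks, randomly skipping each one during training."""
    for block in blocks:
        if training and random.random() > survival_prob:
            continue          # drop this layer: only the identity path remains
        x = x + block(x)      # usual residual update when the block survives
    return x

# At test time every block survives, so the result is deterministic:
print(stochastic_depth_forward(0.0, [lambda v: 1.0] * 3, training=False))  # 3.0
```

Because the identity shortcut carries the signal past a dropped block, training still converges, which is the redundancy observation the paragraph refers to.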
1608.06993#8
Densely Connected Convolutional Networks
Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet .
http://arxiv.org/pdf/1608.06993
Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
cs.CV, cs.LG
CVPR 2017
null
cs.CV
20160825
20180128
[ { "id": "1605.07716" }, { "id": "1605.07146" }, { "id": "1603.08029" }, { "id": "1607.01097" }, { "id": "1604.03640" }, { "id": "1511.06421" }, { "id": "1605.07648" }, { "id": "1707.06990" } ]
1608.06993
9
An orthogonal approach to making networks deeper (e.g., with the help of skip connections) is to increase the network width. The GoogLeNet [36, 37] uses an “Inception module” which concatenates feature-maps produced by filters of different sizes. In [38], a variant of ResNets with wide generalized residual blocks was proposed. In fact, simply increasing the number of filters in each layer of
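The Inception-style widening can be sketched as follows (a stdlib-only toy, not GoogLeNet's code; each "branch" stands in for a convolution of a different filter size):

```python
# Toy sketch of an Inception-style module [36, 37]: run several branches
# in parallel on the same input and concatenate their outputs.

def inception_module(x, branches):
    """Apply each branch to x and concatenate the results."""
    out = []
    for branch in branches:
        out.extend(branch(x))
    return out

# Stand-in branches producing 1 and 2 "features" respectively:
branches = [lambda v: [sum(v)], lambda v: [max(v), min(v)]]
print(inception_module([3, 1, 2], branches))  # [6, 3, 1]
```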
1608.06993#9
Densely Connected Convolutional Networks
Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet .
http://arxiv.org/pdf/1608.06993
Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
cs.CV, cs.LG
CVPR 2017
null
cs.CV
20160825
20180128
[ { "id": "1605.07716" }, { "id": "1605.07146" }, { "id": "1603.08029" }, { "id": "1607.01097" }, { "id": "1604.03640" }, { "id": "1511.06421" }, { "id": "1605.07648" }, { "id": "1707.06990" } ]
1608.06993
10
Figure 2: A deep DenseNet with three dense blocks. The layers between two adjacent blocks are referred to as transition layers and change feature-map sizes via convolution and pooling. ResNets can improve its performance provided the depth is sufficient [42]. FractalNets also achieve competitive results on several datasets using a wide network structure [17]. Instead of drawing representational power from extremely deep or wide architectures, DenseNets exploit the potential of the network through feature reuse, yielding condensed models that are easy to train and highly parameter-efficient. Concatenating feature-maps learned by different layers increases variation in the input of subsequent layers and improves efficiency. This constitutes a major difference between DenseNets and ResNets. Compared to Inception networks [36, 37], which also concatenate features from different layers, DenseNets are simpler and more efficient. An advantage of ResNets is
1608.06993#10
Densely Connected Convolutional Networks
Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet .
http://arxiv.org/pdf/1608.06993
Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
cs.CV, cs.LG
CVPR 2017
null
cs.CV
20160825
20180128
[ { "id": "1605.07716" }, { "id": "1605.07146" }, { "id": "1603.08029" }, { "id": "1607.01097" }, { "id": "1604.03640" }, { "id": "1511.06421" }, { "id": "1605.07648" }, { "id": "1707.06990" } ]
1608.06993
11
An advantage of ResNets is that the gradient can flow directly through the identity function from later layers to the earlier layers. However, the identity function and the output of H_ℓ are combined by summation, which may impede the information flow in the network.

Dense connectivity. To further improve the information flow between layers we propose a different connectivity pattern: we introduce direct connections from any layer to all subsequent layers. Figure 1 illustrates the layout of the resulting DenseNet schematically. Consequently, the ℓ-th layer receives the feature-maps of all preceding layers, x_0, ..., x_{ℓ−1}, as input; this is formalized in Eq. (2) below.
Compared to Inception networks [36, 37], which also concatenate features from different layers, DenseNets are simpler and more efficient.

There are other notable network architecture innovations which have yielded competitive results. The Network in Network (NIN) [22] structure inserts micro multi-layer perceptrons into the filters of convolutional layers to extract more complicated features. In Deeply Supervised Network (DSN) [20], internal layers are directly supervised by auxiliary classifiers, which can strengthen the gradients received by earlier layers. Ladder Networks [27, 25] introduce lateral connections into autoencoders, producing impressive accuracies on semi-supervised learning tasks. In [39], Deeply-Fused Nets (DFNs) were proposed to improve information flow by combining intermediate layers of different base networks. The augmentation of networks with pathways that minimize reconstruction losses was also shown to improve image classification models [43].

# 3. DenseNets
Consider a single image x_0 that is passed through a convolutional network. The network comprises L layers, each of which implements a non-linear transformation H_ℓ(·), where ℓ indexes the layer. H_ℓ(·) can be a composite function of operations such as Batch Normalization (BN) [14], rectified linear units (ReLU) [6], Pooling [19], or Convolution (Conv). We denote the output of the ℓ-th layer as x_ℓ.

x_ℓ = H_ℓ([x_0, x_1, ..., x_{ℓ−1}]),   (2)

where [x_0, x_1, ..., x_{ℓ−1}] refers to the concatenation of the feature-maps produced in layers 0, ..., ℓ−1. Because of its dense connectivity we refer to this network architecture as Dense Convolutional Network (DenseNet). For ease of implementation, we concatenate the multiple inputs of H_ℓ(·) in Eq. (2) into a single tensor.

Composite function. Motivated by [12], we define H_ℓ(·) as a composite function of three consecutive operations: batch normalization (BN) [14], followed by a rectified linear unit (ReLU) [6] and a 3×3 convolution (Conv).
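To make the channel bookkeeping of Eq. (2) concrete, here is a minimal NumPy sketch (not the paper's implementation): each H_ℓ is stood in for by a random per-pixel linear map (a 1×1 "convolution") that produces k new feature-maps from the concatenation of all earlier outputs. The real H_ℓ is BN-ReLU-Conv(3×3); only the connectivity pattern is modeled here.

```python
import numpy as np

def h(x, k, rng):
    """Stand-in for H_l: a random 1x1 'convolution' (per-pixel linear map)
    turning x of shape (c, h, w) into k feature-maps."""
    c = x.shape[0]
    w = rng.standard_normal((k, c))
    return np.einsum('kc,chw->khw', w, x)

def dense_block(x0, num_layers, k, seed=0):
    """Each layer sees the concatenation of ALL preceding feature-maps (Eq. 2)."""
    rng = np.random.default_rng(seed)
    features = [x0]
    for _ in range(num_layers):
        x = np.concatenate(features, axis=0)   # [x0, x1, ..., x_{l-1}]
        features.append(h(x, k, rng))          # x_l = H_l([...]) adds k maps
    return np.concatenate(features, axis=0)

out = dense_block(np.ones((16, 8, 8)), num_layers=4, k=12)
print(out.shape)  # (16 + 4*12, 8, 8) = (64, 8, 8)
```

Note how the block output carries the input channels plus k maps per layer, which is exactly the growth-rate arithmetic discussed below.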
Pooling layers. The concatenation operation used in Eq. (2) is not viable when the size of feature-maps changes. However, an essential part of convolutional networks is down-sampling layers that change the size of feature-maps. To facilitate down-sampling in our architecture we divide the network into multiple densely connected dense blocks; see Figure 2. We refer to layers between blocks as transition layers, which do convolution and pooling. The transition layers used in our experiments consist of a batch normalization layer and a 1×1 convolutional layer followed by a 2×2 average pooling layer.

ResNets. Traditional convolutional feed-forward networks connect the output of the ℓ-th layer as input to the (ℓ+1)-th layer [16], which gives rise to the following layer transition: x_ℓ = H_ℓ(x_{ℓ−1}). ResNets [11] add a skip-connection that bypasses the non-linear transformations with an identity function:

x_ℓ = H_ℓ(x_{ℓ−1}) + x_{ℓ−1}.   (1)
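A transition layer's two operations can be sketched in a few lines of NumPy (batch normalization omitted; the weights here are random placeholders, not trained parameters): a 1×1 convolution changes the channel count, and 2×2 average pooling with stride 2 halves each spatial dimension.

```python
import numpy as np

def transition(x, out_channels, rng):
    """Sketch of a DenseNet transition layer (BN omitted):
    1x1 convolution to change channels, then 2x2 average pooling, stride 2."""
    c, h, w = x.shape
    weights = rng.standard_normal((out_channels, c))
    x = np.einsum('oc,chw->ohw', weights, x)             # 1x1 conv
    # 2x2 average pooling with stride 2 via reshape-and-mean
    x = x.reshape(out_channels, h // 2, 2, w // 2, 2).mean(axis=(2, 4))
    return x

y = transition(np.ones((64, 32, 32)), 32, np.random.default_rng(0))
print(y.shape)  # (32, 16, 16)
```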
Growth rate. If each function H_ℓ produces k feature-maps, it follows that the ℓ-th layer has k_0 + k × (ℓ−1) input feature-maps, where k_0 is the number of channels in the input layer. An important difference between DenseNet and existing network architectures is that DenseNet can have very narrow layers, e.g., k = 12. We refer to the hyper-parameter k as the growth rate of the network. We show in Section 4 that a relatively small growth rate is sufficient to obtain state-of-the-art results.
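The growth-rate formula above is pure arithmetic, so a tiny helper makes it easy to check (the function name is ours, for illustration):

```python
def input_maps(layer_index, k0, k):
    """Input feature-maps seen by the l-th layer of a dense block:
    k0 channels from the block input plus k from each of the l-1
    preceding layers, i.e. k0 + k * (l - 1)."""
    return k0 + k * (layer_index - 1)

# With k0 = 16 input channels and growth rate k = 12:
print([input_maps(l, 16, 12) for l in range(1, 6)])  # [16, 28, 40, 52, 64]
```

Even with a narrow k = 12, late layers in a deep block see hundreds of input maps, which is what motivates the bottleneck layers below.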
| Layers | Output Size | DenseNet-121 | DenseNet-169 | DenseNet-201 | DenseNet-264 |
|---|---|---|---|---|---|
| Convolution | 112×112 | 7×7 conv, stride 2 | | | |
| Pooling | 56×56 | 3×3 max pool, stride 2 | | | |
| Dense Block (1) | 56×56 | [1×1 conv, 3×3 conv] ×6 | ×6 | ×6 | ×6 |
| Transition Layer (1) | 56×56 → 28×28 | 1×1 conv, then 2×2 average pool, stride 2 | | | |
| Dense Block (2) | 28×28 | [1×1 conv, 3×3 conv] ×12 | ×12 | ×12 | ×12 |
| Transition Layer (2) | 28×28 → 14×14 | 1×1 conv, then 2×2 average pool, stride 2 | | | |
| Dense Block (3) | 14×14 | [1×1 conv, 3×3 conv] ×24 | ×32 | ×48 | ×64 |
| Transition Layer (3) | 14×14 → 7×7 | 1×1 conv, then 2×2 average pool, stride 2 | | | |
| Dense Block (4) | 7×7 | [1×1 conv, 3×3 conv] ×16 | ×32 | ×32 | ×48 |
Table 1: DenseNet architectures for ImageNet. The growth rate for all the networks is k = 32. Note that each "conv" layer shown in the table corresponds to the sequence BN-ReLU-Conv.

One explanation for why a small growth rate suffices is that each layer has access to all the preceding feature-maps in its block and, therefore, to the network's "collective knowledge". One can view the feature-maps as the global state of the network. Each layer adds k feature-maps of its own to this state. The growth rate regulates how much new information each layer contributes to the global state. The global state, once written, can be accessed from everywhere within the network and, unlike in traditional network architectures, there is no need to replicate it from layer to layer.
Bottleneck layers. Although each layer only produces k output feature-maps, it typically has many more inputs. It has been noted in [37, 11] that a 1×1 convolution can be introduced as a bottleneck layer before each 3×3 convolution to reduce the number of input feature-maps, and thus to improve computational efficiency. We find this design especially effective for DenseNet and we refer to our network with such a bottleneck layer, i.e., to the BN-ReLU-Conv(1×1)-BN-ReLU-Conv(3×3) version of H_ℓ, as DenseNet-B. In our experiments, we let each 1×1 convolution produce 4k feature-maps.
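The channel flow through one DenseNet-B layer is easy to see as numbers (the helper below is illustrative, not from the paper): however many maps m arrive, the 1×1 bottleneck cuts them to 4k before the 3×3 convolution produces the layer's k new maps.

```python
def bottleneck_channels(m, k):
    """Channel counts through one DenseNet-B layer: m input maps enter,
    the 1x1 bottleneck reduces them to 4k, and the 3x3 convolution
    emits the k new feature-maps that the layer contributes."""
    after_1x1 = 4 * k
    after_3x3 = k
    return m, after_1x1, after_3x3

# A late layer with m = 496 inputs and growth rate k = 12:
print(bottleneck_channels(496, 12))  # (496, 48, 12)
```

So the expensive 3×3 convolution always operates on 4k channels, regardless of how wide the concatenated input has grown.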
Compression. To further improve model compactness, we can reduce the number of feature-maps at transition layers. If a dense block contains m feature-maps, we let the following transition layer generate ⌊θm⌋ output feature-maps, where 0 < θ ≤ 1 is referred to as the compression factor. When θ = 1, the number of feature-maps across transition layers remains unchanged. We refer to the DenseNet with θ < 1 as DenseNet-C, and we set θ = 0.5 in our experiments. When both the bottleneck and transition layers with θ < 1 are used, we refer to our model as DenseNet-BC.
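The compression rule is a one-liner; a small sketch (function name ours) shows the ⌊θm⌋ floor and the θ = 1 identity case:

```python
import math

def transition_output_maps(m, theta=0.5):
    """A compressed transition layer turns m feature-maps into floor(theta*m),
    with compression factor 0 < theta <= 1; theta = 1 leaves m unchanged."""
    assert 0 < theta <= 1
    return math.floor(theta * m)

print(transition_output_maps(256))        # 128
print(transition_output_maps(250, 0.5))   # 125
print(transition_output_maps(256, 1.0))   # 256
```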
Implementation Details. On all datasets except ImageNet, the DenseNet used in our experiments has three dense blocks that each have an equal number of layers. Before entering the first dense block, a convolution with 16 (or twice the growth rate for DenseNet-BC) output channels is performed on the input images. For convolutional layers with kernel size 3×3, each side of the inputs is zero-padded by one pixel to keep the feature-map size fixed. We use 1×1 convolution followed by 2×2 average pooling as transition layers between two contiguous dense blocks. At the end of the last dense block, a global average pooling is performed and then a softmax classifier is attached. The feature-map sizes in the three dense blocks are 32×32, 16×16, and 8×8, respectively. We experiment with the basic DenseNet structure with configurations {L = 40, k = 12}, {L = 100, k = 12} and {L = 100, k = 24}. For DenseNet-BC, the networks with configurations {L = 100, k = 12}, {L = 250, k = 24} and {L = 190, k = 40} are evaluated.
In our experiments on ImageNet, we use a DenseNet-BC structure with 4 dense blocks on 224×224 input images. The initial convolution layer comprises 2k convolutions of size 7×7 with stride 2; the numbers of feature-maps in all other layers also follow from setting k. The exact network configurations we used on ImageNet are shown in Table 1.

# 4. Experiments

We empirically demonstrate DenseNet's effectiveness on several benchmark datasets and compare with state-of-the-art architectures, especially with ResNet and its variants.
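Putting the pieces together, the channel counts of an ImageNet DenseNet-BC follow mechanically from k, the per-block layer counts, and θ = 0.5. The sketch below traces DenseNet-121; the block sizes 6/12/24/16 and the final feature count of 1024 match the reference implementation, though the helper itself is ours, for illustration:

```python
def densenet_bc_channels(k, block_sizes, theta=0.5):
    """Trace feature-map counts through a DenseNet-BC: the initial
    convolution yields 2k maps, each dense layer adds k maps, and every
    transition (after all blocks but the last) compresses by theta."""
    c = 2 * k
    trace = [c]
    for i, n in enumerate(block_sizes):
        c += n * k                      # n dense layers, each adds k maps
        trace.append(c)
        if i < len(block_sizes) - 1:    # transition layer between blocks
            c = int(theta * c)
            trace.append(c)
    return trace

# DenseNet-121: k = 32, dense blocks of 6, 12, 24, 16 layers
print(densenet_bc_channels(32, [6, 12, 24, 16]))
# [64, 256, 128, 512, 256, 1024, 512, 1024]
```

The final entry, 1024, is the width of the feature vector fed to the classifier after global average pooling.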
| Method | Depth | Params | C10 | C10+ | C100 | C100+ | SVHN |
|---|---|---|---|---|---|---|---|
| Network in Network [22] | - | - | 10.41 | 8.81 | 35.68 | - | 2.35 |
| All-CNN [32] | - | - | 9.08 | 7.25 | - | 33.71 | - |
| Deeply Supervised Net [20] | - | - | 9.69 | 7.97 | - | 34.57 | 1.92 |
| Highway Network [34] | - | - | - | 7.72 | - | 32.39 | - |
| FractalNet [17] | 21 | 38.6M | 10.18 | 5.22 | 35.34 | 23.30 | 2.01 |
| with Dropout/Drop-path | 21 | 38.6M | 7.33 | 4.60 | 28.20 | 23.73 | 1.87 |
| ResNet [11] | 110 | 1.7M | - | 6.61 | - | - | - |
| ResNet (reported by [13]) | 110 | 1.7M | 13.63 | 6.41 | 44.74 | 27.22 | 2.01 |
| ResNet with Stochastic Depth [13] | 110 | 1.7M | 11.66 | 5.23 | 37.80 | 24.58 | 1.75 |
|  | 1202 | 10.2M | - | 4.91 | - | - | - |
| Wide ResNet [42] | 16 | 11.0M | - | 4.81 | - | 22.07 | - |
|  | 28 | 36.5M | - | 4.17 | - | 20.50 | - |
| with Dropout | 16 | 2.7M | - | - | - | - | 1.64 |
| ResNet (pre-activation) [12] | 164 | 1.7M | 11.26∗ | 5.46 | 35.58∗ | 24.33 | - |
|  | 1001 | 10.2M | 10.56∗ | 4.62 | 33.47∗ | 22.71 | - |
| DenseNet (k = 12) | 40 | 1.0M | 7.00 | 5.24 | 27.55 | 24.42 | 1.79 |
| DenseNet (k = 12) | 100 | 7.0M | 5.77 | 4.10 | 23.79 | 20.20 | 1.67 |
| DenseNet (k = 24) | 100 | 27.2M | 5.83 | 3.74 | 23.42 | 19.25 | 1.59 |
| DenseNet-BC (k = 12) | 100 | 0.8M | 5.92 | 4.51 | 24.15 | 22.27 | 1.76 |
| DenseNet-BC (k = 24) | 250 | 15.3M | 5.19 | 3.62 | 19.64 | 17.60 | 1.74 |
| DenseNet-BC (k = 40) | 190 | 25.6M | - | 3.46 | - | 17.18 | - |

Table 2: Error rates (%) on CIFAR and SVHN datasets. k denotes network's growth rate. Results that surpass all competing methods are bold and the overall best results are
CIFAR and SVHN datasets. k denotes network’s growth rate. Results that surpass all competing methods are bold and the overall best results are blue. “+” indicates standard data augmentation (translation and/or mirroring). ∗ indicates results run by ourselves. All the results of DenseNets without data augmentation (C10, C100, SVHN) are obtained using Dropout. DenseNets achieve lower error rates while using fewer parameters than ResNet. Without data augmentation, DenseNet performs better by a large margin.
1608.06993#25
1608.06993
26
# 4.1. Datasets

CIFAR. The two CIFAR datasets [15] consist of colored natural images with 32×32 pixels. CIFAR-10 (C10) consists of images drawn from 10 classes and CIFAR-100 (C100) from 100 classes. The training and test sets contain 50,000 and 10,000 images respectively, and we hold out 5,000 training images as a validation set. We adopt a standard data augmentation scheme (mirroring/shifting) that is widely used for these two datasets [11, 13, 17, 22, 28, 20, 32, 34]. We denote this data augmentation scheme by a “+” mark at the end of the dataset name (e.g., C10+). For preprocessing, we normalize the data using the channel means and standard deviations. For the final run we use all 50,000 training images and report the final test error at the end of training.
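The channel-wise normalization described above can be sketched as follows; the helper name and the flat list of (R, G, B) pixel tuples are illustrative simplifications, not the authors' actual pipeline (which operates on full H×W×3 images).

```python
from statistics import mean, pstdev

def channel_normalize(images, means=None, stds=None):
    """Normalize data per channel using the channel means and standard
    deviations, as in the CIFAR preprocessing described above.

    Statistics are computed over the (training) data when not supplied,
    and returned so the same statistics can be reused at test time.
    """
    if means is None:
        channels = list(zip(*images))  # one flat sequence per channel
        means = [mean(c) for c in channels]
        stds = [pstdev(c) for c in channels]
    normalized = [
        tuple((p - m) / s for p, m, s in zip(px, means, stds))
        for px in images
    ]
    return normalized, means, stds
```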
1608.06993#26
1608.06993
27
SVHN. The Street View House Numbers (SVHN) dataset [24] contains 32×32 colored digit images. There are 73,257 images in the training set, 26,032 images in the test set, and 531,131 images for additional training. Following common practice [7, 13, 20, 22, 30], we use all the training data without any data augmentation, and a validation set with 6,000 images is split from the training set. We select the model with the lowest validation error during training and report the test error. We follow [42] and divide the pixel values by 255 so they are in the [0, 1] range.

ImageNet. The ILSVRC 2012 classification dataset [2] consists of 1.2 million images for training and 50,000 for validation, drawn from 1,000 classes. We adopt the same data augmentation scheme for training images as in [8, 11, 12], and apply a single-crop or 10-crop with size 224×224 at test time. Following [11, 12, 13], we report classification errors on the validation set.

# 4.2. Training
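The SVHN setup above (dividing pixel values by 255 and splitting off a 6,000-image validation set) can be sketched like this; the function name, flat pixel lists, and random split are illustrative assumptions.

```python
import random

def prepare_svhn(images, n_val=6000, seed=0):
    """Sketch of the SVHN preprocessing described above: scale raw
    pixel values into [0, 1] by dividing by 255 (following [42]), then
    hold out a validation split of n_val images from the training set.
    Images are represented as flat pixel lists purely for illustration.
    """
    scaled = [[p / 255 for p in img] for img in images]
    rng = random.Random(seed)
    rng.shuffle(scaled)
    return scaled[n_val:], scaled[:n_val]  # (train, validation)
```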
1608.06993#27
1608.06993
28
# 4.2. Training

All the networks are trained using stochastic gradient descent (SGD). On CIFAR and SVHN we train using batch size 64 for 300 and 40 epochs, respectively. The initial learning rate is set to 0.1, and is divided by 10 at 50% and 75% of the total number of training epochs. On ImageNet, we train models for 90 epochs with a batch size of 256. The learning rate is set to 0.1 initially, and is divided by 10 at epochs 30 and 60. Note that a naive implementation of DenseNet may contain memory inefficiencies. To reduce the memory consumption on GPUs, please refer to our technical report on the memory-efficient implementation of DenseNets [26].

Following [8], we use a weight decay of 10−4 and a Nesterov momentum [35] of 0.9 without dampening. We adopt the weight initialization introduced by [10]. For the three datasets without data augmentation, i.e., C10, C100
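The step learning-rate schedule described above can be expressed as a small function; the signature and milestone tuples are illustrative, not the authors' Torch code.

```python
def learning_rate(epoch, milestones, base_lr=0.1):
    """Step schedule used in the text: start at base_lr and divide by
    10 at each milestone epoch (50% and 75% of training on CIFAR/SVHN;
    epochs 30 and 60 on ImageNet)."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr /= 10
    return lr

CIFAR_MILESTONES = (150, 225)     # 50% and 75% of 300 epochs
IMAGENET_MILESTONES = (30, 60)    # 90-epoch ImageNet schedule
```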
1608.06993#28
1608.06993
29
Model         top-1          top-5
DenseNet-121  25.02 / 23.61  7.71 / 6.66
DenseNet-169  23.80 / 22.08  6.85 / 5.92
DenseNet-201  22.58 / 21.46  6.34 / 5.54
DenseNet-264  22.15 / 20.80  6.12 / 5.29

[Figure 3 plot data omitted: single-crop top-1 validation error of ResNets vs. DenseNets-BC on ImageNet as a function of #parameters (left) and #FLOPs (right).]

Table 3: The top-1 and top-5 error rates on the ImageNet validation set, with single-crop / 10-crop testing.
1608.06993#29
1608.06993
30
Table 3: The top-1 and top-5 error rates on the ImageNet validation set, with single-crop / 10-crop testing. Figure 3: Comparison of the DenseNets and ResNets top-1 error rates (single-crop testing) on the ImageNet validation dataset as a function of learned parameters (left) and FLOPs during test-time (right).

and SVHN, we add a dropout layer [33] after each convolutional layer (except the first one) and set the dropout rate to 0.2. The test errors were only evaluated once for each task and model setting.

# 4.3. Classification Results on CIFAR and SVHN

We train DenseNets with different depths, L, and growth rates, k. The main results on CIFAR and SVHN are shown in Table 2. To highlight general trends, we mark all results that outperform the existing state-of-the-art in boldface and the overall best result in blue.
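The dropout setting above (rate 0.2 after convolutional layers on the non-augmented datasets) corresponds to standard inverted dropout; the stdlib-only sketch below is illustrative, not the authors' Torch implementation.

```python
import random

def dropout(xs, rate=0.2, train=True, rng=None):
    """Inverted dropout with rate 0.2, as used on C10/C100/SVHN.

    During training each unit is zeroed with probability `rate` and the
    kept units are scaled by 1/(1 - rate); at test time the layer is the
    identity, so no rescaling is needed.
    """
    if not train:
        return list(xs)
    rng = rng or random.Random(0)
    keep = 1.0 - rate
    return [x / keep if rng.random() < keep else 0.0 for x in xs]
```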
1608.06993#30
1608.06993
31
Accuracy. Possibly the most noticeable trend originates from the bottom row of Table 2, which shows that DenseNet-BC with L = 190 and k = 40 outperforms the existing state-of-the-art consistently on all the CIFAR datasets. Its error rates of 3.46% on C10+ and 17.18% on C100+ are significantly lower than the error rates achieved by the wide ResNet architecture [42]. Our best results on C10 and C100 (without data augmentation) are even more encouraging: both are close to 30% lower than FractalNet with drop-path regularization [17]. On SVHN, with dropout, the DenseNet with L = 100 and k = 24 also surpasses the current best result achieved by wide ResNet. However, the 250-layer DenseNet-BC does not further improve the performance over its shorter counterpart. This may be explained by the fact that SVHN is a relatively easy task, and extremely deep models may overfit to the training set.
1608.06993#31
1608.06993
32
Parameter Efficiency. The results in Table 2 indicate that DenseNets utilize parameters more efficiently than alternative architectures (in particular, ResNets). The DenseNet-BC with bottleneck structure and dimension reduction at transition layers is particularly parameter-efficient. For example, our 250-layer model only has 15.3M parameters, but it consistently outperforms other models such as FractalNet and Wide ResNets that have more than 30M parameters. We also highlight that DenseNet-BC with L = 100 and k = 12 achieves comparable performance (e.g., 4.51% vs 4.62% error on C10+, 22.27% vs 22.71% error on C100+) to the 1001-layer pre-activation ResNet using 90% fewer parameters. Figure 4 (right panel) shows the training loss and test errors of these two networks on C10+. The 1001-layer deep ResNet converges to a lower training loss value but a similar test error. We analyze this effect in more detail below.
1608.06993#32
1608.06993
33
Overfitting. One positive side-effect of the more efficient use of parameters is a tendency of DenseNets to be less prone to overfitting. We observe that on the datasets without data augmentation, the improvements of DenseNet architectures over prior work are particularly pronounced. On C10, the improvement denotes a 29% relative reduction in error from 7.33% to 5.19%. On C100, the reduction is about 30%, from 28.20% to 19.64%. In our experiments, we observed potential overfitting in a single setting: on C10, a 4× growth of parameters produced by increasing k = 12 to k = 24 led to a modest increase in error from 5.77% to 5.83%. The DenseNet-BC bottleneck and compression layers appear to be an effective way to counter this trend.
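The relative error reductions quoted above are simple arithmetic, which a one-line helper makes explicit:

```python
def relative_reduction(old_err: float, new_err: float) -> float:
    """Relative error reduction, in percent: 100 * (old - new) / old."""
    return 100.0 * (old_err - new_err) / old_err

# C10: 7.33% -> 5.19% is roughly a 29% relative reduction;
# C100: 28.20% -> 19.64% is roughly a 30% relative reduction.
```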
1608.06993#33
1608.06993
34
Capacity. Without compression or bottleneck layers, there is a general trend that DenseNets perform better as L and k increase. We attribute this primarily to the corresponding growth in model capacity. This is best demonstrated by the columns of C10+ and C100+. On C10+, the error drops from 5.24% to 4.10% and finally to 3.74% as the number of parameters increases from 1.0M, over 7.0M, to 27.2M. On C100+, we observe a similar trend. This suggests that DenseNets can utilize the increased representational power of bigger and deeper models. It also indicates that they do not suffer from overfitting or the optimization difficulties of residual networks [11].

# 4.4. Classification Results on ImageNet

We evaluate DenseNet-BC with different depths and growth rates on the ImageNet classification task, and compare it with state-of-the-art ResNet architectures. To ensure a fair comparison between the two architectures, we eliminate all other factors such as differences in data preprocessing and optimization settings by adopting the publicly available Torch implementation for ResNet by [8]1.
1608.06993#34
1608.06993
35
1https://github.com/facebook/fb.resnet.torch

[Figure 4 plot data omitted: parameter-efficiency curves on C10+ for DenseNet variants (left) and DenseNet-BC vs. pre-activation ResNet (middle), and training/test curves of ResNet-1001 (10.2M parameters) vs. DenseNet-BC-100 (0.8M parameters) over 300 epochs (right).]
1608.06993#35
1608.06993
36
Figure 4: Left: Comparison of the parameter efficiency on C10+ between DenseNet variations. Middle: Comparison of the parameter efficiency between DenseNet-BC and (pre-activation) ResNets. DenseNet-BC requires about 1/3 of the parameters of ResNet to achieve comparable accuracy. Right: Training and testing curves of the 1001-layer pre-activation ResNet [12] with more than 10M parameters and a 100-layer DenseNet with only 0.8M parameters.

We simply replace the ResNet model with the DenseNet-BC network, and keep all the experiment settings exactly the same as those used for ResNet.
1608.06993#36
1608.06993
37
We simply replace the ResNet model with the DenseNet-BC network, and keep all the experiment settings exactly the same as those used for ResNet. We report the single-crop and 10-crop validation errors of DenseNets on ImageNet in Table 3. Figure 3 shows the single-crop top-1 validation errors of DenseNets and ResNets as a function of the number of parameters (left) and FLOPs (right). The results presented in the figure reveal that DenseNets perform on par with the state-of-the-art ResNets, whilst requiring significantly fewer parameters and less computation to achieve comparable performance. For example, a DenseNet-201 with 20M parameters yields similar validation error as a 101-layer ResNet with more than 40M parameters. Similar trends can be observed from the right panel, which plots the validation error as a function of the number of FLOPs: a DenseNet that requires as much computation as a ResNet-50 performs on par with a ResNet-101, which requires twice as much computation.
1608.06993#37
1608.06993
38
ResNet architecture (middle). We train multiple small networks with varying depths on C10+ and plot their test accuracies as a function of network parameters. In comparison with other popular network architectures, such as AlexNet [16] or VGG-net [29], ResNets with pre-activation use fewer parameters while typically achieving better results [12]. Hence, we compare DenseNet (k = 12) against this architecture. The training setting for DenseNet is kept the same as in the previous section. The graph shows that DenseNet-BC is consistently the most parameter-efficient variant of DenseNet. Further, to achieve the same level of accuracy, DenseNet-BC only requires around 1/3 of the parameters of ResNets (middle plot). This result is in line with the results on ImageNet presented in Figure 3. The right plot in Figure 4 shows that a DenseNet-BC with only 0.8M trainable parameters is able to achieve accuracy comparable to that of the 1001-layer (pre-activation) ResNet [12] with 10.2M parameters.
It is worth noting that our experimental setup implies that we use hyperparameter settings that are optimized for ResNets but not for DenseNets. It is conceivable that more extensive hyperparameter searches may further improve the performance of DenseNet on ImageNet.

# 5. Discussion

Superficially, DenseNets are quite similar to ResNets: Eq. (2) differs from Eq. (1) only in that the inputs to Hℓ(·) are concatenated instead of summed. However, the implications of this seemingly small modification lead to substantially different behaviors of the two network architectures.

Model compactness. As a direct consequence of the input concatenation, the feature-maps learned by any of the DenseNet layers can be accessed by all subsequent layers. This encourages feature reuse throughout the network, and leads to more compact models.

The left two plots in Figure 4 show the result of an experiment that aims to compare the parameter efficiency of all variants of DenseNets (left) and also a comparable
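The contrast between Eq. (1) and Eq. (2) can be illustrated with a small NumPy sketch. This is a toy model, not the paper's implementation: fixed random linear maps stand in for the composite function Hℓ (really BN-ReLU-Conv), and the channel counts and growth rate k are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_h(in_dim, out_dim):
    # Toy stand-in for H_l: a fixed random linear map.
    w = rng.standard_normal((out_dim, in_dim))
    return lambda x: w @ x

# ResNet (Eq. 1): x_l = H_l(x_{l-1}) + x_{l-1}. The summation forces every
# layer in a block to keep the channel count fixed.
def residual_step(x, h):
    return h(x) + x

# DenseNet (Eq. 2): x_l = H_l([x_0, x_1, ..., x_{l-1}]). Earlier feature-maps
# are concatenated, and each layer adds only k new maps (k = growth rate).
def dense_step(inputs, h):
    return h(np.concatenate(inputs))

k0, k, L = 8, 4, 3             # block input channels, growth rate, layers
feats = [rng.standard_normal(k0)]
for l in range(L):
    h = make_h(k0 + l * k, k)  # layer l sees k0 + l*k channels
    feats.append(dense_step(feats, h))

# The block exposes all k0 + L*k feature-maps to the next transition layer.
print(sum(f.shape[0] for f in feats))  # 8 + 3*4 = 20
```

The point of the sketch is the shape arithmetic: a dense layer's input width grows linearly with depth, while its output stays at k maps, which is why DenseNet layers can be very narrow.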
Implicit Deep Supervision. One explanation for the improved accuracy of dense convolutional networks may be that individual layers receive additional supervision from the loss function through the shorter connections. One can interpret DenseNets as performing a kind of "deep supervision". The benefits of deep supervision have previously been shown in deeply-supervised nets (DSN; [20]), which have classifiers attached to every hidden layer, enforcing the intermediate layers to learn discriminative features.

DenseNets perform a similar deep supervision in an implicit fashion: a single classifier on top of the network provides direct supervision to all layers through at most two or three transition layers. However, the loss function and gradient of DenseNets are substantially less complicated, as the same loss function is shared between all layers.
Stochastic vs. deterministic connection. There is an interesting connection between dense convolutional networks and stochastic depth regularization of residual networks [13]. In stochastic depth, layers in residual networks are randomly dropped, which creates direct connections between the surrounding layers. As the pooling layers are never dropped, the network results in a similar connectivity pattern as DenseNet: there is a small probability for any two layers between the same pooling layers to be directly connected, if all intermediate layers are randomly dropped. Although the methods are ultimately quite different, the DenseNet interpretation of stochastic depth may provide insights into the success of this regularizer.
Feature Reuse. By design, DenseNets allow layers access to feature-maps from all of their preceding layers (although sometimes through transition layers). We conduct an experiment to investigate whether a trained network takes advantage of this opportunity. We first train a DenseNet on C10+ with L = 40 and k = 12. For each convolutional layer ℓ within a block, we compute the average (absolute) weight assigned to connections with layer s. Figure 5 shows a heat-map for all three dense blocks. The average absolute weight serves as a surrogate for the dependency of a convolutional layer on its preceding layers. A red dot in position (ℓ, s) indicates that layer ℓ makes, on average, strong use of feature-maps produced s layers before. Several observations can be made from the plot:

1. All layers spread their weights over many inputs within the same block. This indicates that features extracted by very early layers are, indeed, directly used by deep layers throughout the same dense block.
2. The weights of the transition layers also spread their weight across all layers within the preceding dense block, indicating information flow from the first to the last layers of the DenseNet through few indirections.

3. The layers within the second and third dense block consistently assign the least weight to the outputs of the transition layer (the top row of the triangles), indicating that the transition layer outputs many redundant features (with low weight on average). This is in keeping with the strong results of DenseNet-BC, where exactly these outputs are compressed.

4. Although the final classification layer, shown on the very right, also uses weights across the entire dense block, there seems to be a concentration towards final feature-maps, suggesting that there may be some more high-level features produced late in the network.
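The per-connection statistic behind this heat-map can be sketched as follows. This is an illustrative reconstruction with random weights, not the trained network: the paper normalizes the L1 norm by the number of input feature-maps, while here we simply average |w| over the weight slice belonging to each source.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dense block: the 3x3 conv of layer l has weight shape
# (k, k0 + l*k, 3, 3), i.e. it sees the block input (k0 maps)
# plus k maps from each of the l earlier layers.
k0, k, L = 8, 4, 6
weights = [rng.standard_normal((k, k0 + l * k, 3, 3)) for l in range(L)]

def avg_abs_weight(w_l, s, k0, k):
    """Average |weight| that layer l assigns to source s
    (s = 0 is the block input; s >= 1 is the output of layer s-1)."""
    lo, hi = (0, k0) if s == 0 else (k0 + (s - 1) * k, k0 + s * k)
    return np.abs(w_l[:, lo:hi]).mean()

# Entry (l, s): dependency of layer l on source s, analogous to the
# heat-map of Figure 5 (NaN where no connection exists).
heat = np.full((L, L + 1), np.nan)
for l, w_l in enumerate(weights):
    for s in range(l + 1):        # layer l only sees sources 0..l
        heat[l, s] = avg_abs_weight(w_l, s, k0, k)
```

In the trained network, a row of `heat` with uniformly non-negligible entries corresponds to observation 1 above: the layer draws on many earlier feature-maps rather than just the most recent one.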
# 6. Conclusion

We proposed a new convolutional network architecture, which we refer to as the Dense Convolutional Network (DenseNet). It introduces direct connections between any two layers with the same feature-map size. We showed that DenseNets scale naturally to hundreds of layers, while exhibiting no optimization difficulties.

Figure 5: The average absolute filter weights of convolutional layers in a trained DenseNet. The color of pixel (s, ℓ) encodes the average L1 norm (normalized by the number of input feature-maps) of the weights connecting convolutional layer s to ℓ within a dense block. The three columns highlighted by black rectangles correspond to the two transition layers and the classification layer. The first row encodes the weights connected to the input layer of the dense block.

In our experiments,
DenseNets tend to yield consistent improvements in accuracy with a growing number of parameters, without any signs of performance degradation or overfitting. Under multiple settings, they achieved state-of-the-art results across several highly competitive datasets. Moreover, DenseNets require substantially fewer parameters and less computation to achieve state-of-the-art performance. Because we adopted hyperparameter settings optimized for residual networks in our study, we believe that further gains in accuracy may be obtained by more detailed tuning of hyperparameters and learning rate schedules.

Whilst following a simple connectivity rule, DenseNets naturally integrate the properties of identity mappings, deep supervision, and diversified depth. They allow feature reuse throughout the networks and can consequently learn more compact and, according to our experiments, more accurate models. Because of their compact internal representations and reduced feature redundancy, DenseNets may be good feature extractors for various computer vision tasks that build on convolutional features, e.g., [4, 5]. We plan to study such feature transfer with DenseNets in future work.
Acknowledgements. The authors are supported in part by NSF grants III-1618134, III-1526012, and IIS-1149882, the Office of Naval Research Grant N00014-17-1-2175, and the Bill and Melinda Gates Foundation. GH is supported by the International Postdoctoral Exchange Fellowship Program of the China Postdoctoral Council (No. 20150015). ZL is supported by the National Basic Research Program of China Grants 2011CBA00300 and 2011CBA00301, and NSFC 61361136003. We also thank Daniel Sedra, Geoff Pleiss and Yu Sun for many insightful discussions.

# References

[1] C. Cortes, X. Gonzalvo, V. Kuznetsov, M. Mohri, and S. Yang. AdaNet: Adaptive structural learning of artificial neural networks. arXiv preprint arXiv:1607.01097, 2016. 2

[2] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009. 5

[3] S. E. Fahlman and C. Lebiere. The cascade-correlation learning architecture. In NIPS, 1989. 2
[4] J. R. Gardner, M. J. Kusner, Y. Li, P. Upchurch, K. Q. Weinberger, and J. E. Hopcroft. Deep manifold traversal: Changing labels with convolutional features. arXiv preprint arXiv:1511.06421, 2015. 8

[5] L. Gatys, A. Ecker, and M. Bethge. A neural algorithm of artistic style. Nature Communications, 2015. 8

[6] X. Glorot, A. Bordes, and Y. Bengio. Deep sparse rectifier neural networks. In AISTATS, 2011. 3

[7] I. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio. Maxout networks. In ICML, 2013. 5

[8] S. Gross and M. Wilber. Training and investigating residual nets, 2016. 5, 6

[9] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Hypercolumns for object segmentation and fine-grained localization. In CVPR, 2015. 2
[10] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In ICCV, 2015. 5

[11] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016. 1, 2, 3, 4, 5, 6

[12] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In ECCV, 2016. 2, 3, 5, 7

[13] G. Huang, Y. Sun, Z. Liu, D. Sedra, and K. Q. Weinberger. Deep networks with stochastic depth. In ECCV, 2016. 1, 2, 5, 7

[14] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015. 3

[15] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Tech Report, 2009. 5

[16] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012. 3, 7
[17] G. Larsson, M. Maire, and G. Shakhnarovich. FractalNet: Ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648, 2016. 1, 3, 5, 6

[18] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541–551, 1989. 1

[19] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. 1, 3

[20] C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply-supervised nets. In AISTATS, 2015. 2, 3, 5, 7
1608.06993#49
Densely Connected Convolutional Networks
Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet .
http://arxiv.org/pdf/1608.06993
Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
cs.CV, cs.LG
CVPR 2017
null
cs.CV
20160825
20180128
[ { "id": "1605.07716" }, { "id": "1605.07146" }, { "id": "1603.08029" }, { "id": "1607.01097" }, { "id": "1604.03640" }, { "id": "1511.06421" }, { "id": "1605.07648" }, { "id": "1707.06990" } ]
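The summary above states that an L-layer dense block has L(L+1)/2 direct connections, with each layer receiving the concatenated feature maps of all preceding layers; the DenseNet paper likewise notes that with k0 input maps and growth rate k, the l-th layer sees k0 + k×(l−1) maps. A minimal pure-Python sketch of this bookkeeping (the function names and the toy numbers are illustrative, not from the authors' code):

```python
def num_direct_connections(L):
    """Count direct connections in an L-layer dense block: layer l
    (1-indexed) is wired to the block input and to all l-1 preceding
    layers, giving 1 + 2 + ... + L = L*(L+1)//2 connections in total."""
    return sum(range(1, L + 1))

def input_widths(k0, k, L):
    """Number of input feature maps seen by each layer of a dense block
    with k0 input maps and growth rate k: layer l receives the
    concatenation of k0 + k*(l-1) maps."""
    return [k0 + k * (l - 1) for l in range(1, L + 1)]

print(num_direct_connections(4))  # 10, i.e. 4*5/2
print(input_widths(16, 12, 4))    # [16, 28, 40, 52]
```

The linear growth of `input_widths` is why a small growth rate k suffices: each layer adds only k maps of new "collective knowledge" to the block.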
1608.06993
50
[21] Q. Liao and T. Poggio. Bridging the gaps between residual learning, recurrent neural networks and visual cortex. arXiv preprint arXiv:1604.03640, 2016. 2 [22] M. Lin, Q. Chen, and S. Yan. Network in network. In ICLR, 2014. 3, 5 [23] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015. 2 [24] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop, 2011. 5 [25] M. Pezeshki, L. Fan, P. Brakel, A. Courville, and Y. Bengio. Deconstructing the ladder network architecture. In ICML, 2016. 3
1608.06993#50
1608.06993
51
[26] G. Pleiss, D. Chen, G. Huang, T. Li, L. van der Maaten, and K. Q. Weinberger. Memory-efficient implementation of densenets. arXiv preprint arXiv:1707.06990, 2017. 5 [27] A. Rasmus, M. Berglund, M. Honkala, H. Valpola, and T. Raiko. Semi-supervised learning with ladder networks. In NIPS, 2015. 3 [28] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio. Fitnets: Hints for thin deep nets. In ICLR, 2015. 5 [29] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. IJCV. 1, 7 [30] P. Sermanet, S. Chintala, and Y. LeCun. Convolutional neural networks applied to house numbers digit classification. In ICPR, pages 3288–3291. IEEE, 2012. 5
1608.06993#51
1608.06993
52
[31] P. Sermanet, K. Kavukcuoglu, S. Chintala, and Y. LeCun. Pedestrian detection with unsupervised multi-stage feature learning. In CVPR, 2013. 2 [32] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014. 5 [33] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 2014. 6 [34] R. K. Srivastava, K. Greff, and J. Schmidhuber. Training very deep networks. In NIPS, 2015. 1, 2, 5 [35] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In ICML, 2013. 5
1608.06993#52
1608.06993
53
[36] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, 2015. 2, 3 [37] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In CVPR, 2016. 2, 3, 4 [38] S. Targ, D. Almeida, and K. Lyman. Resnet in resnet: Generalizing residual architectures. arXiv preprint arXiv:1603.08029, 2016. 2 [39] J. Wang, Z. Wei, T. Zhang, and W. Zeng. Deeply-fused nets. arXiv preprint arXiv:1605.07716, 2016. 3 [40] B. M. Wilamowski and H. Yu. Neural network learning without backpropagation. IEEE Transactions on Neural Networks, 21(11):1793–1803, 2010. 2 [41] S. Yang and D. Ramanan. Multi-scale recognition with dag-cnns. In ICCV, 2015. 2
1608.06993#53
1608.04868
0
# TOWARDS MUSIC CAPTIONING: GENERATING MUSIC PLAYLIST DESCRIPTIONS Keunwoo Choi, György Fazekas, Mark Sandler Centre for Digital Music Queen Mary University of London [email protected] Brian McFee, Kyunghyun Cho Center for Data Science New York University {first.last}@nyu.edu # ABSTRACT Descriptions are often provided along with recommendations to help users' discovery. Recommending automatically generated music playlists (e.g. personalised playlists) introduces the problem of generating descriptions. In this paper, we propose a method for generating music playlist descriptions, which is called music captioning. In the proposed method, audio content analysis and natural language processing are adopted to utilise the information of each track. Figure 1. A block diagram of an RNN unit (left) and sequence-to-sequence module that is applied to English-Korean translation (right). # 1. INTRODUCTION
1608.04868#0
Towards Music Captioning: Generating Music Playlist Descriptions
Descriptions are often provided along with recommendations to help users' discovery. Recommending automatically generated music playlists (e.g. personalised playlists) introduces the problem of generating descriptions. In this paper, we propose a method for generating music playlist descriptions, which is called as music captioning. In the proposed method, audio content analysis and natural language processing are adopted to utilise the information of each track.
http://arxiv.org/pdf/1608.04868
Keunwoo Choi, George Fazekas, Brian McFee, Kyunghyun Cho, Mark Sandler
cs.MM, cs.AI, cs.CL
2 pages, ISMIR 2016 Late-breaking/session extended abstract
null
cs.MM
20160817
20170115
[ { "id": "1507.07998" } ]
1608.04868
1
Figure 1. A block diagram of an RNN unit (left) and sequence-to-sequence module that is applied to English-Korean translation (right). # 1. INTRODUCTION Motivation: One of the crucial problems in music discovery is to deliver the summary of music without playing it. One common method is to add descriptions to a music item or playlist, e.g. Getting emotional with the undisputed King of Pop 1 , Just the right blend of chilled-out acoustic songs to work, relax, think, and dream to 2 . These examples show that they are more than simple descriptions and even add value to the curated playlist as a product. There have been attempts to automate the generation of these descriptions. In [8], Eck et al. proposed to use social tags to describe each music item. Fields proposed a similar idea for playlists using social tags and a topic model [9] based on Latent Dirichlet Allocation [1]. Besides text, Bogdanov introduced music avatars, whose outlook - hair style, clothes, and accessories - describes the recommended music [2].
1608.04868#1
1608.04868
2
• Seq2seq: Sequence-to-sequence (seq2seq) learning refers to training a model whose input and output are both sequences (Figure 1, right). Seq2seq models can be used for machine translation, where a phrase in one language is summarised by an encoder RNN, which is followed by a decoder RNN that generates a phrase in another language [4]. • Word2vec: Word embeddings are distributed vector representations of words that aim to preserve the semantic relationships among words. One successful example is the word2vec algorithm, which is usually trained with large corpora in an unsupervised manner [13]. • ConvNets: Convolutional neural networks (ConvNets) have been extensively adopted in nearly every computer vision task and algorithm since the record-breaking performance of AlexNet [12]. ConvNets also show state-of-the-art results in many music information retrieval tasks including auto-tagging [5].
1608.04868#2
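The chunk above notes that word2vec embeddings aim to preserve semantic relationships among words; in practice, such vectors are compared with cosine similarity, so related words score higher than unrelated ones. A self-contained toy sketch (the three-dimensional "embeddings" below are invented for illustration, not trained word2vec vectors):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-d vectors, made up so that "king" and "queen" point in
# similar directions while "apple" does not:
emb = {
    "king":  [0.90, 0.10, 0.20],
    "queen": [0.85, 0.15, 0.25],
    "apple": [0.10, 0.90, 0.30],
}

# Semantically related words should score higher than unrelated ones.
assert cosine_similarity(emb["king"], emb["queen"]) > \
       cosine_similarity(emb["king"], emb["apple"])
```

Real word2vec models are trained on large corpora and typically use hundreds of dimensions, but the comparison step is the same.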