id | title | content | prechunk_id | postchunk_id | arxiv_id | references
---|---|---|---|---|---|---|
1608.08614#28 | What makes ImageNet good for transfer learning? | The results presented in Table 2 show that having more images per class with fewer classes results in features that perform very slightly better on PASCAL-DET, whereas for SUN-CLS, the performance is comparable across the two settings. # 5.6. How important is it to pre-train on classes that are also present in a target task? It is natural to expect that higher correlation between pre-training and transfer tasks leads to better performance on a transfer task. This indeed has been shown to be true in [44]. One possible source of correlation between pre-training and transfer tasks is classes common to both tasks. Figure 7: An illustration of the procedure used to split the ImageNet dataset. Splits were constructed in 2 different ways. The random split selects classes at random from the 1000 ImageNet classes. The minimal split is made in a manner that ensures no two classes in the same split have a common ancestor up to depth four of the WordNet tree. The collage in Figure 8 visualizes the random and minimal splits. | 1608.08614#27 | 1608.08614#29 | 1608.08614 | [
"1507.06550"
] |
1608.08614#29 | What makes ImageNet good for transfer learning? | In order to investigate how strong the influence of these common classes is, we ran an experiment where we removed all the classes from ImageNet that are contained in the PASCAL challenge. PASCAL has 20 classes, some of which map to more than one ImageNet class; thus, after applying this exclusion criterion we are only left with 771 ImageNet classes. Table 3 compares the results on PASCAL-DET when the PASCAL-removed-ImageNet is used for pre-training against the original ImageNet and a baseline of pre-training on the Places [46] dataset. The PASCAL-removed-ImageNet achieves a mAP of 57.8 (compared to 58.3 with the full ImageNet), indicating that training on ImageNet classes that are not present in PASCAL is sufficient to learn features that are also good for PASCAL classes. | 1608.08614#28 | 1608.08614#30 | 1608.08614 | [
"1507.06550"
] |
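The class-removal experiment in the row above amounts to filtering the 1000 ImageNet labels against the 20 PASCAL VOC categories before pre-training. Below is a minimal, hedged Python sketch; `imagenet_classes` and `pascal_to_imagenet` are hypothetical inputs (the paper does not publish its exact PASCAL-to-ImageNet mapping), but the PASCAL class names themselves are the standard VOC list.

```python
# Sketch of the "PASCAL-removed ImageNet" filtering step (helper names are assumptions).
PASCAL_CLASSES = [
    "aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat",
    "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person",
    "pottedplant", "sheep", "sofa", "train", "tvmonitor",
]

def remove_pascal_classes(imagenet_classes, pascal_to_imagenet):
    """Drop every ImageNet class that any PASCAL category maps to.

    imagenet_classes: list of (wordnet_id, human_name) tuples, length 1000.
    pascal_to_imagenet: dict mapping a PASCAL name to a set of wordnet_ids
        (one PASCAL class may map to several ImageNet classes).
    """
    banned = set()
    for pascal_name in PASCAL_CLASSES:
        banned |= set(pascal_to_imagenet.get(pascal_name, ()))
    # The paper reports 771 classes remain out of 1000 after this exclusion.
    return [(wnid, name) for wnid, name in imagenet_classes if wnid not in banned]
```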
1608.08614#30 | What makes ImageNet good for transfer learning? | # 6. Does data augmentation from non-target classes always improve performance? The analysis using PASCAL-removed ImageNet indicates that pre-training on non-PASCAL classes aids performance on PASCAL. This raises the question: is it always better to add pre-training data from additional classes that are not part of the target task? To investigate and test this hypothesis, we chose two different methods of splitting the ImageNet classes. The first is a random split, in which the 1000 ImageNet classes are split randomly; the second is a minimal split, in which the classes are deliberately split to ensure that similar classes are not in the same split (Figure 7). In order to determine if additional data helps performance for classes in split A, we pre-trained two CNNs: one for classifying all classes in split A and the other for classifying all classes in both splits A and B (i.e. the full dataset). | 1608.08614#29 | 1608.08614#31 | 1608.08614 | [
"1507.06550"
] |
1608.08614#31 | What makes ImageNet good for transfer learning? | We then finetuned the last layer of the network trained on the full dataset on split A only. If it is the case that additional data from split B helps performance on split A, then the CNN pre-trained with the full dataset should perform better than the CNN pre-trained only on split A. Figure 8: Visualization of the random and minimal splits used for testing: is adding more pre-training data always useful? The two minimal sets contain disparate sets of objects. The minimal splits A and B consist mostly of inanimate objects and living things, respectively. On the other hand, random splits contain semantically similar objects. Using the random split, Figure 9 shows that the results of this experiment confirm the intuition that additional data is indeed useful for both splits. However, under a random class split within ImageNet, we are almost certain to have extremely similar classes (e.g. two different breeds of dogs) ending up on the different sides of the split. So, what we have shown so far is that we can improve performance on, say, husky classification by also training on poodles. Hence, the motivation for the minimal split: does adding arbitrary, unrelated classes, such as fire trucks, help dog classification? | 1608.08614#30 | 1608.08614#32 | 1608.08614 | [
"1507.06550"
] |
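The "finetune only the last layer" protocol in the row above can be reproduced in any standard framework. Here is a minimal PyTorch sketch (PyTorch/torchvision is my choice for illustration, not the paper's setup, which used AlexNet in Caffe); `split_a_loader` is an assumed data loader over split A only.

```python
import torch
import torch.nn as nn
import torchvision

# Stand-in for an AlexNet already pre-trained on split A ∪ B (the full dataset).
model = torchvision.models.alexnet(num_classes=1000)

# Freeze everything, then replace and train only the final classifier
# on the 522 classes of split A.
for p in model.parameters():
    p.requires_grad = False
model.classifier[6] = nn.Linear(4096, 522)  # new last layer, trainable by default

optimizer = torch.optim.SGD(model.classifier[6].parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def finetune_last_layer(split_a_loader, epochs=1):
    model.train()
    for _ in range(epochs):
        for images, labels in split_a_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
```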
1608.08614#32 | What makes ImageNet good for transfer learning? | The classes in minimal split A do not share any common ancestor with minimal split B up until the nodes at depth 4 of the WordNet hierarchy (Figure 7). This ensures that any class in split A is sufficiently disjoint from split B. Split A has 522 classes and split B has 478 classes (N.B.: for consistency, random splits A and B also had the same number of classes). In order to intuitively understand the difference between min splits A and B, we have visualized a random sample of images in these splits in Figure 8. Min split A consists of mostly static images and min split B consists of living objects. Contrary to the earlier observation, Figure 9 shows that both min splits A and B perform better than the full dataset when we fi | 1608.08614#31 | 1608.08614#33 | 1608.08614 | [
"1507.06550"
] |
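One way to operationalize the depth-4 disjointness constraint described above is to map every class to the WordNet node at depth 4 on its hypernym chain and require these subtree roots to be disjoint across splits. The sketch below is my own interpretation of that constraint, not the authors' procedure; `hypernym_chains(class_id)` is an assumed helper returning root-to-class ancestor lists (in practice it could be built from a WordNet interface such as NLTK's).

```python
def depth4_ancestors(class_id, hypernym_chains, depth=4):
    """Return the node(s) at the given depth on each root-to-class hypernym chain
    (or the class itself if the chain is shorter). The WordNet root counts as depth 0."""
    nodes = set()
    for chain in hypernym_chains(class_id):      # chain = [root, ..., class_id]
        nodes.add(chain[min(depth, len(chain) - 1)])
    return nodes

def splits_are_minimal(split_a, split_b, hypernym_chains, depth=4):
    """True if the depth-4 subtrees containing split A's classes never contain
    a class from split B, and vice versa (an interpretation of the paper's rule)."""
    sub_a = set().union(*(depth4_ancestors(c, hypernym_chains, depth) for c in split_a))
    sub_b = set().union(*(depth4_ancestors(c, hypernym_chains, depth) for c in split_b))
    return sub_a.isdisjoint(sub_b)
```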
1608.08614#33 | What makes ImageNet good for transfer learning? | netune only the last layer. This result is quite surprising because it shows that, by finetuning the last layer of a network pre-trained on the full dataset, it is not possible [Figure 9 bar chart: accuracy on split A (y-axis, roughly 50-70) for Random Split A, Random Split B, Minimum Split A and Minimum Split B, comparing pre-training on the Full Dataset vs. the Split Dataset.] Figure 9: Does adding arbitrary classes to pre-training data always improve transfer performance? This question was tested by training two CNNs, one for classifying classes in split A and the other for classifying classes in both splits A and B. | 1608.08614#32 | 1608.08614#34 | 1608.08614 | [
"1507.06550"
] |
1608.08614#34 | What makes ImageNet good for transfer learning? | We then finetuned the CNN trained on both the splits on split A. If it is the case that adding more pre-training data helps, then the performance of the CNN pre-trained on both splits (black) should be higher than that of a CNN pre-trained on a single split (orange). For random splits, this indeed is the case, whereas for minimal splits adding more pre-training data hurts performance. This suggests that additional pre-training data is useful only if it is correlated to the target task. to match the performance of a network trained on just one split. We have observed that when training all the layers for an extensive amount of time (420K iterations), the accuracy on min split A does benefit from pre-training on split B, but the accuracy on min split B does not. One explanation could be that images in split B (e.g. person) are contained in images in split A (e.g. buildings, clothing) but not vice versa. While it might be possible to recover performance with very clever adjustments of learning rates, the current results suggest that training with data from unrelated classes may push the network into a local minimum from which it might be hard to find a better optimum, one that could otherwise be obtained by training the network from scratch. | 1608.08614#33 | 1608.08614#35 | 1608.08614 | [
"1507.06550"
] |
1608.08614#35 | What makes ImageNet good for transfer learning? | # 7. Discussion In this work we analyzed factors that affect the quality of ImageNet pre-trained features for transfer learning. Our goal was not to consider alternative neural network architectures, but rather to establish facts about which aspects of the training data are important for feature learning. The current consensus in the field is that the key to learning highly generalizable deep features is the large amounts of training data and the large number of classes. To quote the influential R-CNN paper: "...success resulted from training a large CNN on 1.2 million labeled images..." [12]. After the publication of R-CNN, most researchers assumed that the full ImageNet is necessary to pre-train good general-purpose features. Our work quantitatively questions this assumption, and yields some quite surprising results. For example, we have found that a sig- | 1608.08614#34 | 1608.08614#36 | 1608.08614 | [
"1507.06550"
] |
1608.08614#36 | What makes ImageNet good for transfer learning? | nificant reduction in the number of classes or the number of images used in pre-training has only a modest effect on transfer task performance. While we do not have an explanation as to the cause of this resilience, we list some speculative possibilities that should inform further study of this topic: • In our experiments, we investigated only one CNN architecture, AlexNet. While ImageNet-trained AlexNet features are currently the most popular starting point for fine-tuning on transfer tasks, there exist deeper architectures such as VGG [39], ResNet [15], and GoogLeNet [40]. It would be interesting to see if our findings hold up on deeper networks. If not, it might suggest that AlexNet capacity is less than previously thought. | 1608.08614#35 | 1608.08614#37 | 1608.08614 | [
"1507.06550"
] |
1608.08614#37 | What makes ImageNet good for transfer learning? | • Our results might indicate that researchers have been overestimating the amount of data required for learning good general CNN features. If that is the case, it might suggest that CNN training is not as data-hungry as previously thought. It would also suggest that beating ImageNet-trained features with models trained on a much bigger data corpus will be much harder than once thought. • Finally, it might be that the currently popular target tasks, such as PASCAL and SUN, are too similar to the original ImageNet task to really test the generalization of the learned features. Alternatively, perhaps a more appropriate approach to test the generalization is with much less fine-tuning (e.g. one-shot-learning) or no fine-tuning at all (e.g. nearest neighbour in the learned feature space). | 1608.08614#36 | 1608.08614#38 | 1608.08614 | [
"1507.06550"
] |
1608.08614#38 | What makes ImageNet good for transfer learning? | In conclusion, while the answer to the titular question "What makes ImageNet good for transfer learning?" still lacks a definitive answer, our results have shown that a lot of "folk wisdom" on why ImageNet works well is not accurate. We hope that this paper will pique our colleagues' curiosity and facilitate further research on this fascinating topic. # 8. Acknowledgements This work was supported in part by ONR MURI N00014-14-1-0671. We gratefully acknowledge NVIDIA corporation for the donation of K40 GPUs and access to the NVIDIA PSG cluster for this research. We would like to acknowledge the support from the Berkeley Vision and Learning Center (BVLC) and Berkeley DeepDrive (BDD). Minyoung Huh was partially supported by the Rose Hill Foundation. # References [1] P. Agrawal, J. Carreira, and J. | 1608.08614#37 | 1608.08614#39 | 1608.08614 | [
"1507.06550"
] |
1608.08614#39 | What makes ImageNet good for transfer learning? | Malik. Learning to see by moving. In Proceedings of the IEEE International Conference on Computer Vision, pages 37-45, 2015. [2] P. Agrawal, R. Girshick, and J. Malik. Analyzing the performance of multilayer neural networks for object recognition. In Computer Vision - ECCV 2014, pages 329-344. Springer, 2014. [3] H. Azizpour, A. Razavian, J. Sullivan, A. Maki, and S. | 1608.08614#38 | 1608.08614#40 | 1608.08614 | [
"1507.06550"
] |
1608.08614#40 | What makes ImageNet good for transfer learning? | Carlsson. From generic to specific deep representations for visual recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 36-45, 2015. [4] Y. Bengio, A. C. Courville, and P. Vincent. Unsupervised feature learning and deep learning: A review and new perspectives. CoRR, abs/1206.5538, 1, 2012. [5] H. Bourlard and Y. Kamp. | 1608.08614#39 | 1608.08614#41 | 1608.08614 | [
"1507.06550"
] |
1608.08614#41 | What makes ImageNet good for transfer learning? | Auto-association by multilayer perceptrons and singular value decomposition. Biological Cybernetics, 59(4-5):291-294, 1988. [6] J. Carreira, P. Agrawal, K. Fragkiadaki, and J. Malik. Human pose estimation with iterative error feedback. arXiv preprint arXiv:1507.06550, 2015. [7] J. Dai, K. He, and J. Sun. Instance-aware semantic segmentation via multi-task network cascades. arXiv preprint arXiv:1512.04412, 2015. | 1608.08614#40 | 1608.08614#42 | 1608.08614 | [
"1507.06550"
] |
1608.08614#42 | What makes ImageNet good for transfer learning? | [8] C. Doersch, A. Gupta, and A. A. Efros. Unsupervised visual representation learning by context prediction. In Proceedings of the IEEE International Conference on Computer Vision, pages 1422-1430, 2015. [9] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2625-2634, 2015. [10] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. Decaf: | 1608.08614#41 | 1608.08614#43 | 1608.08614 | [
"1507.06550"
] |
1608.08614#43 | What makes ImageNet good for transfer learning? | A deep convolutional activation feature for generic visual recognition. arXiv preprint arXiv:1310.1531, 2013. [11] C. Fellbaum. WordNet: An Electronic Lexical Database. Bradford Books, 1998. [12] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 580- | 1608.08614#42 | 1608.08614#44 | 1608.08614 | [
"1507.06550"
] |
1608.08614#44 | What makes ImageNet good for transfer learning? | 587. IEEE, 2014. [13] G. Gkioxari, R. Girshick, and J. Malik. Contextual action recognition with rcnn. In ICCV, 2015. [14] R. Goroshin, J. Bruna, J. Tompson, D. Eigen, and Y. LeCun. Unsupervised feature learning from temporal data. arXiv preprint arXiv:1504.02518, 2015. [15] K. He, X. Zhang, S. Ren, and J. Sun. | 1608.08614#43 | 1608.08614#45 | 1608.08614 | [
"1507.06550"
] |
1608.08614#45 | What makes ImageNet good for transfer learning? | Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015. [16] D. Jayaraman and K. Grauman. Learning image representations tied to ego-motion. In Proceedings of the IEEE International Conference on Computer Vision, pages 1413-1421, 2015. [17] Y. Jia. Caffe: An open source convolutional architecture for fast feature embedding. http://caffe.berkeleyvision.org/, 2013. [18] A. Joulin, L. van der Maaten, A. Jabri, and N. Vasilache. | 1608.08614#44 | 1608.08614#46 | 1608.08614 | [
"1507.06550"
] |
1608.08614#46 | What makes ImageNet good for transfer learning? | Learning visual features from large weakly supervised data. In ECCV, 2016. [19] A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3128-3137, 2015. [20] D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. [21] P. Krähenbühl, C. Doersch, J. Donahue, and T. Darrell. | 1608.08614#45 | 1608.08614#47 | 1608.08614 | [
"1507.06550"
] |
1608.08614#47 | What makes ImageNet good for transfer learning? | Data-dependent initializations of convolutional neural networks. In ICLR, 2016. [22] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012. [23] G. Larsson, M. Maire, and G. Shakhnarovich. Learning representations for automatic colorization. In ECCV, 2016. [24] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436-444, 2015. [25] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. | 1608.08614#46 | 1608.08614#48 | 1608.08614 | [
"1507.06550"
] |
1608.08614#48 | What makes ImageNet good for transfer learning? | Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541-551, 1989. [26] Z. Li and D. Hoiem. Learning without forgetting. In ECCV, 2016. [27] H. Mobahi, R. Collobert, and J. Weston. Deep learning from temporal coherence in video. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 737- | 1608.08614#47 | 1608.08614#49 | 1608.08614 | [
"1507.06550"
] |
1608.08614#49 | What makes ImageNet good for transfer learning? | 744. ACM, 2009. [28] M. Noroozi and F. Paolo. Unsupervised learning of visual representations by solving jigsaw puzzles. In ECCV, 2016. [29] B. A. Olshausen et al. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607-609, 1996. [30] A. Owens, P. Isola, J. McDermott, A. Torralba, E. Adelson, and F. William. | 1608.08614#48 | 1608.08614#50 | 1608.08614 | [
"1507.06550"
] |
1608.08614#50 | What makes ImageNet good for transfer learning? | Visually indicated sounds. In CVPR, 2016. [31] D. Pathak, P. Krähenbühl, J. Donahue, T. Darrell, and A. Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016. [32] M. Ranzato, F. J. Huang, Y.-L. Boureau, and Y. LeCun. Unsupervised learning of invariant feature hierarchies with applications to object recognition. In Computer Vision and Pattern Recognition, 2007. CVPR'07. IEEE Conference on, pages 1-8. IEEE, 2007. [33] A. Razavian, H. Azizpour, J. Sullivan, and S. | 1608.08614#49 | 1608.08614#51 | 1608.08614 | [
"1507.06550"
] |
1608.08614#51 | What makes ImageNet good for transfer learning? | Carlsson. CNN features off-the-shelf: an astounding baseline for recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 806-813, 2014. [34] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pages 91-99, 2015. [35] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 2015. [36] R. Salakhutdinov and G. E. Hinton. Deep Boltzmann machines. | 1608.08614#50 | 1608.08614#52 | 1608.08614 | [
"1507.06550"
] |
1608.08614#52 | What makes ImageNet good for transfer learning? | In International Conference on Artificial Intelligence and Statistics, pages 448-455, 2009. [37] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv preprint arXiv:1312.6229, 2013. [38] K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. In Advances in Neural Information Processing Systems, pages 568-576, 2014. [39] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014. [40] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, 2015. [41] X. Wang and A. Gupta. | 1608.08614#51 | 1608.08614#53 | 1608.08614 | [
"1507.06550"
] |
1608.08614#53 | What makes ImageNet good for transfer learning? | Unsupervised learning of visual representations using videos. In Proceedings of the IEEE International Conference on Computer Vision, pages 2794-2802, 2015. [42] P. Weinzaepfel, J. Revaud, Z. Harchaoui, and C. Schmid. DeepFlow: Large displacement optical flow with deep matching. In Proceedings of the IEEE International Conference on Computer Vision, pages 1385-1392, 2013. [43] L. Wiskott and T. J. Sejnowski. | 1608.08614#52 | 1608.08614#54 | 1608.08614 | [
"1507.06550"
] |
1608.08614#54 | What makes ImageNet good for transfer learning? | Slow feature analysis: Unsupervised learning of invariances. Neural Computation, 14(4):715-770, 2002. [44] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pages 3320-3328, 2014. [45] R. Zhang, P. Isola, and A. Efros. | 1608.08614#53 | 1608.08614#55 | 1608.08614 | [
"1507.06550"
] |
1608.08614#55 | What makes ImageNet good for transfer learning? | Colorful image colorization. In ECCV, 2016. [46] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition using places database. NIPS, 2014. | 1608.08614#54 | 1608.08614 | [
"1507.06550"
] |
|
1608.07905#0 | Machine Comprehension Using Match-LSTM and Answer Pointer | arXiv:1608.07905v2 [cs.CL] 7 Nov 2016 # Under review as a conference paper at ICLR 2017 # MACHINE COMPREHENSION USING MATCH-LSTM AND ANSWER POINTER Shuohang Wang School of Information Systems Singapore Management University [email protected] Jing Jiang School of Information Systems Singapore Management University [email protected] # ABSTRACT | 1608.07905#1 | 1608.07905 | [
"1602.04341"
] |
|
1608.07905#1 | Machine Comprehension Using Match-LSTM and Answer Pointer | Machine comprehension of text is an important problem in natural language pro- cessing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for eval- uating machine comprehension algorithms, partly because compared with previ- ous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architec- ture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al. (2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al. (2016) using logistic regression and manually crafted features. # INTRODUCTION Machine comprehension of text is one of the ultimate goals of natural language processing. While the ability of a machine to understand text can be assessed in many different ways, in recent years, several benchmark datasets have been created to focus on answering questions as a way to evaluate machine comprehension (Richardson et al., 2013; Hermann et al., 2015; Hill et al., 2016; Weston et al., 2016; Rajpurkar et al., 2016). In this setup, typically the machine is ï¬ rst presented with a piece of text such as a news article or a story. The machine is then expected to answer one or multiple questions related to the text. In most of the benchmark datasets, a question can be treated as a multiple choice question, whose correct answer is to be chosen from a set of provided candidate answers (Richardson et al., 2013; Hill et al., 2016). Presumably, questions with more given candidate answers are more challenging. The Stanford Question Answering Dataset (SQuAD) introduced recently by Rajpurkar et al. (2016) contains such more challenging questions whose correct answers can be any sequence of tokens from the given text. | 1608.07905#0 | 1608.07905#2 | 1608.07905 | [
"1602.04341"
] |
1608.07905#2 | Machine Comprehension Using Match-LSTM and Answer Pointer | Moreover, unlike some other datasets whose questions and answers were created automatically in Cloze style (Hermann et al., 2015; Hill et al., 2016), the questions and answers in SQuAD were created by humans through crowdsourcing, which makes the dataset more realistic. Given these advantages of the SQuAD dataset, in this paper, we focus on this new dataset to study machine comprehension of text. A sample piece of text and three of its associated questions are shown in Table 1. Traditional solutions to this kind of question answering tasks rely on NLP pipelines that involve mul- tiple steps of linguistic analyses and feature engineering, including syntactic parsing, named entity recognition, question classiï¬ cation, semantic parsing, etc. Recently, with the advances of applying neural network models in NLP, there has been much interest in building end-to-end neural architec- tures for various NLP tasks, including several pieces of work on machine comprehension (Hermann et al., 2015; Hill et al., 2016; Yin et al., 2016; Kadlec et al., 2016; Cui et al., 2016). However, given the properties of previous machine comprehension datasets, existing end-to-end neural architectures for the task either rely on the candidate answers (Hill et al., 2016; Yin et al., 2016) or assume that the | 1608.07905#1 | 1608.07905#3 | 1608.07905 | [
"1602.04341"
] |
1608.07905#3 | Machine Comprehension Using Match-LSTM and Answer Pointer | In 1870, Tesla moved to Karlovac, to attend school at the Higher Real Gymnasium, where he was profoundly influenced by a math teacher Martin Sekulić. The classes were held in German, as it was a school within the Austro-Hungarian Military Frontier. Tesla was able to perform integral calculus in his head, which prompted his teachers to believe that he was cheating. | 1608.07905#2 | 1608.07905#4 | 1608.07905 | [
"1602.04341"
] |
1608.07905#4 | Machine Comprehension Using Match-LSTM and Answer Pointer | He finished a four-year term in three years, graduating in 1873. 1. In what language were the classes given? German 2. Who was Tesla's main influence in Karlovac? Martin Sekulić 3. Why did Tesla go to Karlovac? attend school at the Higher Real Gymnasium Table 1: A paragraph from Wikipedia and three associated questions together with their answers, taken from the SQuAD dataset. The tokens in bold in the paragraph are our predicted answers while the texts next to the questions are the ground truth answers. answer is a single token (Hermann et al., 2015; Kadlec et al., 2016; Cui et al., 2016), which make these methods unsuitable for the SQuAD dataset. In this paper, we propose a new end-to-end neural architecture to address the machine comprehension problem as defined in the SQuAD dataset. Specifically, observing that in the SQuAD dataset many questions are paraphrases of sentences from the original text, we adopt a match-LSTM model that we developed earlier for textual entailment (Wang & Jiang, 2016). We further adopt the Pointer Net (Ptr-Net) model developed by Vinyals et al. (2015), which enables the predictions of tokens from the input sequence only rather than from a larger fixed vocabulary and thus allows us to generate answers that consist of multiple tokens from the original text. We propose two ways to apply the Ptr-Net model for our task: a sequence model and a boundary model. We also further extend the boundary model with a search mechanism. Experiments on the SQuAD dataset show that our two models both outperform the best performance reported by Rajpurkar et al. (2016). Moreover, using an ensemble of several of our models, we can achieve very competitive performance on SQuAD. | 1608.07905#3 | 1608.07905#5 | 1608.07905 | [
"1602.04341"
] |
1608.07905#5 | Machine Comprehension Using Match-LSTM and Answer Pointer | Our contributions can be summarized as follows: (1) We propose two new end-to-end neural network models for machine comprehension, which combine match-LSTM and Ptr-Net to handle the special properties of the SQuAD dataset. (2) We have achieved the performance of an exact match score of 67.9% and an F1 score of 77.0% on the unseen test dataset, which is much better than the feature-engineered solution (Rajpurkar et al., 2016). Our performance is also close to the state of the art on SQuAD, which is 71.6% in terms of exact match and 80.4% in terms of F1 from Salesforce Research. (3) Our further analyses of the models reveal some useful insights for further improving the method. Besides, we also made our code available online 1. # 2 METHOD In this section, we first briefl | 1608.07905#4 | 1608.07905#6 | 1608.07905 | [
"1602.04341"
] |
1608.07905#6 | Machine Comprehension Using Match-LSTM and Answer Pointer | y review match-LSTM and Pointer Net. These two pieces of existing work lay the foundation of our method. We then present our end-to-end neural architecture for machine comprehension. 2.1 MATCH-LSTM In a recent work on learning natural language inference, we proposed a match-LSTM model for predicting textual entailment (Wang & Jiang, 2016). In textual entailment, two sentences are given where one is a premise and the other is a hypothesis. To predict whether the premise entails the hypothesis, the match-LSTM model goes through the tokens of the hypothesis sequentially. At each position of the hypothesis, attention mechanism is used to obtain a weighted vector representation of the premise. This weighted premise is then to be combined with a vector representation of the current token of the hypothesis and fed into an LSTM, which we call the match-LSTM. The match-LSTM essentially sequentially aggregates the matching of the attention-weighted premise to each token of the hypothesis and uses the aggregated matching result to make a final prediction. # 1 https://github.com/shuohangwang/SeqMatchSeq [Figure 1 diagram: the two architectures, (a) Sequence Model and (b) Boundary Model, each stacking an LSTM preprocessing layer for P and for Q, a match-LSTM layer, and an Answer Pointer layer over an example question about Tesla.] Figure 1: An overview of our two models. Both models consist of an LSTM preprocessing layer, a match-LSTM layer and an Answer Pointer layer. For each match-LSTM in a particular direction, h_i, which is defined as H^q α_i, is computed using the α in the corresponding direction, as described in either Eqn. (2) or Eqn. (5). 2.2 POINTER NET Vinyals et al. (2015) proposed a Pointer Network (Ptr-Net) model to solve a special kind of problems where we want to generate an output sequence whose tokens must come from the input sequence. Instead of picking an output token from a fixed vocabulary, Ptr-Net uses attention mechanism as a pointer to select a position from the input sequence as an output symbol. The pointer mechanism has inspired some recent work on language processing (Gu et al., 2016; Kadlec et al., 2016). Here we adopt Ptr-Net in order to construct answers using tokens from the input text. | 1608.07905#5 | 1608.07905#7 | 1608.07905 | [
"1602.04341"
] |
1608.07905#7 | Machine Comprehension Using Match-LSTM and Answer Pointer | # 2.3 OUR METHOD Formally, the problem we are trying to solve can be formulated as follows. We are given a piece of text, which we refer to as a passage, and a question related to the passage. The passage is represented by matrix P ∈ R^{d×P}, where P is the length (number of tokens) of the passage and d is the dimensionality of word embeddings. Similarly, the question is represented by matrix Q ∈ R^{d×Q}, where Q is the length of the question. Our goal is to identify a subsequence from the passage as the answer to the question. As pointed out earlier, since the output tokens are from the input, we would like to adopt the Pointer Net for this problem. A straightforward way of applying Ptr-Net here is to treat an answer as a sequence of tokens from the input passage but ignore the fact that these tokens are consecutive in the original passage, because Ptr-Net does not make the consecutivity assumption. Specifically, we represent the answer as a sequence of integers a = (a_1, a_2, ...), where each a_i is an integer between 1 and P, indicating a certain position in the passage. Alternatively, if we want to ensure consecutivity, that is, if we want to ensure that we indeed select a subsequence from the passage as an answer, we can use the Ptr-Net to predict only the start and the end of an answer. In this case, the Ptr-Net only needs to select two tokens from the input passage, and all the tokens between these two tokens in the passage are treated as the answer. Specifically, we can represent the answer to be predicted as two integers a = (a_s, a_e), where a_s and a_e are integers between 1 and P. | 1608.07905#6 | 1608.07905#8 | 1608.07905 | [
"1602.04341"
] |
1608.07905#8 | Machine Comprehension Using Match-LSTM and Answer Pointer | We refer to the first setting above as a sequence model and the second setting above as a boundary model. For either model, we assume that a set of training examples in the form of triplets {(P_n, Q_n, a_n)}_{n=1}^{N} is given. An overview of the two neural network models is shown in Figure 1. Both models consist of three layers: (1) An LSTM preprocessing layer that preprocesses the passage and the question using LSTMs. (2) A match-LSTM layer that tries to match the passage against the question. (3) An Answer Pointer (Ans-Ptr) layer that uses Ptr-Net to select a set of tokens from the passage as the answer. The difference between the two models only lies in the third layer. # LSTM Preprocessing Layer The purpose for the LSTM preprocessing layer is to incorporate contextual information into the representation of each token in the passage and the question. We use a standard one-directional LSTM (Hochreiter & Schmidhuber, 1997) 2 to process the passage and the question separately, as shown below: | 1608.07905#7 | 1608.07905#9 | 1608.07905 | [
"1602.04341"
] |
1608.07905#9 | Machine Comprehension Using Match-LSTM and Answer Pointer | H^p = LSTM(P), H^q = LSTM(Q). (1) The resulting matrices H^p ∈ R^{l×P} and H^q ∈ R^{l×Q} are hidden representations of the passage and the question, where l is the dimensionality of the hidden vectors. In other words, the ith column vector h^p_i (or h^q_i) in H^p (or H^q) represents the ith token in the passage (or the question) together with some contextual information from the left. # Match-LSTM Layer We apply the match-LSTM model (Wang & Jiang, 2016) proposed for textual entailment to our machine comprehension problem by treating the question as a premise and the passage as a hypothesis. The match-LSTM sequentially goes through the passage. At position i of the passage, it first uses the standard word-by-word attention mechanism to obtain attention weight vector α_i ∈ R^Q as follows: | 1608.07905#8 | 1608.07905#10 | 1608.07905 | [
"1602.04341"
] |
1608.07905#10 | Machine Comprehension Using Match-LSTM and Answer Pointer | G_i = tanh(W^q H^q + (W^p h^p_i + W^r h^r_{i-1} + b^p) ⊗ e_Q), α_i = softmax(w^T G_i + b ⊗ e_Q), (2) where W^q, W^p, W^r ∈ R^{l×l}, b^p, w ∈ R^l and b ∈ R are parameters to be learned, h^r_{i-1} ∈ R^l is the hidden vector of the one-directional match-LSTM (to be explained below) at position i-1, and the outer product (· ⊗ e_Q) produces a matrix or row vector by repeating the vector or scalar on the left for Q times. Essentially, the resulting attention weight α_{i,j} above indicates the degree of matching between the ith token in the passage with the jth token in the question. Next, we use the attention weight vector α_i to obtain a weighted version of the question and combine it with the current token of the passage to form a vector z_i: z_i = [h^p_i ; H^q α_i^T]. (3) This vector z_i is fed into a standard one-directional LSTM to form our so-called match-LSTM: h^r_i = LSTM(z_i, h^r_{i-1}), (4) where h^r_i ∈ R^l. | 1608.07905#9 | 1608.07905#11 | 1608.07905 | [
"1602.04341"
] |
1608.07905#11 | Machine Comprehension Using Match-LSTM and Answer Pointer | We further build a similar match-LSTM in the reverse direction. The purpose is to obtain a representation that encodes the contexts from both directions for each token in the passage. To build this reverse match-LSTM, we first define G_i = tanh(W^q H^q + (W^p h^p_i + W^r h^r_{i+1} + b^p) ⊗ e_Q), α_i = softmax(w^T G_i + b ⊗ e_Q), (5) where the hidden states now come from the reverse-direction match-LSTM. 2 As the output gates in the preprocessing layer affect the final performance little, we remove it in our experiments. | 1608.07905#10 | 1608.07905#12 | 1608.07905 | [
"1602.04341"
] |
1608.07905#12 | Machine Comprehension Using Match-LSTM and Answer Pointer | Note that the parameters here (W^q, W^p, W^r, b^p, w and b) are the same as used in Eqn. (2). We then define z_i in a similar way and finally define h^r_i to be the hidden representation at position i produced by the match-LSTM in the reverse direction. Let H^r_fwd ∈ R^{l×P} represent the hidden states [h^r_1, h^r_2, ..., h^r_P] of the forward match-LSTM and H^r_bwd ∈ R^{l×P} represent those of the reverse match-LSTM. We define H^r ∈ R^{2l×P} as the concatenation of the two: H^r = [H^r_fwd ; H^r_bwd]. (6) # Answer Pointer Layer The top layer, the Answer Pointer (Ans-Ptr) layer, is motivated by the Pointer Net introduced by Vinyals et al. (2015). This layer uses the sequence H^r as input. Recall that we have two different models: The sequence model produces a sequence of answer tokens but these tokens may not be consecutive in the original passage. The boundary model produces only the start token and the end token of the answer, and then all the tokens between these two in the original passage are considered to be the answer. We now explain the two models separately. The Sequence Model: Recall that in the sequence model, the answer is represented by a sequence of integers a = (a_1, a_2, ...) indicating the positions of the selected tokens in the original passage. The Ans-Ptr layer models the generation of these integers in a sequential manner. Because the length of an answer is not fixed, in order to stop generating answer tokens at a certain point, we allow each a_k to take up an integer value between 1 and P+1, where P+1 is a special value indicating the end of the answer. Once a_k is set to be P+1, the generation of the answer stops. In order to generate the kth answer token indicated by a_k, | 1608.07905#11 | 1608.07905#13 | 1608.07905 | [
"1602.04341"
] |
1608.07905#13 | Machine Comprehension Using Match-LSTM and Answer Pointer | first, the attention mechanism is used again to obtain an attention weight vector β_k ∈ R^{(P+1)}, where β_{k,j} (1 ≤ j ≤ P+1) is the probability of selecting the jth token from the passage as the kth token in the answer, and β_{k,(P+1)} is the probability of stopping the answer generation at position k. β_k is modeled as follows: F_k = tanh(V H̃^r + (W^a h^a_{k-1} + b^a) ⊗ e_{(P+1)}), (7) β_k = softmax(v^T F_k + c ⊗ e_{(P+1)}), (8) where H̃^r ∈ R^{2l×(P+1)} is the concatenation of H^r with a zero vector, defined as H̃^r = [H^r; 0], V ∈ R^{l×2l}, W^a ∈ R^{l×l}, b^a, v ∈ R^l and c ∈ R are parameters to be learned, (· ⊗ e_{(P+1)}) follows the same definition as before, and h^a_{k-1} ∈ R^l is the hidden vector at position k-1 of an answer LSTM as defined below: h^a_k = LSTM(H̃^r β_k^T, h^a_{k-1}). (9) We can then model the probability of generating the answer sequence as p(a|H^r) = ∏_k p(a_k|a_1, a_2, ..., a_{k-1}, H^r), (10) and p(a_k = j|a_1, a_2, ..., a_{k-1}, H^r) = β_{k,j}. (11) | 1608.07905#12 | 1608.07905#14 | 1608.07905 | [
"1602.04341"
] |
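To make the match-LSTM equations above concrete, here is a minimal PyTorch sketch of one forward-direction step, i.e. Eqns (2)-(4). This is my own paraphrase of the equations, not the authors' implementation (theirs is in the linked SeqMatchSeq repository); shapes follow the paper's notation with hidden size l and question length Q, and the zero initialization is only for brevity.

```python
import torch
import torch.nn as nn

class MatchLSTMStep(nn.Module):
    def __init__(self, l):
        super().__init__()
        self.Wq = nn.Linear(l, l, bias=False)   # W^q
        self.Wp = nn.Linear(l, l, bias=False)   # W^p
        self.Wr = nn.Linear(l, l, bias=False)   # W^r
        self.bp = nn.Parameter(torch.zeros(l))  # b^p
        self.w  = nn.Parameter(torch.zeros(l))  # w
        self.b  = nn.Parameter(torch.zeros(1))  # b
        self.cell = nn.LSTMCell(2 * l, l)       # match-LSTM over z_i = [h^p_i ; H^q alpha_i^T]

    def forward(self, Hq, hp_i, state):
        # Hq: (Q, l) question encoding; hp_i: (l,) current passage token; state: (h, c).
        h_prev, c_prev = state
        G = torch.tanh(self.Wq(Hq) + (self.Wp(hp_i) + self.Wr(h_prev) + self.bp))  # Eqn (2), (Q, l)
        alpha = torch.softmax(G @ self.w + self.b, dim=0)                          # Eqn (2), (Q,)
        z = torch.cat([hp_i, Hq.t() @ alpha], dim=0)                               # Eqn (3), (2l,)
        h, c = self.cell(z.unsqueeze(0), (h_prev.unsqueeze(0), c_prev.unsqueeze(0)))  # Eqn (4)
        return alpha, (h.squeeze(0), c.squeeze(0))
```

The reverse-direction match-LSTM of Eqn (5) would reuse the same parameters while scanning the passage right to left.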
1608.07905#14 | Machine Comprehension Using Match-LSTM and Answer Pointer | To train the model, we minimize the following loss function based on the training examples: - ∑_{n=1}^{N} log p(a_n|P_n, Q_n). (12) The Boundary Model: The boundary model works in a way very similar to the sequence model above, except that instead of predicting a sequence of indices a_1, a_2, ..., we only need to predict two indices a_s and a_e. So the main difference from the sequence model above is that in the boundary model we do not need to add the zero padding to H^r, and the probability of generating an answer is simply modeled as p(a|H^r) = p(a_s|H^r) p(a_e|a_s, H^r). | 1608.07905#13 | 1608.07905#15 | 1608.07905 | [
"1602.04341"
] |
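For the boundary model, the objective in Eqns (12)-(13) reduces to a sum of two cross-entropy terms over the start and end distributions. A small NumPy illustration of that per-example loss (my own rendering; `beta_start` and `beta_end` stand for the two pointer distributions over passage positions):

```python
import numpy as np

def boundary_nll(beta_start, beta_end, a_s, a_e):
    """Negative log-likelihood of one answer span under the boundary model:
    -log p(a) = -log p(a_s) - log p(a_e | a_s), per Eqns (12)-(13).
    beta_start, beta_end: probability vectors over passage positions (already softmaxed).
    a_s, a_e: 0-based gold start/end indices."""
    return -np.log(beta_start[a_s]) - np.log(beta_end[a_e])

# Toy check: a 5-token passage with the gold span at positions 2..3.
beta_start = np.array([0.05, 0.10, 0.70, 0.10, 0.05])
beta_end   = np.array([0.02, 0.03, 0.15, 0.75, 0.05])
print(boundary_nll(beta_start, beta_end, a_s=2, a_e=3))  # ~0.644
```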
1608.07905#15 | Machine Comprehension Using Match-LSTM and Answer Pointer | (13) 5 # Under review as a conference paper at ICLR 2017 l |θ| Exact Match Test Dev Dev F1 Random Guess Logistic Regression DCR - - - 0 - - 1.1 40.0 62.5 1.3 40.4 62.5 4.1 51.0 71.2 Match-LSTM with Ans-Ptr (Sequence) Match-LSTM with Ans-Ptr (Boundary) Match-LSTM with Ans-Ptr (Boundary+Search) Match-LSTM with Ans-Ptr (Boundary+Search) Match-LSTM with Ans-Ptr (Boundary+Search+b) Match-LSTM with Bi-Ans-Ptr (Boundary+Search+b) 150 150 150 300 150 150 882K 54.4 882K 61.1 882K 63.0 3.2M 63.1 1.1M 63.4 1.4M 64.1 - - - - - 64.7 68.2 71.2 72.7 72.7 73.0 73.9 Match-LSTM with Ans-Ptr (Boundary+Search+en) 150 882K 67.6 67.9 76.8 Test 4.3 51.0 71.0 - - - - - 73.7 77.0 Table 2: Experiment Results. Here â Searchâ refers to globally searching the spans with no more than 15 tokens, â bâ refers to using bi-directional pre-processing LSTM, and â enâ refers to ensemble method. | 1608.07905#14 | 1608.07905#16 | 1608.07905 | [
"1602.04341"
] |
1608.07905#16 | Machine Comprehension Using Match-LSTM and Answer Pointer | We further extend the boundary model by incorporating a search mechanism. Speciï¬ cally, during prediction, we try to limit the length of the span and globally search the span with the highest probability computed by p(as) à p(ae). Besides, as the boundary has a sequence of ï¬ xed number of values, bi-directional Ans-Ptr can be simply combined to ï¬ ne-tune the correct span. # 3 EXPERIMENTS In this section, we present our experiment results and perform some analyses to better understand how our models works. # 3.1 DATA We use the Stanford Question Answering Dataset (SQuAD) v1.1 to conduct our experiments. Pas- sages in SQuAD come from 536 articles from Wikipedia covering a wide range of topics. Each passage is a single paragraph from a Wikipedia article, and each passage has around 5 questions associated with it. In total, there are 23,215 passages and 107,785 questions. The data has been split into a training set (with 87,599 question-answer pairs), a development set (with 10,570 question- answer pairs) and a hidden test set. 3.2 EXPERIMENT SETTINGS | 1608.07905#15 | 1608.07905#17 | 1608.07905 | [
"1602.04341"
] |
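The search step described above (pick the span of at most 15 tokens that maximizes p(a_s) * p(a_e)) is a simple nested scan over positions. A hedged NumPy sketch of that procedure as described in the text, not the authors' code:

```python
import numpy as np

def search_best_span(p_start, p_end, max_len=15):
    """Return (s, e, score) maximizing p_start[s] * p_end[e]
    subject to s <= e < s + max_len (0-based indices)."""
    best = (0, 0, -1.0)
    n = len(p_start)
    for s in range(n):
        for e in range(s, min(s + max_len, n)):
            score = p_start[s] * p_end[e]
            if score > best[2]:
                best = (s, e, score)
    return best

p_start = np.array([0.1, 0.6, 0.1, 0.1, 0.1])
p_end   = np.array([0.1, 0.1, 0.2, 0.5, 0.1])
print(search_best_span(p_start, p_end))  # (1, 3, 0.3)
```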
1608.07905#17 | Machine Comprehension Using Match-LSTM and Answer Pointer | We ï¬ rst tokenize all the passages, questions and answers. The resulting vocabulary contains 117K unique words. We use word embeddings from GloVe (Pennington et al., 2014) to initialize the model. Words not found in GloVe are initialized as zero vectors. The word embeddings are not updated during the training of the model. The dimensionality l of the hidden layers is set to be 150 or 300. We use ADAMAX (Kingma & Ba, 2015) with the coefï¬ cients β1 = 0.9 and β2 = 0.999 to optimize the model. Each update is computed through a minibatch of 30 instances. We do not use L2-regularization. The performance is measured by two metrics: percentage of exact match with the ground truth answers, and word-level F1 score when comparing the tokens in the predicted answers with the tokens in the ground truth answers. Note that in the development set and the test set each question has around three ground truth answers. F1 scores with the best matching answers are used to compute the average F1 score. 3.3 RESULTS The results of our models as well as the results of the baselines given by Rajpurkar et al. (2016) and Yu et al. (2016) are shown in Table 2. We can see that both of our two models have clearly outper- 6 | 1608.07905#16 | 1608.07905#18 | 1608.07905 | [
"1602.04341"
] |
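A hedged PyTorch rendering of the optimization setup described above (frozen GloVe vectors with zero rows for out-of-vocabulary words, Adamax with beta1=0.9 and beta2=0.999, minibatches of 30, no L2 regularization). The `glove_matrix` and `model` objects below are stand-ins just to make the configuration runnable:

```python
import torch
import torch.nn as nn

# Stand-ins: a zero "GloVe" matrix (OOV rows stay zero) and a trivial encoder.
glove_matrix = torch.zeros(117_000, 300)   # ~117K-word vocabulary of the tokenized data
model = nn.LSTM(input_size=300, hidden_size=150)

# Word embeddings initialized from GloVe and not updated during training.
embedding = nn.Embedding.from_pretrained(glove_matrix, freeze=True)

# Adamax with the coefficients reported in the paper and no weight decay.
optimizer = torch.optim.Adamax(model.parameters(), betas=(0.9, 0.999), weight_decay=0.0)

BATCH_SIZE = 30  # each update is computed from a minibatch of 30 instances
```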
1608.07905#18 | Machine Comprehension Using Match-LSTM and Answer Pointer | [Figure 2 attention heatmaps: three panels, one per question over the same paragraph ("In 1870, Tesla moved to Karlovac to attend school at the Higher Real Gymnasium ..."), with the answers "German", "Martin Sekulić", and "attend school at the Higher Real Gymnasium".] Figure 2: Visualization of the attention weights α for three questions associated with the same passage. formed the logistic regression model by Rajpurkar et al. (2016), which relies on carefully designed features. Furthermore, our boundary model has outperformed the sequence model, achieving an exact match score of 61.1% and an F1 score of 71.2%. In particular, in terms of the exact match score, the boundary model has a clear advantage over the sequence model. The improvement of our models over the logistic regression model shows that our end-to-end neural network models without much feature engineering are very effective on this task and this dataset. Considering the effectiveness of the boundary model, we further explore this model. Observing that most of the answers are spans with relatively small sizes, we simply limit the largest predicted span to have no more than 15 tokens and conducted an experiment with span searching. This resulted in 1.5% improvement in F1 on the development data and that outperformed the DCR model (Yu et al., 2016), which also introduced some language features such as POS and NE into their model. | 1608.07905#17 | 1608.07905#19 | 1608.07905 | [
"1602.04341"
] |
1608.07905#19 | Machine Comprehension Using Match-LSTM and Answer Pointer | Besides, we tried to increase the memory dimension l in the model, add a bi-directional pre-processing LSTM, or add a bi-directional Ans-Ptr. The improvement on the development data using the first two methods is quite small, while by adding Bi-Ans-Ptr with a bi-directional pre-processing LSTM, we can get 1.2% improvement in F1. Finally, we explore the ensemble method by simply computing the product of the boundary probabilities collected from 5 boundary models and then searching the most likely span with no more than 15 tokens. This ensemble method achieved the best performance as shown in the table. 3.4 FURTHER ANALYSES To better understand the strengths and weaknesses of our models, we perform some further analyses of the results below. First, we suspect that longer answers are harder to predict. To verify this hypothesis, we analysed the performance in terms of both exact match and F1 score with respect to the answer length on the development set. For example, for questions whose answers contain more than 9 tokens, the F1 score of the boundary model drops to around 55% and the exact match score drops to only around 30%, compared to the F1 score and exact match score of close to 72% and 67%, respectively, for questions with single-token answers. And that supports our hypothesis. | 1608.07905#18 | 1608.07905#20 | 1608.07905 | [
"1602.04341"
] |
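The ensemble described above (multiply the boundary distributions of 5 models, then run the same constrained span search) is a few lines of NumPy. A hedged sketch of that combination step:

```python
import numpy as np

def ensemble_boundary(start_probs_list, end_probs_list):
    """Combine several boundary models by taking the product of their start/end
    distributions; the result can be fed to the same max-15-token span search."""
    p_start = np.prod(np.stack(start_probs_list, axis=0), axis=0)
    p_end   = np.prod(np.stack(end_probs_list, axis=0), axis=0)
    return p_start, p_end
```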
1608.07905#20 | Machine Comprehension Using Match-LSTM and Answer Pointer | # Under review as a conference paper at ICLR 2017 Next, we analyze the performance of our models on different groups of questions. We use a crude way to split the questions into different groups based on a set of question words we have deï¬ ned, including â what,â â how,â â who,â â when,â â which,â â where,â and â why.â These different question words roughly refer to questions with different types of answers. For example, â whenâ questions look for temporal expressions as answers, whereas â whereâ questions look for locations as answers. According to the performance on the development data set, our models work the best for â whenâ questions. | 1608.07905#19 | 1608.07905#21 | 1608.07905 | [
"1602.04341"
] |
1608.07905#21 | Machine Comprehension Using Match-LSTM and Answer Pointer | This may be because in this dataset temporal expressions are relatively easier to recog- nize. Other groups of questions whose answers are noun phrases, such as â whatâ questions, â whichâ questions and â whereâ questions, also get relatively better results. On the other hand, â whyâ ques- tions are the hardest to answer. This is not surprising because the answers to â whyâ questions can be very diverse, and they are not restricted to any certain type of phrases. Finally, we would like to check whether the attention mechanism used in the match-LSTM layer is effective in helping the model locate the answer. We show the attention weights α in Figure 2. In the ï¬ gure the darker the color is the higher the weight is. We can see that some words have been well aligned based on the attention weights. For example, the word â Germanâ in the passage is aligned well to the word â languageâ in the ï¬ rst question, and the model successfully predicts â Germanâ as the answer to the question. For the question word â whoâ in the second question, the word â teacherâ actually receives relatively higher attention weight, and the model has predicted the phrase â Martin Sekulicâ after that as the answer, which is correct. For the last question that starts with â whyâ , the attention weights are more evenly distributed and it is not clear which words have been aligned to â whyâ . # 4 RELATED WORK Machine comprehension of text has gained much attention in recent years, and increasingly re- searchers are building data-drive, end-to-end neural network models for the task. | 1608.07905#20 | 1608.07905#22 | 1608.07905 | [
"1602.04341"
] |
1608.07905#22 | Machine Comprehension Using Match-LSTM and Answer Pointer | We will ï¬ rst review the recently released datasets and then some end-to-end models on this task. # 4.1 DATASETS A number of datasets for studying machine comprehension were created in Cloze style by removing a single token from a sentence in the original corpus, and the task is to predict the missing word. For example, Hermann et al. (2015) created questions in Cloze style from CNN and Daily Mail highlights. Hill et al. (2016) created the Childrenâ s Book Test dataset, which is based on childrenâ s stories. Cui et al. (2016) released two similar datasets in Chinese, the People Daily dataset and the Childrenâ s Fairy Tale dataset. Instead of creating questions in Cloze style, a number of other datasets rely on human annotators to create real questions. Richardson et al. (2013) created the well-known MCTest dataset and Tapaswi et al. (2016) created the MovieQA dataset. In these datasets, candidate answers are provided for each question. Similar to these two datasets, the SQuAD dataset (Rajpurkar et al., 2016) was also created by human annotators. Different from the previous two, however, the SQuAD dataset does not provide candidate answers, and thus all possible subsequences from the given passage have to be considered as candidate answers. Besides the datasets above, there are also a few other datasets created for machine comprehension, such as WikiReading dataset (Hewlett et al., 2016) and bAbI dataset (Weston et al., 2016), but they are quite different from the datasets above in nature. 4.2 END-TO-END NEURAL NETWORK MODELS FOR MACHINE COMPREHENSION There have been a number of studies proposing end-to-end neural network models for machine comprehension. A common approach is to use recurrent neural networks (RNNs) to process the given text and the question in order to predict or generate the answers (Hermann et al., 2015). Attention mechanism is also widely used on top of RNNs in order to match the question with the given passage (Hermann et al., 2015; Chen et al., 2016). | 1608.07905#21 | 1608.07905#23 | 1608.07905 | [
"1602.04341"
] |
1608.07905#23 | Machine Comprehension Using Match-LSTM and Answer Pointer | Given that answers often come from the given passage, Pointer Network has been adopted in a few studies in order to copy tokens from the given passage as answers (Kadlec et al., 2016; Trischler et al., 2016). Compared with existing 8 # Under review as a conference paper at ICLR 2017 work, we use match-LSTM to match a question and a given passage, and we use Pointer Network in a different way such that we can generate answers that contain multiple tokens from the given passage. Memory Networks (Weston et al., 2015) have also been applied to machine comprehen- sion (Sukhbaatar et al., 2015; Kumar et al., 2016; Hill et al., 2016), but its scalability when applied to a large dataset is still an issue. In this work, we did not consider memory networks for the SQuAD dataset. # 5 CONCLUSIONS In this paper, We developed two models for the machine comprehension problem deï¬ ned in the Stanford Question Answering (SQuAD) dataset, both making use of match-LSTM and Pointer Net- work. Experiments on the SQuAD dataset showed that our second model, the boundary model, could achieve an exact match score of 67.6% and an F1 score of 77% on the test dataset, which is better than our sequence model and Rajpurkar et al. (2016)â s feature-engineered model. In the future, we plan to look further into the different types of questions and focus on those questions which currently have low performance, such as the â whyâ questions. | 1608.07905#22 | 1608.07905#24 | 1608.07905 | [
"1602.04341"
] |
1608.07905#24 | Machine Comprehension Using Match-LSTM and Answer Pointer | We also plan to test how our models could be applied to other machine comprehension datasets. # 6 ACKNOWLEDGMENTS We thank Pranav Rajpurkar for testing our model on the hidden test dataset and Percy Liang for helping us with the Dockerï¬ le for Codalab. # REFERENCES Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the CNN/Daily Mail reading comprehension task. In Proceedings of the Conference on Association for Compu- tational Linguistics, 2016. Yiming Cui, Ting Liu, Zhipeng Chen, Shijin Wang, and Guoping Hu. Consensus attention-based neural networks for chinese reading comprehension. In arXiv preprint arXiv:1607.02250, 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the Conference on Association for Computa- tional Linguistics, 2016. | 1608.07905#23 | 1608.07905#25 | 1608.07905 | [
"1602.04341"
] |
1608.07905#25 | Machine Comprehension Using Match-LSTM and Answer Pointer | Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Proceedings of the Conference on Advances in Neural Information Processing Systems, pp. 1693â 1701, 2015. Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey, and David Berthelot. WIKIREADING: A novel large-scale language under- standing task over wikipedia. In Proceedings of the Conference on Association for Computational Linguistics, 2016. | 1608.07905#24 | 1608.07905#26 | 1608.07905 | [
"1602.04341"
] |
1608.07905#26 | Machine Comprehension Using Match-LSTM and Answer Pointer | Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The Goldilocks principle: Read- ing childrenâ s books with explicit memory representations. In Proceedings of the International Conference on Learning Representations, 2016. Sepp Hochreiter and J¨urgen Schmidhuber. Long short-term memory. Neural computation, 9(8): 1735â 1780, 1997. Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. | 1608.07905#25 | 1608.07905#27 | 1608.07905 | [
"1602.04341"
] |
1608.07905#27 | Machine Comprehension Using Match-LSTM and Answer Pointer | Text understanding with the attention sum reader network. In Proceedings of the Conference on Association for Computational Linguistics, 2016. Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations, 2015. 9 # Under review as a conference paper at ICLR 2017 Ankit Kumar, Ozan Irsoy, Jonathan Su, James Bradbury, Robert English, Brian Pierce, Peter On- druska, Ishaan Gulrajani, and Richard Socher. | 1608.07905#26 | 1608.07905#28 | 1608.07905 | [
"1602.04341"
] |
1608.07905#28 | Machine Comprehension Using Match-LSTM and Answer Pointer | Ask me anything: Dynamic memory networks In Proceedings of the International Conference on Machine for natural language processing. Learning, 2016. Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global vectors for word In Proceedings of the Conference on Empirical Methods in Natural Language representation. Processing, 2014. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2016. Matthew Richardson, Christopher JC Burges, and Erin Renshaw. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2013. | 1608.07905#27 | 1608.07905#29 | 1608.07905 | [
"1602.04341"
] |
1608.07905#29 | Machine Comprehension Using Match-LSTM and Answer Pointer | Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Proceed- ings of the Conference on Advances in neural information processing systems, 2015. Makarand Tapaswi, Yukun Zhu, Rainer Stiefelhagen, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. MovieQA: Understanding stories in movies through question-answering. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2016. Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehension with the EpiReader. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2016. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In Proceedings of the Con- ference on Advances in Neural Information Processing Systems, 2015. Shuohang Wang and Jing Jiang. Learning natural language inference with LSTM. In Proceedings of the Conference on the North American Chapter of the Association for Computational Linguistics, 2016. | 1608.07905#28 | 1608.07905#30 | 1608.07905 | [
"1602.04341"
] |
1608.07905#30 | Machine Comprehension Using Match-LSTM and Answer Pointer | Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In Proceedings of the Inter- national Conference on Learning Representations, 2015. Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merri¨enboer, Armand Joulin, and Tomas Mikolov. Towards AI-complete question answering: A set of prerequisite toy tasks. In Proceedings of the International Conference on Learning Representations, 2016. Wenpeng Yin, Sebastian Ebert, and Hinrich Sch¨utze. Attention-based convolutional neural network for machine comprehension. arXiv preprint arXiv:1602.04341, 2016. Yang Yu, Wei Zhang, Kazi Hasan, Mo Yu, Bing Xiang, and Bowen Zhou. End-to-end answer chunk extraction and ranking for reading comprehension. arXiv preprint arXiv:1610.09996, 2016. | 1608.07905#29 | 1608.07905#31 | 1608.07905 | [
"1602.04341"
] |
1608.07905#31 | Machine Comprehension Using Match-LSTM and Answer Pointer | 10 # Under review as a conference paper at ICLR 2017 [Figure 3: four panels plotting F1 score and exact match for the sequence (s), boundary (b), and ensemble (e) models against answer length (top) and question type (bottom); axis ticks and bar values did not survive extraction.] Figure 3: Performance breakdown by answer lengths and question types. Top: Plot (1) shows the performance of our two models (where s refers to the sequence model, b refers to the boundary model, and e refers to the ensemble boundary model) over answers with different lengths. Plot (2) shows the numbers of answers with different lengths. Bottom: Plot (3) shows the performance of the two models on different types of questions. Plot (4) shows the numbers of different types of questions. # A APPENDIX We show the performance breakdown by answer lengths and question types for our sequence model, boundary model and the ensemble model in Figure 3. 11 | 1608.07905#30 | 1608.07905 | [
"1602.04341"
] |
|
1608.06993#0 | Densely Connected Convolutional Networks | arXiv:1608.06993v5 [cs.CV] 28 Jan 2018 # Densely Connected Convolutional Networks # Gao Huang* Cornell University [email protected] # Zhuang Liu* Tsinghua University [email protected] # Laurens van der Maaten Facebook AI Research [email protected] | 1608.06993#1 | 1608.06993 | [
"1605.07716"
] |
|
1608.06993#1 | Densely Connected Convolutional Networks | # Kilian Q. Weinberger Cornell University [email protected] # Abstract Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efï¬ cient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convo- lutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connectionsâ one between each layer and its subsequent layerâ our network has L(L+1) direct connections. For 2 each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several com- pelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage fea- ture reuse, and substantially reduce the number of parame- ters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain sig- niï¬ cant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high per- formance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet. | 1608.06993#0 | 1608.06993#2 | 1608.06993 | [
"1605.07716"
] |
1608.06993#2 | Densely Connected Convolutional Networks | # 1. Introduction Convolutional neural networks (CNNs) have become the dominant machine learning approach for visual object recognition. Although they were originally introduced over 20 years ago [18], improvements in computer hardware and network structure have enabled the training of truly deep CNNs only recently. The original LeNet5 [19] consisted of 5 layers, VGG featured 19 [29], and only last year Highway Figure 1: A 5-layer dense block with a growth rate of k = 4. Each layer takes all preceding feature-maps as input. Networks [34] and Residual Networks (ResNets) [11] have surpassed the 100-layer barrier. As CNNs become increasingly deep, a new research problem emerges: as information about the input or gra- dient passes through many layers, it can vanish and â | 1608.06993#1 | 1608.06993#3 | 1608.06993 | [
"1605.07716"
] |
1608.06993#3 | Densely Connected Convolutional Networks | wash outâ by the time it reaches the end (or beginning) of the network. Many recent publications address this or related problems. ResNets [11] and Highway Networks [34] by- pass signal from one layer to the next via identity connec- tions. Stochastic depth [13] shortens ResNets by randomly dropping layers during training to allow better information and gradient ï¬ ow. FractalNets [17] repeatedly combine sev- eral parallel layer sequences with different number of con- volutional blocks to obtain a large nominal depth, while maintaining many short paths in the network. Although these different approaches vary in network topology and training procedure, they all share a key characteristic: they create short paths from early layers to later layers. | 1608.06993#2 | 1608.06993#4 | 1608.06993 | [
"1605.07716"
] |
1608.06993#4 | Densely Connected Convolutional Networks | * Authors contributed equally. 1 In this paper, we propose an architecture that distills this insight into a simple connectivity pattern: to ensure maximum information flow between layers in the network, we connect all layers (with matching feature-map sizes) directly with each other. To preserve the feed-forward nature, each layer obtains additional inputs from all preceding layers and passes on its own feature-maps to all subsequent layers. Figure 1 illustrates this layout schematically. Crucially, in contrast to ResNets, we never combine features through summation before they are passed into a layer; instead, we combine features by concatenating them. Hence, the ℓ-th layer has ℓ inputs, consisting of the feature-maps of all preceding convolutional blocks. Its own feature-maps are passed on to all L − ℓ subsequent layers. This introduces L(L+1)/2 connections in an L-layer network, instead of just L, as in traditional architectures. Because of its dense connectivity pattern, we refer to our approach as Dense Convolutional Network (DenseNet). A possibly counter-intuitive effect of this dense connectivity pattern is that it requires fewer parameters than traditional convolutional networks, as there is no need to re-learn redundant feature-maps. Traditional feed-forward architectures can be viewed as algorithms with a state, which is passed on from layer to layer. Each layer reads the state from its preceding layer and writes to the subsequent layer. It changes the state but also passes on information that needs to be preserved. ResNets [11] make this information preservation explicit through additive identity transformations. Recent variations of ResNets [13] show that many layers contribute very little and can in fact be randomly dropped during training. This makes the state of ResNets similar to (unrolled) recurrent neural networks [21], but the number of parameters of ResNets is substantially larger because each layer has its own weights. Our proposed DenseNet architecture explicitly differentiates between information that is added to the network and information that is preserved. DenseNet layers are very narrow (e.g., 12 filters per layer), adding only a small set of feature-maps to the " | 1608.06993#3 | 1608.06993#5 | 1608.06993 | [
"1605.07716"
] |
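Editorial aside (not part of the paper): a quick arithmetic check of the connection count claimed in the chunk above — an L-layer block with dense connectivity has L(L+1)/2 direct connections, versus L for a plain feed-forward chain.

```python
# Connection count for dense connectivity vs. a plain chain of L layers.
def dense_connections(L: int) -> int:
    return L * (L + 1) // 2

print(dense_connections(5))  # 15 direct connections for the 5-layer block of Figure 1
print(5)                     # a traditional 5-layer chain has only 5
```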
1608.06993#5 | Densely Connected Convolutional Networks | collective knowledgeâ of the network and keep the remaining feature- maps unchangedâ and the ï¬ nal classiï¬ er makes a decision based on all feature-maps in the network. Besides better parameter efï¬ ciency, one big advantage of DenseNets is their improved ï¬ ow of information and gra- dients throughout the network, which makes them easy to train. Each layer has direct access to the gradients from the loss function and the original input signal, leading to an im- plicit deep supervision [20]. This helps training of deeper network architectures. Further, we also observe that dense connections have a regularizing effect, which reduces over- ï¬ tting on tasks with smaller training set sizes. We evaluate DenseNets on four highly competitive benchmark datasets (CIFAR-10, CIFAR-100, SVHN, and ImageNet). Our models tend to require much fewer param- eters than existing algorithms with comparable accuracy. Further, we signiï¬ cantly outperform the current state-of- the-art results on most of the benchmark tasks. # 2. Related Work | 1608.06993#4 | 1608.06993#6 | 1608.06993 | [
"1605.07716"
] |
1608.06993#6 | Densely Connected Convolutional Networks | The exploration of network architectures has been a part of neural network research since their initial discovery. The recent resurgence in popularity of neural networks has also revived this research domain. The increasing number of lay- ers in modern networks ampliï¬ es the differences between architectures and motivates the exploration of different con- nectivity patterns and the revisiting of old research ideas. A cascade structure similar to our proposed dense net- work layout has already been studied in the neural networks literature in the 1980s [3]. Their pioneering work focuses on fully connected multi-layer perceptrons trained in a layer- by-layer fashion. More recently, fully connected cascade networks to be trained with batch gradient descent were proposed [40]. Although effective on small datasets, this approach only scales to networks with a few hundred pa- rameters. In [9, 23, 31, 41], utilizing multi-level features in CNNs through skip-connnections has been found to be effective for various vision tasks. Parallel to our work, [1] derived a purely theoretical framework for networks with cross-layer connections similar to ours. Highway Networks [34] were amongst the ï¬ rst architec- tures that provided a means to effectively train end-to-end networks with more than 100 layers. Using bypassing paths along with gating units, Highway Networks with hundreds of layers can be optimized without difï¬ | 1608.06993#5 | 1608.06993#7 | 1608.06993 | [
"1605.07716"
] |
1608.06993#7 | Densely Connected Convolutional Networks | culty. The bypass- ing paths are presumed to be the key factor that eases the training of these very deep networks. This point is further supported by ResNets [11], in which pure identity mappings are used as bypassing paths. ResNets have achieved im- pressive, record-breaking performance on many challeng- ing image recognition, localization, and detection tasks, such as ImageNet and COCO object detection [11]. Re- cently, stochastic depth was proposed as a way to success- fully train a 1202-layer ResNet [13]. Stochastic depth im- proves the training of deep residual networks by dropping layers randomly during training. This shows that not all layers may be needed and highlights that there is a great amount of redundancy in deep (residual) networks. | 1608.06993#6 | 1608.06993#8 | 1608.06993 | [
"1605.07716"
] |
1608.06993#8 | Densely Connected Convolutional Networks | Our paper was partly inspired by that observation. ResNets with pre-activation also facilitate the training of state-of-the-art networks with > 1000 layers [12]. An orthogonal approach to making networks deeper (e.g., with the help of skip connections) is to increase the network width. The GoogLeNet [36, 37] uses an "Inception module" which concatenates feature-maps produced by filters of different sizes. In [38], a variant of ResNets with wide generalized residual blocks was proposed. In fact, simply increasing the number of filters in each layer of [Figure 2 schematic: input image -> Dense Block 1 -> convolution, pooling -> Dense Block 2 -> convolution, pooling -> Dense Block 3 -> pooling, linear layer -> prediction ("horse"); the rasterized panel text did not survive extraction, and the caption appears at the start of the next chunk.] | 1608.06993#7 | 1608.06993#9 | 1608.06993 | [
"1605.07716"
] |
1608.06993#9 | Densely Connected Convolutional Networks | Figure 2: A deep DenseNet with three dense blocks. The layers between two adjacent blocks are referred to as transition layers and change feature-map sizes via convolution and pooling. ResNets can improve its performance provided the depth is sufficient [42]. FractalNets also achieve competitive results on several datasets using a wide network structure [17]. Instead of drawing representational power from extremely deep or wide architectures, DenseNets exploit the potential of the network through feature reuse, yielding condensed models that are easy to train and highly parameter-efficient. Concatenating feature-maps learned by different layers increases variation in the input of subsequent layers and improves efficiency. This constitutes a major difference between DenseNets and ResNets. Compared to Inception networks [36, 37], which also concatenate features from dif- An advantage of ResNets is that the gradient can flow directly through the identity function from later layers to the earlier layers. However, the identity function and the output of H_ℓ are combined by summation, which may impede the information flow in the network. Dense connectivity. To further improve the information flow between layers we propose a different connectivity pattern: we introduce direct connections from any layer to all subsequent layers. Figure 1 illustrates the layout of the resulting DenseNet schematically. Consequently, the ℓ-th layer receives the feature-maps of all preceding layers, x_0, ..., x_{ℓ-1}, as input: | 1608.06993#8 | 1608.06993#10 | 1608.06993 | [
"1605.07716"
] |
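Editorial sketch (not the authors' reference implementation): the dense connectivity described above, where each layer consumes the concatenation of all preceding feature-maps and contributes k new ones through the basic BN-ReLU-Conv(3×3) composite function defined a little further on. Class and argument names are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class DenseBlockSketch(nn.Module):
    """Each layer H_l receives the concatenation [x_0, ..., x_{l-1}] and adds k maps."""
    def __init__(self, num_layers: int, in_channels: int, growth_rate: int):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(  # basic composite function: BN -> ReLU -> 3x3 conv
                nn.BatchNorm2d(in_channels + l * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + l * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            )
            for l in range(num_layers)
        ])

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # concatenate, never sum: the layer sees every earlier feature-map
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)
```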
1608.06993#10 | Densely Connected Convolutional Networks | ferent layers, DenseNets are simpler and more efficient. There are other notable network architecture innovations which have yielded competitive results. The Network in Network (NIN) [22] structure includes micro multi-layer perceptrons into the filters of convolutional layers to extract more complicated features. In Deeply Supervised Network (DSN) [20], internal layers are directly supervised by auxiliary classifiers, which can strengthen the gradients received by earlier layers. Ladder Networks [27, 25] introduce lateral connections into autoencoders, producing impressive accuracies on semi-supervised learning tasks. In [39], Deeply-Fused Nets (DFNs) were proposed to improve information flow by combining intermediate layers of different base networks. The augmentation of networks with pathways that minimize reconstruction losses was also shown to improve image classification models [43]. # 3. DenseNets Consider a single image x_0 that is passed through a convolutional network. The network comprises L layers, each of which implements a non-linear transformation H_ℓ(·), where ℓ indexes the layer. H_ℓ(·) can be a composite function of operations such as Batch Normalization (BN) [14], rectified linear units (ReLU) [6], Pooling [19], or Convolution (Conv). We denote the output of the ℓ-th layer as x_ℓ. | 1608.06993#9 | 1608.06993#11 | 1608.06993 | [
"1605.07716"
] |
1608.06993#11 | Densely Connected Convolutional Networks | x_ℓ = H_ℓ([x_0, x_1, ..., x_{ℓ-1}]), (2) where [x_0, x_1, ..., x_{ℓ-1}] refers to the concatenation of the feature-maps produced in layers 0, ..., ℓ-1. Because of its dense connectivity we refer to this network architecture as Dense Convolutional Network (DenseNet). For ease of implementation, we concatenate the multiple inputs of H_ℓ(·) in eq. (2) into a single tensor. Composite function. Motivated by [12], we define H_ℓ(·) as a composite function of three consecutive operations: batch normalization (BN) [14], followed by a rectified linear unit (ReLU) [6] and a 3×3 convolution (Conv). Pooling layers. The concatenation operation used in Eq. (2) is not viable when the size of feature-maps changes. However, an essential part of convolutional networks is down-sampling layers that change the size of feature-maps. To facilitate down-sampling in our architecture we divide the network into multiple densely connected dense blocks; see Figure 2. We refer to layers between blocks as transition layers, which do convolution and pooling. The transition layers used in our experiments consist of a batch normalization layer and a 1×1 convolutional layer followed by a 2×2 average pooling layer. ResNets. Traditional convolutional feed-forward networks connect the output of the ℓ-th layer as input to the (ℓ+1)-th layer [16], which gives rise to the following layer transition: x_ℓ = H_ℓ(x_{ℓ-1}). ResNets [11] add a skip-connection that bypasses the non-linear transformations with an identity function: Growth rate. If each function H_ℓ produces k feature-maps, it follows that the ℓ-th layer has k_0 + k × (ℓ-1) input feature-maps, where k_0 is the number of channels in the input layer. | 1608.06993#10 | 1608.06993#12 | 1608.06993 | [
"1605.07716"
] |
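Editorial sketch of the transition layer described in the chunk above (batch normalization, a 1×1 convolution, then 2×2 average pooling between dense blocks); a minimal PyTorch illustration, not the released code, with argument names chosen for this example.

```python
import torch.nn as nn

def transition_layer(in_channels: int, out_channels: int) -> nn.Sequential:
    # BN -> 1x1 conv -> 2x2 average pooling, halving the spatial resolution
    return nn.Sequential(
        nn.BatchNorm2d(in_channels),
        nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),
        nn.AvgPool2d(kernel_size=2, stride=2),
    )
```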
1608.06993#12 | Densely Connected Convolutional Networks | An important difference between DenseNet and existing network architectures is that DenseNet can have very narrow layers, e.g., k = 12. We refer to the hyperparameter k as the growth rate of the network. We show in Section 4 that a relatively small growth rate is sufficient to x_ℓ = H_ℓ(x_{ℓ-1}) + x_{ℓ-1}. (1) Table 1 (linearized; layers with output size, then per-block layer counts for DenseNet-121 / DenseNet-169 / DenseNet-201 / DenseNet-264): Convolution (112×112): 7×7 conv, stride 2. Pooling (56×56): 3×3 max pool, stride 2. Dense Block (1) (56×56): [1×1 conv, 3×3 conv] ×6 / ×6 / ×6 / ×6. Transition Layer (1): 1×1 conv (56×56); 2×2 average pool, stride 2 (28×28). Dense Block (2) (28×28): [1×1 conv, 3×3 conv] ×12 / ×12 / ×12 / ×12. Transition Layer (2): 1×1 conv (28×28); 2×2 average pool, stride 2 (14×14). Dense Block (3) (14×14): [1×1 conv, 3×3 conv] ×24 / ×32 / ×48 / ×64. Transition Layer (3): 1×1 conv (14×14); 2×2 average pool, stride 2 (7×7). Dense Block (4) (7×7): [1×1 conv, 3×3 conv] ×16 / ×32 / ×32 / ×48. Classification Layer (1×1): 7×7 global average pool; 1000D fully-connected, softmax | 1608.06993#11 | 1608.06993#13 | 1608.06993 | [
"1605.07716"
] |
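Editorial worked example of the growth-rate arithmetic above: layer ℓ of a dense block sees k_0 + k × (ℓ − 1) input feature-maps. The concrete channel count entering DenseNet-121's third block (256, with k = 32 and compression 0.5) is an assumption derived from the linearized Table 1, not a figure quoted from the paper.

```python
def input_channels(k0: int, k: int, layer_index: int) -> int:
    # feature-maps feeding layer `layer_index` (1-based) of a dense block
    return k0 + k * (layer_index - 1)

k0, k, n_layers = 256, 32, 24           # assumed: 3rd dense block of DenseNet-121
print(input_channels(k0, k, n_layers))  # 992 maps feed the block's last layer
print(k0 + n_layers * k)                # 1024 maps leave the block
```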
1608.06993#13 | Densely Connected Convolutional Networks | Table 1: DenseNet architectures for ImageNet. The growth rate for all the networks is k = 32. Note that each "conv" layer shown in the table corresponds to the sequence BN-ReLU-Conv. obtain state-of-the-art results on the datasets that we tested on. One explanation for this is that each layer has access to all the preceding feature-maps in its block and, therefore, to the network's "collective knowledge". One can view the feature-maps as the global state of the network. Each layer adds k feature-maps of its own to this state. The growth rate regulates how much new information each layer contributes to the global state. The global state, once written, can be accessed from everywhere within the network and, unlike in traditional network architectures, there is no need to replicate it from layer to layer. Bottleneck layers. Although each layer only produces k output feature-maps, it typically has many more inputs. It has been noted in [37, 11] that a 1×1 convolution can be introduced as bottleneck layer before each 3×3 convolution to reduce the number of input feature-maps, and thus to improve computational efficiency. We find this design especially effective for DenseNet and we refer to our network with such a bottleneck layer, i.e., to the BN-ReLU-Conv(1×1)-BN-ReLU-Conv(3×3) version of H_ℓ, as DenseNet-B. In our experiments, we let each 1×1 convolution produce 4k feature-maps. Compression. To further improve model compactness, we can reduce the number of feature-maps at transition layers. If a dense block contains m feature-maps, we let the following transition layer generate ⌊θm⌋ output feature-maps, where 0 < θ ≤ 1 is referred to as the compression factor. When θ = 1, the number of feature-maps across transition layers remains unchanged. We refer to the DenseNet with θ < 1 as DenseNet-C, and we set θ = 0.5 in our experiment. When both the bottleneck and transition layers with θ < 1 are used, we refer to our model as DenseNet-BC. | 1608.06993#12 | 1608.06993#14 | 1608.06993 | [
"1605.07716"
] |
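Editorial sketch of the DenseNet-B bottleneck version of H_ℓ described above (BN-ReLU-Conv(1×1) producing 4k maps, then BN-ReLU-Conv(3×3) producing k maps) and of the DenseNet-C compression rule ⌊θm⌋; assumes PyTorch and is not the reference implementation.

```python
import math
import torch.nn as nn

def bottleneck_layer(in_channels: int, growth_rate: int) -> nn.Sequential:
    inter = 4 * growth_rate  # each 1x1 bottleneck produces 4k feature-maps
    return nn.Sequential(
        nn.BatchNorm2d(in_channels), nn.ReLU(inplace=True),
        nn.Conv2d(in_channels, inter, kernel_size=1, bias=False),
        nn.BatchNorm2d(inter), nn.ReLU(inplace=True),
        nn.Conv2d(inter, growth_rate, kernel_size=3, padding=1, bias=False),
    )

def compressed_width(m: int, theta: float = 0.5) -> int:
    # a transition layer emits floor(theta * m) feature-maps, with 0 < theta <= 1
    return int(math.floor(theta * m))
```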
1608.06993#14 | Densely Connected Convolutional Networks | Table 1: DenseNet architectures for ImageNet. The growth rate for all the networks is k = 32. Note that each â convâ layer shown in the table corresponds the sequence BN-ReLU-Conv. obtain state-of-the-art results on the datasets that we tested on. One explanation for this is that each layer has access to all the preceding feature-maps in its block and, therefore, to the networkâ s â collective knowledgeâ . One can view the feature-maps as the global state of the network. Each layer adds k feature-maps of its own to this state. The growth rate regulates how much new information each layer con- tributes to the global state. The global state, once written, can be accessed from everywhere within the network and, unlike in traditional network architectures, there is no need to replicate it from layer to layer. Bottleneck layers. Although each layer only produces k output feature-maps, it typically has many more inputs. It has been noted in [37, 11] that a 1 x 1 convolution can be in- troduced as bottleneck layer before each 3x3 convolution to reduce the number of input feature-maps, and thus to improve computational efficiency. We find this design es- pecially effective for DenseNet and we refer to our network with such a bottleneck layer, i.e., to the BN-ReLU-Conv(1 x 1)-BN-ReLU-Conv(3 x3) version of H», as DenseNet-B. In our experiments, we let each 1x1 convolution produce 4k feature-maps. Compression. To further improve model compactness, we can reduce the number of feature-maps at transition layers. If a dense block contains m feature-maps, we let the following transition layer generate |@m| output feature- maps, where 0 <6 <1 is referred to as the compression fac- tor. When 6 = 1, the number of feature-maps across transi- tion layers remains unchanged. We refer the DenseNet with @<14as DenseNet-C, and we set @ = 0.5 in our experiment. When both the bottleneck and transition layers with 0 < 1 are used, we refer to our model as DenseNet-BC. | 1608.06993#13 | 1608.06993#15 | 1608.06993 | [
"1605.07716"
] |
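Editorial sketch assembling a CIFAR-style DenseNet-BC along the lines of the implementation details above (initial 3×3 convolution with 2k channels, three equal dense blocks, compression 0.5, global average pooling and a softmax classifier). It reuses the hypothetical `bottleneck_layer` and `transition_layer` helpers from the earlier snippets; it is an illustration under those assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class _DenseBlock(nn.Module):
    """n bottleneck layers wired with dense (concatenation) connectivity."""
    def __init__(self, n: int, in_ch: int, k: int):
        super().__init__()
        self.layers = nn.ModuleList(
            [bottleneck_layer(in_ch + i * k, k) for i in range(n)])

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

def densenet_bc_cifar(depth: int = 100, k: int = 12, num_classes: int = 10):
    n = (depth - 4) // 6   # bottleneck layers per block (each holds two convs)
    ch = 2 * k             # initial convolution outputs twice the growth rate
    stages = [nn.Conv2d(3, ch, kernel_size=3, padding=1, bias=False)]
    for b in range(3):
        stages.append(_DenseBlock(n, ch, k))
        ch += n * k
        if b < 2:          # compress with theta = 0.5 between dense blocks
            stages.append(transition_layer(ch, ch // 2))
            ch //= 2
    stages += [nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
               nn.AdaptiveAvgPool2d(1), nn.Flatten(),
               nn.Linear(ch, num_classes)]
    return nn.Sequential(*stages)
```

Under these assumptions, the {L = 100, k = 12} configuration works out to channel widths 24, 216→108, 300→150, and 342 before the classifier — arithmetic of the sketch, shown only to make the growth-rate and compression rules concrete.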
1608.06993#15 | Densely Connected Convolutional Networks | Implementation Details. On all datasets except Ima- geNet, the DenseNet used in our experiments has three dense blocks that each has an equal number of layers. Be- fore entering the ï¬ rst dense block, a convolution with 16 (or twice the growth rate for DenseNet-BC) output channels is performed on the input images. For convolutional layers with kernel size 3à 3, each side of the inputs is zero-padded by one pixel to keep the feature-map size ï¬ | 1608.06993#14 | 1608.06993#16 | 1608.06993 | [
"1605.07716"
] |
1608.06993#16 | Densely Connected Convolutional Networks | xed. We use 1à 1 convolution followed by 2à 2 average pooling as transition layers between two contiguous dense blocks. At the end of the last dense block, a global average pooling is performed and then a softmax classiï¬ er is attached. The feature-map sizes in the three dense blocks are 32à 32, 16à 16, and 8à 8, respectively. We experiment with the basic DenseNet structure with conï¬ gurations {L = 40, k = 12}, {L = 100, k = 12} and {L = 100, k = 24}. For DenseNet- BC, the networks with conï¬ gurations {L = 100, k = 12}, {L = 250, k = 24} and {L = 190, k = 40} are evaluated. In our experiments on ImageNet, we use a DenseNet-BC structure with 4 dense blocks on 224à 224 input images. The initial convolution layer comprises 2k convolutions of size 7à 7 with stride 2; the number of feature-maps in all other layers also follow from setting k. | 1608.06993#15 | 1608.06993#17 | 1608.06993 | [
"1605.07716"
] |
1608.06993#17 | Densely Connected Convolutional Networks | The exact network conï¬ gurations we used on ImageNet are shown in Table 1. # 4. Experiments We empirically demonstrate DenseNetâ s effectiveness on several benchmark datasets and compare with state-of-the- art architectures, especially with ResNet and its variants. Method Network in Network [22] All-CNN [32] Deeply Supervised Net [20] Highway Network [34] FractalNet [17] with Dropout/Drop-path ResNet [11] ResNet (reported by [13]) ResNet with Stochastic Depth [13] Wide ResNet [42] with Dropout ResNet (pre-activation) [12] DenseNet (k = 12) DenseNet (k = 12) DenseNet (k = 24) DenseNet-BC (k = 12) DenseNet-BC (k = 24) DenseNet-BC (k = 40) Depth - - - - 21 21 110 110 110 1202 16 28 16 164 1001 40 100 100 100 250 190 Params - - - - 38.6M 38.6M 1.7M 1.7M 1.7M 10.2M 11.0M 36.5M 2.7M 1.7M 10.2M 1.0M 7.0M 27.2M 0.8M 15.3M 25.6M C10 10.41 9.08 9.69 - 10.18 7.33 - 13.63 11.66 - - - - 11.26â | 1608.06993#16 | 1608.06993#18 | 1608.06993 | [
"1605.07716"
] |
1608.06993#18 | Densely Connected Convolutional Networks | 10.56â 7.00 5.77 5.83 5.92 5.19 - C10+ 8.81 7.25 7.97 7.72 5.22 4.60 6.61 6.41 5.23 4.91 4.81 4.17 - 5.46 4.62 5.24 4.10 3.74 4.51 3.62 3.46 C100 35.68 - - - 35.34 28.20 - 44.74 37.80 - - - - 35.58â 33.47â 27.55 23.79 23.42 24.15 19.64 - C100+ - 33.71 34.57 32.39 23.30 23.73 - 27.22 24.58 - 22.07 20.50 - 24.33 22.71 24.42 20.20 19.25 22.27 17.60 17.18 SVHN 2.35 - 1.92 - 2.01 1.87 - 2.01 1.75 - - - 1.64 - - 1.79 1.67 1.59 1.76 1.74 - Table 2: Error rates (%) on CIFAR and SVHN datasets. k denotes networkâ s growth rate. Results that surpass all competing methods are bold and the overall best results are blue. â +â indicates standard data augmentation (translation and/or mirroring). â indicates results run by ourselves. All the results of DenseNets without data augmentation (C10, C100, SVHN) are obtained using Dropout. DenseNets achieve lower error rates while using fewer parameters than ResNet. Without data augmentation, DenseNet performs better by a large margin. | 1608.06993#17 | 1608.06993#19 | 1608.06993 | [
"1605.07716"
] |
1608.06993#19 | Densely Connected Convolutional Networks | # 4.1. Datasets CIFAR. The two CIFAR datasets [15] consist of colored natural images with 32Ã 32 pixels. CIFAR-10 (C10) con- sists of images drawn from 10 and CIFAR-100 (C100) from 100 classes. The training and test sets contain 50,000 and 10,000 images respectively, and we hold out 5,000 training images as a validation set. We adopt a standard data aug- mentation scheme (mirroring/shifting) that is widely used for these two datasets [11, 13, 17, 22, 28, 20, 32, 34]. We denote this data augmentation scheme by a â +â mark at the end of the dataset name (e.g., C10+). | 1608.06993#18 | 1608.06993#20 | 1608.06993 | [
"1605.07716"
] |
1608.06993#20 | Densely Connected Convolutional Networks | For preprocessing, we normalize the data using the channel means and stan- dard deviations. For the ï¬ nal run we use all 50,000 training images and report the ï¬ nal test error at the end of training. SVHN. The Street View House Numbers (SVHN) dataset [24] contains 32à 32 colored digit images. There are 73,257 images in the training set, 26,032 images in the test set, and 531,131 images for additional training. Following common practice [7, 13, 20, 22, 30] we use all the training data with- out any data augmentation, and a validation set with 6,000 images is split from the training set. We select the model with the lowest validation error during training and report the test error. We follow [42] and divide the pixel values by 255 so they are in the [0, 1] range. ImageNet. The ILSVRC 2012 classiï¬ cation dataset [2] consists 1.2 million images for training, and 50,000 for val- idation, from 1, 000 classes. We adopt the same data aug- mentation scheme for training images as in [8, 11, 12], and apply a single-crop or 10-crop with size 224à 224 at test time. Following [11, 12, 13], we report classiï¬ cation errors on the validation set. # 4.2. Training | 1608.06993#19 | 1608.06993#21 | 1608.06993 | [
"1605.07716"
] |
1608.06993#21 | Densely Connected Convolutional Networks | All the networks are trained using stochastic gradient de- scent (SGD). On CIFAR and SVHN we train using batch size 64 for 300 and 40 epochs, respectively. The initial learning rate is set to 0.1, and is divided by 10 at 50% and 75% of the total number of training epochs. On ImageNet, we train models for 90 epochs with a batch size of 256. The learning rate is set to 0.1 initially, and is lowered by 10 times at epoch 30 and 60. Note that a naive implemen- tation of DenseNet may contain memory inefï¬ | 1608.06993#20 | 1608.06993#22 | 1608.06993 | [
"1605.07716"
] |
1608.06993#22 | Densely Connected Convolutional Networks | ciencies. To reduce the memory consumption on GPUs, please refer to our technical report on the memory-efï¬ cient implementa- tion of DenseNets [26]. Following [8], we use a weight decay of 10â 4 and a Nesterov momentum [35] of 0.9 without dampening. We adopt the weight initialization introduced by [10]. For the three datasets without data augmentation, i.e., C10, C100 Model top-1 top-5 DenseNet-121 25.02 / 23.61 7.71 / 6.66 DenseNet-169 23.80 / 22.08 6.85 / 5.92 DenseNet-201 22.58 / 21.46 6.34 / 5.54 DenseNet-264 22.15 / 20.80 6.12 / 5.29 | 1608.06993#21 | 1608.06993#23 | 1608.06993 | [
"1605.07716"
] |
1608.06993#23 | Densely Connected Convolutional Networks | & 21.5, â 2=ResNets ResNet-34_|â &â DenseNets-BC 265 DenseNt-169: DenséNet"3Q1 ResNet-101 278 a= ResNets â A4~ DenseNets-BC Reshlet-34 25.5 DenteNet-121 ResNet?50°" 24.56 \- Fl ResNet=50 validation error (%) 23.5 ResNet-101 ResNe}~152 22.5 FlesNet~152 DenseNet-264 Denseflet-264 215 04 3. 4 +5 6 7 O5 075 1 125 15 175 2 225 #parameters, x10" #flops x10 # validation error (%) Table 3: The top-1 and top-5 error rates on the ImageNet validation set, with single-crop / 10- crop testing. Figure 3: Comparison of the DenseNets and ResNets top-1 error rates (single-crop testing) on the ImageNet validation dataset as a function of learned parameters (left) and FLOPs during test-time (right). and SVHN, we add a dropout layer [33] after each convolu- tional layer (except the ï¬ rst one) and set the dropout rate to 0.2. The test errors were only evaluated once for each task and model setting. | 1608.06993#22 | 1608.06993#24 | 1608.06993 | [
"1605.07716"
] |
1608.06993#24 | Densely Connected Convolutional Networks | # 4.3. Classiï¬ cation Results on CIFAR and SVHN We train DenseNets with different depths, L, and growth rates, k. The main results on CIFAR and SVHN are shown in Table 2. To highlight general trends, we mark all results that outperform the existing state-of-the-art in boldface and the overall best result in blue. Accuracy. Possibly the most noticeable trend may orig- inate from the bottom row of Table 2, which shows that DenseNet-BC with L = 190 and k = 40 outperforms the existing state-of-the-art consistently on all the CIFAR datasets. Its error rates of 3.46% on C10+ and 17.18% on C100+ are signiï¬ cantly lower than the error rates achieved by wide ResNet architecture [42]. Our best results on C10 and C100 (without data augmentation) are even more encouraging: both are close to 30% lower than Fractal- Net with drop-path regularization [17]. On SVHN, with dropout, the DenseNet with L = 100 and k = 24 also surpasses the current best result achieved by wide ResNet. However, the 250-layer DenseNet-BC doesnâ t further im- prove the performance over its shorter counterpart. This may be explained by that SVHN is a relatively easy task, and extremely deep models may overï¬ t to the training set. Parameter Efï¬ ciency. The results in Table 2 indicate that DenseNets utilize parameters more efï¬ ciently than alterna- tive architectures (in particular, ResNets). The DenseNet- BC with bottleneck structure and dimension reduction at transition layers is particularly parameter-efï¬ cient. For ex- ample, our 250-layer model only has 15.3M parameters, but it consistently outperforms other models such as FractalNet and Wide ResNets that have more than 30M parameters. We also highlight that DenseNet-BC with L = 100 and k = 12 achieves comparable performance (e.g., 4.51% vs 4.62% er- ror on C10+, 22.27% vs 22.71% error on C100+) as the 1001-layer pre-activation ResNet using 90% fewer parame- ters. | 1608.06993#23 | 1608.06993#25 | 1608.06993 | [
"1605.07716"
] |
1608.06993#25 | Densely Connected Convolutional Networks | Figure 4 (right panel) shows the training loss and test errors of these two networks on C10+. The 1001-layer deep ResNet converges to a lower training loss value but a similar test error. We analyze this effect in more detail below. Overï¬ tting. One positive side-effect of the more efï¬ cient use of parameters is a tendency of DenseNets to be less prone to overï¬ tting. We observe that on the datasets without data augmentation, the improvements of DenseNet architec- tures over prior work are particularly pronounced. On C10, the improvement denotes a 29% relative reduction in error from 7.33% to 5.19%. On C100, the reduction is about 30% from 28.20% to 19.64%. In our experiments, we observed potential overï¬ tting in a single setting: on C10, a 4à growth of parameters produced by increasing k = 12 to k = 24 lead to a modest increase in error from 5.77% to 5.83%. | 1608.06993#24 | 1608.06993#26 | 1608.06993 | [
"1605.07716"
] |
1608.06993#26 | Densely Connected Convolutional Networks | The DenseNet-BC bottleneck and compression layers appear to be an effective way to counter this trend. Capacity. Without compression or bottleneck layers, there is a general trend that DenseNets perform better as L and k increase. We attribute this primarily to the corre- sponding growth in model capacity. This is best demon- strated by the column of C10+ and C100+. On C10+, the error drops from 5.24% to 4.10% and ï¬ nally to 3.74% as the number of parameters increases from 1.0M, over 7.0M to 27.2M. On C100+, we observe a similar trend. This sug- gests that DenseNets can utilize the increased representa- tional power of bigger and deeper models. It also indicates that they do not suffer from overï¬ tting or the optimization difï¬ culties of residual networks [11]. # 4.4. Classiï¬ | 1608.06993#25 | 1608.06993#27 | 1608.06993 | [
"1605.07716"
] |
1608.06993#27 | Densely Connected Convolutional Networks | cation Results on ImageNet We evaluate DenseNet-BC with different depths and growth rates on the ImageNet classiï¬ cation task, and com- pare it with state-of-the-art ResNet architectures. To en- sure a fair comparison between the two architectures, we eliminate all other factors such as differences in data pre- processing and optimization settings by adopting the pub- licly available Torch implementation for ResNet by [8]1. 1https://github.com/facebook/fb.resnet.torch | 1608.06993#26 | 1608.06993#28 | 1608.06993 | [
"1605.07716"
] |
1608.06993#28 | Densely Connected Convolutional Networks | 25 16, 16 1 T T T r T T 16 â _ DenseNet â ~ ResNet Test error: ResNet-1001 (10.2M) 400 14 â _ DenseNet-C 14 â DenseNet-BC 14 â Test error: DenseNet-BC-100 (0.8M) â _ DenseNet-B Training loss: ResNet-1001 (10.2M) ~ â DenseNet-Bc}| â . _ -ss.Training loss: DenseNet-BC-100 (0.8M) giz ge ge 1018 Zz ra S 2 £10 S10 S10 2 o oO 3 < 4 bey 3 â ¬ ge 8s 8s 4028 6 6 6 4 4 4 Sore 108 o 1. 2 38 4 5 6 7 8 O 1 2 3 4 6 7 8 0 50 100 750 200 250 300 #parameters x10° #parameters 10° epoch Figure 4: Left: Comparison of the parameter efï¬ ciency on C10+ between DenseNet variations. Middle: Comparison of the parameter efï¬ ciency between DenseNet-BC and (pre-activation) ResNets. DenseNet-BC requires about 1/3 of the parameters as ResNet to achieve comparable accuracy. | 1608.06993#27 | 1608.06993#29 | 1608.06993 | [
"1605.07716"
] |
1608.06993#29 | Densely Connected Convolutional Networks | Right: Training and testing curves of the 1001-layer pre-activation ResNet [12] with more than 10M parameters and a 100-layer DenseNet with only 0.8M parameters. We simply replace the ResNet model with the DenseNet- BC network, and keep all the experiment settings exactly the same as those used for ResNet. We report the single-crop and 10-crop validation errors of DenseNets on ImageNet in Table 3. Figure 3 shows the single-crop top-1 validation errors of DenseNets and ResNets as a function of the number of parameters (left) and FLOPs (right). | 1608.06993#28 | 1608.06993#30 | 1608.06993 | [
"1605.07716"
] |
1608.06993#30 | Densely Connected Convolutional Networks | The results presented in the ï¬ gure reveal that DenseNets perform on par with the state-of-the-art ResNets, whilst requiring signiï¬ cantly fewer parameters and compu- tation to achieve comparable performance. For example, a DenseNet-201 with 20M parameters model yields similar validation error as a 101-layer ResNet with more than 40M parameters. Similar trends can be observed from the right panel, which plots the validation error as a function of the number of FLOPs: a DenseNet that requires as much com- putation as a ResNet-50 performs on par with a ResNet-101, which requires twice as much computation. ResNet architecture (middle). We train multiple small net- works with varying depths on C10+ and plot their test ac- curacies as a function of network parameters. In com- parison with other popular network architectures, such as AlexNet [16] or VGG-net [29], ResNets with pre-activation use fewer parameters while typically achieving better re- sults [12]. Hence, we compare DenseNet (k = 12) against this architecture. The training setting for DenseNet is kept the same as in the previous section. The graph shows that DenseNet-BC is consistently the most parameter efï¬ cient variant of DenseNet. Further, to achieve the same level of accuracy, DenseNet-BC only re- quires around 1/3 of the parameters of ResNets (middle plot). This result is in line with the results on ImageNet we presented in Figure 3. The right plot in Figure 4 shows that a DenseNet-BC with only 0.8M trainable parameters is able to achieve comparable accuracy as the 1001-layer (pre-activation) ResNet [12] with 10.2M parameters. It is worth noting that our experimental setup implies that we use hyperparameter settings that are optimized for ResNets but not for DenseNets. It is conceivable that more extensive hyper-parameter searches may further improve the performance of DenseNet on ImageNet. | 1608.06993#29 | 1608.06993#31 | 1608.06993 | [
"1605.07716"
] |
1608.06993#31 | Densely Connected Convolutional Networks | # 5. Discussion Superficially, DenseNets are quite similar to ResNets: Eq. (2) differs from Eq. (1) only in that the inputs to H¢(-) are concatenated instead of summed. However, the implica- tions of this seemingly small modification lead to substan- tially different behaviors of the two network architectures. Model compactness. As a direct consequence of the in- put concatenation, the feature-maps learned by any of the DenseNet layers can be accessed by all subsequent layers. This encourages feature reuse throughout the network, and leads to more compact models. | 1608.06993#30 | 1608.06993#32 | 1608.06993 | [
"1605.07716"
] |
1608.06993#32 | Densely Connected Convolutional Networks | The left two plots in Figure 4 show the result of an experiment that aims to compare the parameter efï¬ ciency of all variants of DenseNets (left) and also a comparable Implicit Deep Supervision. One explanation for the im- proved accuracy of dense convolutional networks may be that individual layers receive additional supervision from the loss function through the shorter connections. One can interpret DenseNets to perform a kind of â deep supervi- sionâ . The beneï¬ ts of deep supervision have previously been shown in deeply-supervised nets (DSN; [20]), which have classiï¬ ers attached to every hidden layer, enforcing the intermediate layers to learn discriminative features. DenseNets perform a similar deep supervision in an im- plicit fashion: a single classiï¬ er on top of the network pro- vides direct supervision to all layers through at most two or three transition layers. However, the loss function and gra- dient of DenseNets are substantially less complicated, as the same loss function is shared between all layers. Stochastic vs. deterministic connection. There is an interesting connection between dense convolutional net- works and stochastic depth regularization of residual net- works [13]. In stochastic depth, layers in residual networks are randomly dropped, which creates direct connections be- tween the surrounding layers. As the pooling layers are never dropped, the network results in a similar connectiv- ity pattern as DenseNet: there is a small probability for any two layers, between the same pooling layers, to be di- rectly connectedâ if all intermediate layers are randomly dropped. Although the methods are ultimately quite dif- ferent, the DenseNet interpretation of stochastic depth may provide insights into the success of this regularizer. Feature Reuse. By design, DenseNets allow layers ac- cess to feature-maps from all of its preceding layers (al- though sometimes through transition layers). We conduct an experiment to investigate if a trained network takes ad- vantage of this opportunity. We first train a DenseNet on C10+ with L = 40 and k = 12. For each convolutional layer ¢ within a block, we compute the average (absolute) weight assigned to connections with layer s. Figure 5 shows a heat-map for all three dense blocks. The average absolute weight serves as a surrogate for the dependency of a convo- lutional layer on its preceding layers. | 1608.06993#31 | 1608.06993#33 | 1608.06993 | [
"1605.07716"
] |
1608.06993#33 | Densely Connected Convolutional Networks | A red dot in position (, s) indicates that the layer £ makes, on average, strong use of feature-maps produced s-layers before. Several observa- tions can be made from the plot: 1. All layers spread their weights over many inputs within the same block. This indicates that features extracted by very early layers are, indeed, directly used by deep layers throughout the same dense block. 2. The weights of the transition layers also spread their weight across all layers within the preceding dense block, indicating information ï¬ ow from the ï¬ rst to the last layers of the DenseNet through few indirections. 3. The layers within the second and third dense block consistently assign the least weight to the outputs of the transition layer (the top row of the triangles), in- dicating that the transition layer outputs many redun- dant features (with low weight on average). This is in keeping with the strong results of DenseNet-BC where exactly these outputs are compressed. | 1608.06993#32 | 1608.06993#34 | 1608.06993 | [
"1605.07716"
] |
1608.06993#34 | Densely Connected Convolutional Networks | 4. Although the ï¬ nal classiï¬ cation layer, shown on the very right, also uses weights across the entire dense block, there seems to be a concentration towards ï¬ nal feature-maps, suggesting that there may be some more high-level features produced late in the network. # 6. Conclusion We proposed a new convolutional network architec- ture, which we refer to as Dense Convolutional Network (DenseNet). It introduces direct connections between any two layers with the same feature-map size. We showed that DenseNets scale naturally to hundreds of layers, while ex- In our experiments, hibiting no optimization difï¬ | 1608.06993#33 | 1608.06993#35 | 1608.06993 | [
"1605.07716"
] |
1608.06993#35 | Densely Connected Convolutional Networks | culties. Dense Block 1 Dense Block 2 Dense Block 3 Transition layer 1 Transition layer 2. Classification layer 2 4 6 8 ww 2 4 6 8 Ww 1 2 4 6 8 ww Target layer (0) Target layer (/) Target layer (0) Figure 5: The average absolute filter weights of convolutional lay- ers in a trained DenseNet. The color of pixel (s, £) encodes the av- erage L1 norm (normalized by number of input feature-maps) of the weights connecting convolutional layer s to @ within a dense block. Three columns highlighted by black rectangles correspond to two transition layers and the classification layer. The first row encodes weights connected to the input layer of the dense block. DenseNets tend to yield consistent improvement in accu- racy with growing number of parameters, without any signs of performance degradation or overï¬ | 1608.06993#34 | 1608.06993#36 | 1608.06993 | [
"1605.07716"
] |
1608.06993#36 | Densely Connected Convolutional Networks | tting. Under multi- ple settings, it achieved state-of-the-art results across sev- eral highly competitive datasets. Moreover, DenseNets require substantially fewer parameters and less computa- tion to achieve state-of-the-art performances. Because we adopted hyperparameter settings optimized for residual net- works in our study, we believe that further gains in accuracy of DenseNets may be obtained by more detailed tuning of hyperparameters and learning rate schedules. Whilst following a simple connectivity rule, DenseNets naturally integrate the properties of identity mappings, deep supervision, and diversiï¬ ed depth. They allow feature reuse throughout the networks and can consequently learn more compact and, according to our experiments, more accurate models. Because of their compact internal representations and reduced feature redundancy, DenseNets may be good feature extractors for various computer vision tasks that build on convolutional features, e.g., [4, 5]. We plan to study such feature transfer with DenseNets in future work. | 1608.06993#35 | 1608.06993#37 | 1608.06993 | [
"1605.07716"
] |
1608.06993#37 | Densely Connected Convolutional Networks | Acknowledgements. The authors are supported in part by the NSF III-1618134, III-1526012, IIS-1149882, the Of- ï¬ ce of Naval Research Grant N00014-17-1-2175 and the Bill and Melinda Gates foundation. GH is supported by the International Postdoctoral Exchange Fellowship Pro- gram of China Postdoctoral Council (No.20150015). ZL is supported by the National Basic Research Program of China Grants 2011CBA00300, 2011CBA00301, the NSFC 61361136003. We also thank Daniel Sedra, Geoff Pleiss and Yu Sun for many insightful discussions. # References [1] C. Cortes, X. Gonzalvo, V. Kuznetsov, M. Mohri, and S. Yang. | 1608.06993#36 | 1608.06993#38 | 1608.06993 | [
"1605.07716"
] |
1608.06993#38 | Densely Connected Convolutional Networks | Adanet: Adaptive structural learning of artiï¬ cial neural networks. arXiv preprint arXiv:1607.01097, 2016. 2 [2] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei- Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009. 5 [3] S. E. Fahlman and C. Lebiere. | 1608.06993#37 | 1608.06993#39 | 1608.06993 | [
"1605.07716"
] |
1608.06993#39 | Densely Connected Convolutional Networks | The cascade-correlation learn- ing architecture. In NIPS, 1989. 2 [4] J. R. Gardner, M. J. Kusner, Y. Li, P. Upchurch, K. Q. Weinberger, and J. E. Hopcroft. Deep manifold traversal: Changing labels with convolutional features. arXiv preprint arXiv:1511.06421, 2015. 8 [5] L. Gatys, A. Ecker, and M. Bethge. | 1608.06993#38 | 1608.06993#40 | 1608.06993 | [
"1605.07716"
] |